Hello, We have currently set up a hybrid environment for our TX deployments: a DEV environment on an on-premises MSSQL database, and test and prod in Snowflake. I have already come across situations, like hashing, where the built-in functionality is not usable in Snowflake, and if you create a custom field with a hashing algorithm it is not possible to deploy from MSSQL to Snowflake because of the different language required to create the hash. I have also come across this when writing custom scripts. In those cases I have so far been able to create code that works in both environments. Is there any way to handle this more efficiently when running a hybrid setup like this? We are trying to reduce unnecessary costs in Snowflake during development, hence the hybrid setup. I have read some things about Instance variables, but it would be great if there is some kind of best practice for a setup like this.
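To illustrate the kind of dialect difference in question, here is a minimal sketch of a SHA-256 hash expression on each platform; the [Customer] table and [CustomerKey] field are hypothetical, and note that letter case and string encoding can still make the resulting values differ between the two engines:

-- SQL Server (DEV): SHA-256 of a text field, returned as an uppercase hex string
SELECT CONVERT(VARCHAR(64), HASHBYTES('SHA2_256', [CustomerKey]), 2) AS [HashKey]
FROM [dbo].[Customer];

-- Snowflake (TEST/PROD): closest equivalent, returned as a lowercase hex string
SELECT SHA2("CustomerKey", 256) AS "HashKey"
FROM "Customer";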
Hi all, Does anyone have experience with, or managed to do, a “dynamic” kind of flexible multi-key join in TimeXtender? Example like this:
SELECT a.[BUYER_GROUP_CD], b.[BUYER_GROUP]
FROM [DSA].[SYNAPSE_SILVER] a
LEFT JOIN [DSA].[XLS_PURCHASING_GRP] b
  ON a.[BUYERGRP] = b.[BUYER_GROUP]
  OR a.[BUYER_GROUP_DESC] = b.[BUYER_GROUP_ID]
  OR a.[BUYER] = b.[BUYER_GROUP_WORKSPACE_ID]
Hello, We have a case in TimeXtender Classic where we need to read more than 256 columns from the Excel connector. My understanding is that the underlying driver does support it, but by default it will only return the first 256 columns, and thus that is all we see when reading objects from the data source. Does anyone know if there is a way to expose more columns beyond the 256 already shown? Thanks Tim
Designating a field as “raw-only” allows the field to be hidden from the valid version of the table while still being available as the basis for other fields, which can help streamline the valid version of the table. As the name implies, a raw-only field is a field that only exists in the raw version of the table and not the valid one. Raw-only fields will not be displayed in other Data Areas or Deliver instances that only use the valid part of the table. Hiding a field from the valid version of a table can be appropriate for fields that have no other purpose than to be part of other fields, e.g., fields used to create a surrogate key for dimensional modeling. To designate a field as raw-only, right-click the field and click Raw-only field. Raw-only fields are displayed with an “R” in their icon. Raw-only fields cannot be added when dragging fields from one data area to another. Raw-only fields also cannot be added when attempting to drag fields to a Deliver instance.
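As an illustration of the surrogate-key use case, here is a minimal sketch in plain T-SQL; the table and field names are hypothetical, and the hashing approach is just one common way to build such a key:

-- Two raw-only fields, [CompanyCode] and [CustomerNo], combined into a hashed
-- surrogate key; only [CustomerKey] would be exposed in the valid table
SELECT
    CONVERT(VARCHAR(64),
        HASHBYTES('SHA2_256', CONCAT([CompanyCode], '|', [CustomerNo])), 2) AS [CustomerKey]
FROM [DSA].[Customer];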
Hello, Our data source owner team needs to close their Synapse workspaces behind the firewall to resolve the Prisma Cloud alerts. They are therefore collecting all the IP ranges needed for the different access scenarios, for example the Zscaler IP ranges used for access from VPN/Zscaler. In their eyes, TimeXtender is the only solution outside of the Azure resources that needs to access the data, so we need to list those ranges. Do you know where to get these ranges for the TXD cloud portal? Or am I understanding it wrong, and it is just the IP of the machine that hosts the Ingestion server connection? Thank you in advance.
Hi everyone, I’m working with TimeXtender Data Integration version 6848.1 and my ODX storage is an Azure Data Lake using Parquet format. When I trigger a new reload from the source to the ODX (which should create a new incremental file in the background), I get the following error: The table folder 'x' is used by another table with the id 'y'. I found this article that describes the issue: The table folder is used by another table. The suggested solution there is to delete the existing folder in the data lake. Unfortunately, that’s not an option in my case because the source system only holds two weeks of data, while I have built up historical data in the data lake that I cannot afford to lose. This means every time I run a reload, the process fails as it tries to create a new incremental file for that table. Has anyone encountered this scenario and found a workaround that does not involve deleting the existing folder? Is it possible to remap the table or adjust the configuration so the reload can continue with the existing folder without losing the historical data?
I just upgraded TimeXtender to the latest versions for a client (7017 and 7047). In doing so, most of the data sources stopped working unless I upgraded them to the latest versions, so I decided I should just do them all to be consistent. This has broken my CData REST sources because the new drivers are not installed! This is a massive issue, and I was not warned in any significant way about this particular issue in the upgrade. I need to get the new CData drivers; I am not in a position where I can transition multiple CData REST sources as part of an upgrade! This is a production issue and I need to resolve it ASAP!
Hi, We're observing strange behavior with the new enhanced REST data source provider. When we make this call with the old connector or in Postman, we get 197 records back. This is a single payload, one page, with fully populated rows, including a populated PK field. When we make the same call with the enhanced REST data source provider, we receive 589 records. 392 of these have null values in every field except one, a descriptive field that is not otherwise important. Importantly, the PK field of these records is also null. We have two questions: Why is the new REST provider delivering more records than other providers/tools? And how can we effectively filter out these records in the call itself? At the moment it seems we are only able to filter out these almost-empty records in the DSA. Best, Luuk
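For illustration only, the kind of DSA-level filter meant here amounts to something like the following; the table and field names are hypothetical:

-- Keep only rows where the primary key field is actually populated
SELECT *
FROM [DSA].[Rest_Endpoint]
WHERE [PrimaryKeyField] IS NOT NULL;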
Once you have set up your data warehouses and business units and established connections to the sources you want to use, it is time to start designing your data warehouse. This involves transferring data from the source systems to the data warehouse(s) via a staging database or ODX and applying transformations and data cleansing rules along the way. It is common to set up more than one data warehouse to accomplish all of the transformations needed to get your data into the single version of truth that will form the semantic layer. The first data warehouse is normally referred to as the Data Staging Area or DSA. This is the layer where data from your data sources is first loaded, and where you want to make sure all of your primary keys, data types, and, where appropriate, incremental loading rules are configured. The second data warehouse is normally referred to as the Modern Data Warehouse or MDW. This is the layer where the more complex transformations, lookups, and inserts are implemented.
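As an illustration of the kind of transformation that typically lives in the MDW layer, a lookup against another staged table might look like the following plain SQL; all table and field names are hypothetical:

-- Enrich staged sales orders with a customer dimension key in the MDW layer
SELECT
    s.[SalesOrderNo],
    s.[OrderDate],
    s.[Amount],
    c.[CustomerKey]
FROM [DSA].[SalesOrder] AS s
LEFT JOIN [MDW].[DimCustomer] AS c
    ON s.[CustomerNo] = c.[CustomerNo];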
Hi, In an effort to resolve intermittent ODX errors whilst retrieving data from Oracle JDE tables, I am trying to understand whether reducing the ‘concurrent execution threads’ setting in the Oracle data source connections could be a potential solution. I have multiple Oracle connections set up in TX, with each connection pulling a subset of tables (e.g. all the JDE master file tables in one connection, all the transaction-related GL tables in another, etc.). All these connection transfer tasks are then wrapped up in a TX job scheduled to run each evening. The ‘concurrent execution threads’ default value across all the Oracle data connections I had set up was 8. My thinking was that if TimeXtender is running multiple data pulls simultaneously, this could be causing resource contention on the source Oracle server, thereby intermittently triggering data pull failures with generic error messages that don’t precisely indicate the root cause. I have since changed the setting to a lower value to test this theory.
Hello, Since TimeXtender Classic (V25) was announced to be released by the end of Q2 2025, I would like to ask if there is a clarified date when the release is planned. To my knowledge it has not been released as of yet? The more important question to me is the upgrade process. Currently we are using TimeXtender Classic together with the ODX server, and we have a couple of questions: Will there be a detailed guide on upgrading older TimeXtender platforms (20.10.52 in our case) to Classic V25? What will happen with all the data sources in our ODX instance? Since we are using quite a few CData providers, does this mean we will have to manually recreate the data sources? Or will the ODX be deprecated, meaning we need to move to using Business Units? Thanks for the information
Dear All, We have a “Dynamics Business Central (NAV)” data source with multiple adapters (multiple ERPs). We recently needed to “synchronize” the objects to pick up the latest BC changes (new fields, ...). Unfortunately, the “Display name” of some tables/fields changed. 😒
Ex:
- “Lot”: the “Lot No_ Information” table is now named “Lot No. Information”
- “Demand Forecast”: the “Production Forecast Entry” table is now named “Demand Forecast Entry”
Our default was to use “Display name” (instead of Database name) in the data selection and staging deployment. Our main concern is our significant number of SQL views associated with these selected tables/fields, with “hardcoded” names. We know that it is recommended to use parameters to avoid problems with renaming, but we did not implement the parameters in this case due to lack of time and for practicality with the SQL code. 😥 Do we have an alternative solution that avoids significant adjustments to our table/field selections and the associated custom staging SQL views? TXT version: OnP
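To help scope the impact, one option (plain SQL Server metadata querying, not a TimeXtender feature, and the searched name is just an example) is to list the views whose definitions still reference an old name:

-- Find views whose definition contains a hardcoded reference to the old table name;
-- the backslash escapes the underscore so LIKE does not treat it as a wildcard
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS [SchemaName],
       OBJECT_NAME(m.object_id)        AS [ViewName]
FROM sys.sql_modules AS m
JOIN sys.views AS v
    ON v.object_id = m.object_id
WHERE m.definition LIKE '%Lot No\_ Information%' ESCAPE '\';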