Recently active
Hi, I'm using the TimeXtender CSV Data Source provider and was wondering whether it can be combined with either incremental load settings or history settings. I've tried enabling incremental load, both in the ODX and by setting the table as incremental in the first data area, but I receive the error: "The provider does not support incremental load." Interestingly, I don't get that error when I set the table as a history table instead, but when I execute the table, the result is empty. Could this be due to an incorrect natural key configuration on my side, or does the CSV provider simply not support historical loading? Any clarification would be appreciated. Thank you!
Hey everyone, I have the following question: is it possible, at the database level, to set the primary key to the field selected in TX (e.g., "Umkod_ID") instead of the default "DW_ID"? The issue arises because we access our table via PowerApps, and since the app existed beforehand, references in its code were already defined: the app's primary key is the table's "Umkod_ID" field, while in our database the primary key is the system field "DW_ID". Since this table is loaded only once and no transformations occur, the system fields are not particularly important to us in this case. We resolved the issue by populating the table manually, but I'm curious whether anyone knows a workaround that could be applied to similar cases in the future.
Hi, we are currently experiencing a problem with using multiple BC (NAV) adapters at the same time. The two connectors are set up with individual app registrations, as one is in the same Azure tenant from which we kick off the execution and one is in a different tenant altogether. When I execute these tables one at a time, or one source at a time, everything runs smoothly with no issues. The problem is running both sources at the same time: when I try that, it gives the following error: The connectors themselves are set up correctly, as I am able to run them separately. Does anyone have a fix or workaround for this issue? For the record: we are on version 20.10.44.64 and we are using Business Units.
Hi, what is the best-performing way to extract data from SAP Datasphere? I read about an API connection (OData), but is this the 'optimal' way to do it? https://community.sap.com/t5/technology-q-a/how-to-export-data-from-sap-datasphere-or-its-database-sap-hana-cloud-to/qaq-p/13708728 https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html Best regards, Peter
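For context on the OData route: consumption endpoints are typically read page by page with the `$top`/`$skip` system query options. A minimal, hypothetical sketch with only the standard library follows; the base URL and entity names are placeholders, and real requests against SAP Datasphere would additionally need OAuth bearer authentication, which is omitted here.

```python
# Hypothetical sketch of paging an OData consumption endpoint.
# The base URL and entity set below are placeholders, and real requests
# against SAP Datasphere would also need an OAuth bearer token.
import json
import urllib.request


def page_url(base: str, entity: str, top: int, skip: int) -> str:
    """Build an OData URL that requests one page of `top` rows."""
    return f"{base}/{entity}?$top={top}&$skip={skip}"


def fetch_all(base: str, entity: str, token: str, page_size: int = 1000):
    """Yield rows page by page until the service returns a short page."""
    skip = 0
    while True:
        req = urllib.request.Request(
            page_url(base, entity, page_size, skip),
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            rows = json.load(resp).get("value", [])
        yield from rows
        if len(rows) < page_size:
            return
        skip += page_size


print(page_url("https://host/api/v1/dwc/consumption/relational/MySpace",
               "MyView", 2, 4))
# https://host/api/v1/dwc/consumption/relational/MySpace/MyView?$top=2&$skip=4
```

Whether this outperforms other extraction paths depends on page size and network latency, so it is worth benchmarking against the alternatives discussed in the linked SAP threads.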
Hello, we have our data warehouse set up on the TXD Portal. Our data source uses the Azure Data Factory - SQL Server provider, the data loaded into our Ingest instances lands in Azure Data Lake Storage, and the data is then processed in our Prepare instances in an Azure SQL database. The question: some columns coming from the data source are encrypted via the PySpark AES_ENCRYPT function with a 16-byte key. Is there any possibility to decrypt the data on our end, in the Ingest or Prepare instances, for example by adding a customized script somewhere (preferably also in PySpark, using the AES_DECRYPT function)? Otherwise, is there an encryption method that can be applied at the data source so that we can decrypt more easily in TXD on SQL Server? Looking forward to your suggestions. Thank you.
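As a point of reference for anyone answering: Spark SQL's `aes_encrypt` with a 16-byte key defaults to GCM mode, and in that mode the emitted value is a 12-byte IV, then the ciphertext, then a 16-byte authentication tag. A minimal Python sketch of decrypting such a blob outside Spark follows; it uses the third-party `cryptography` package, the key is a placeholder, and this is an illustration of the assumed layout, not TimeXtender functionality.

```python
# Hypothetical sketch: decrypt a value produced by Spark SQL's
# aes_encrypt(value, key) in its default GCM mode with a 16-byte key.
# Assumed layout: 12-byte IV || ciphertext || 16-byte tag.
# Requires the third-party `cryptography` package.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = b"0123456789abcdef"  # placeholder 16-byte (128-bit) key


def aes_gcm_decrypt(blob: bytes, key: bytes = KEY) -> bytes:
    """Split off the 12-byte IV; AESGCM verifies the trailing tag itself."""
    iv, body = blob[:12], blob[12:]
    return AESGCM(key).decrypt(iv, body, None)


# Round trip using the same layout, to show the split is consistent:
iv = os.urandom(12)
blob = iv + AESGCM(KEY).encrypt(iv, b"secret value", None)
print(aes_gcm_decrypt(blob))  # b'secret value'
```

If decryption can instead stay on the Spark side, the inverse `aes_decrypt(col, key, 'GCM')` expression (Spark 3.3+) avoids re-implementing the layout at all; plain SQL Server has no built-in equivalent of this function.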
Hi, I am facing an issue with duplicate records when using incremental load on a table that receives data via a table insert from another table in the same data area. The incremental load selection rule is based on DW_TimeStamp, and there are three primary key (PK) fields in the target table. The issue is that the primary key does not seem to be enforced in the target table, as I am getting duplicate rows. I tested this with another table where data is inserted through a table insert (but without incremental load), and I still see duplicate values in the PK fields; despite setting the table option to "Use instance setting (error)", no error is thrown when duplicates occur. I have tested this behavior in TDI 6935.1 and 6926.1, and the issue persists. My questions are: How do primary keys work with table inserts? Should they prevent duplicates, or do they not function as expected when using table inserts? Can incremental load be achieved using table inserts? If so, what is the necessary configuration?
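The distinction at the heart of this question can be sketched in a language-agnostic way (this illustrates general insert-vs-merge semantics, not TimeXtender internals): a plain insert appends rows without ever consulting the key, so re-delivering the same rows duplicates them, whereas an upsert keyed on the PK columns is idempotent. The field names below are hypothetical.

```python
# Contrast a plain "table insert" (append) with a merge keyed on the
# primary key. Rows are dicts; the PK is the tuple of three key columns.

def table_insert(target: list, rows: list) -> None:
    """Plain append: duplicates slip in if the same PK arrives twice."""
    target.extend(rows)


def merge_on_pk(target: list, rows: list, pk=("k1", "k2", "k3")) -> None:
    """Upsert keyed on the PK columns: last write wins, no duplicates."""
    index = {tuple(r[c] for c in pk): i for i, r in enumerate(target)}
    for r in rows:
        key = tuple(r[c] for c in pk)
        if key in index:
            target[index[key]] = r          # update the existing row
        else:
            index[key] = len(target)
            target.append(r)                # insert a new row


batch = [{"k1": 1, "k2": "A", "k3": 10, "val": "x"}]

appended, merged = [], []
table_insert(appended, batch)
table_insert(appended, batch)   # second delivery duplicates the row
merge_on_pk(merged, batch)
merge_on_pk(merged, batch)      # second delivery is idempotent

print(len(appended))  # 2
print(len(merged))    # 1
```

If the target behaves like the first function rather than the second, the PK is acting as metadata rather than an enforced constraint, which would match the symptoms described above.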
I have an Azure storage account with a blob container that holds folders and files. How can I configure a data source to use it?
Hi, our version is 20.10.52.64. I noticed that the CData connectors have disappeared for new connections, so I want to upgrade to the newest version, which as far as I can tell is 20.10.63.64. Are there things to take into account when upgrading to this version? Can I just run the upgrade script from my version to this newest one? Also, is this version a good choice if I eventually want to upgrade to TimeXtender SaaS in the future? I've read Download TimeXtender 20.10 – TimeXtender Support, so I know about the "backlog issue". I just want to confirm I'm taking the right steps with this version upgrade :-) Thanks!
The Get Schema function is not working for REST endpoints / table flattening. I am using version 8.0.0.0 and get the following message: Any ideas what is happening?