Recently active
Hello, we have our data warehouse set up on the TXD Portal. Our data source uses the Azure Data Factory - SQL Server provider, and the data loaded into our Ingest instances lands in Azure Data Lake Storage. The data is then processed in our Prepare instances, which run on an Azure SQL database. The question: some columns coming from the data source are encrypted with the PySpark aes_encrypt function using a 16-bit key. Is there any way to decrypt that data on our end, in either the Ingest or Prepare instances? For example, could we add a custom script somewhere (preferably also in PySpark, using the aes_decrypt function)? Alternatively, is there an encryption method that could be applied at the data source so that we can decrypt it more easily in SQL Server within TXD? Looking forward to your suggestions. Thank you.
Hi, I am facing an issue with duplicate records when using incremental load on a table that receives data via a table insert from another table in the same data area. The incremental load selection rule is based on DW_TimeStamp, and the target table has three primary key (PK) fields. The problem is that the primary key does not seem to be enforced in the target table: I am getting duplicate rows. I tested this with another table where data arrives through a table insert (but without incremental load), and I still see duplicate values in the PK fields; despite setting the table option to "Use instance setting (error)", no error is thrown when duplicates occur. I have reproduced this behavior in TDI 6935.1 and 6926.1. My questions are: How do primary keys work with table inserts? Should they prevent duplicates, or do they not function as expected when using table inserts? Can incremental load be achieved using table inserts? If so, what are the necessary con
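For reference, this is the behavior a database-level composite primary key would give: a duplicate of the three key values is rejected outright, and an upsert is the usual way to make a repeated load idempotent instead of failing. This is a generic sketch (SQLite, invented table and column names), not TimeXtender's implementation, which may declare PKs as metadata without a physical constraint.

```python
import sqlite3

# Generic illustration: a composite PRIMARY KEY enforced by the database
# rejects duplicate key combinations at insert time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE target (
        key1 TEXT,
        key2 TEXT,
        key3 TEXT,
        payload TEXT,
        PRIMARY KEY (key1, key2, key3)
    )
""")

conn.execute("INSERT INTO target VALUES ('a', 'b', 'c', 'v1')")
try:
    # Same three PK values again: the constraint rejects the row.
    conn.execute("INSERT INTO target VALUES ('a', 'b', 'c', 'v2')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)

# An upsert keeps a repeated (incremental) load idempotent instead of failing:
conn.execute("""
    INSERT INTO target VALUES ('a', 'b', 'c', 'v2')
    ON CONFLICT (key1, key2, key3) DO UPDATE SET payload = excluded.payload
""")
print(conn.execute("SELECT payload FROM target").fetchall())  # [('v2',)]
```

If duplicates survive in the target, the PK is evidently not being enforced as a physical constraint on that table.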
Hi, we are currently experiencing a problem when using multiple BC (NAV) adapters at the same time. The two connectors are set up with individual app registrations: one is in the same Azure tenant from which we kick off the execution, and one is in a different tenant altogether. When I execute these tables one at a time, or one source at a time, everything runs smoothly with no issues. The problem is running both sources at the same time; when I try that, I get the following error: The connectors are set up correctly, as I am able to run them separately. Does anyone have a fix or workaround for this issue? For the record: we are on version 20.10.44.64 and we are using Business Units.
Hi, our version is 20.10.52.64. We noticed that the CData connectors have disappeared for new connections, so we want to upgrade to the newest version, which as far as I can tell is 20.10.63.64. Are there things to take into account when upgrading to this version? Can I simply run the upgrade script from my version to the newest one? Also, is this new version a good choice if I eventually want to move to TimeXtender SaaS in the future? I have read Download TimeXtender 20.10 – TimeXtender Support, so I know about the "backlog issue". I just want to confirm that I am taking the right steps and targeting the right version. :-) Thanks!
The Get Schema function is not working for REST endpoints / table flattening. I am using version 8.0.0.0 and get the following message: Any ideas what is happening?
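For anyone unfamiliar with the term, "table flattening" here means turning a nested REST/JSON response into flat rows so a schema can be derived. A small illustrative sketch (field names are invented, not the connector's actual schema):

```python
import json

# Hedged sketch: flatten a nested REST-style JSON payload into rows.
# The structure and field names are invented for illustration only.
response = json.loads("""
{
  "orders": [
    {"id": 1, "customer": {"name": "Acme"},
     "lines": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"id": 2, "customer": {"name": "Beta"},
     "lines": [{"sku": "C", "qty": 5}]}
  ]
}
""")

def flatten(orders):
    """Explode each nested line item into its own flat row."""
    rows = []
    for order in orders:
        for line in order["lines"]:
            rows.append({
                "order_id": order["id"],
                "customer_name": order["customer"]["name"],
                "sku": line["sku"],
                "qty": line["qty"],
            })
    return rows

for row in flatten(response["orders"]):
    print(row)
```

Get Schema presumably performs this kind of traversal to infer column names and types; the error message itself was not captured in this excerpt.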