Ask questions about TimeXtender Data Warehouse Instances
- 139 Topics
- 531 Replies
While running trials of TimeXtender, it is fairly common to run against SQL Server Developer Edition. As SQL Server 2022 has been GA since last November, this is the version people will generally be installing; finding 2019 is fairly hard these days. I have noticed that the 20.10.37 ODX Server is perfectly happy with it, but that TimeXtender 20.10.39 gave me an “Unsupported version of SQL” error, even with databases running at the SQL 2019 compatibility level. Is this supposed to be the case? Azure SQL DB is 2022 under the hood, even if SELECT @@VERSION states something else.
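For anyone hitting the same check, a quick way to confirm the actual engine version independently of the database compatibility level (a diagnostic sketch, not anything TimeXtender-specific) is:

    -- Compatibility level only changes T-SQL behavior; the engine version is separate.
    SELECT
        SERVERPROPERTY('ProductVersion')      AS EngineVersion,   -- e.g. 16.0.x = SQL Server 2022
        SERVERPROPERTY('ProductMajorVersion') AS MajorVersion,
        compatibility_level
    FROM sys.databases
    WHERE name = DB_NAME();

If TimeXtender's check keys off the engine version rather than the compatibility level, this would explain why running databases at the 2019 compatibility level does not help.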
I'm following the steps explained on this page https://support.timextender.com/data-sources-112/connect-to-excel-files-with-excel-online-628#Connect+with+the+Excel+Online+CData+provider, but I couldn't get it to work. I used the Microsoft SharePoint Excel provider. I have 48 Excel files, which I uploaded to my company's OneDrive, and I then authorized CData via OAuth. But I got this error: ‘You must specify an Excel file: Set the FILE property to an .xlsx file.’ The URL of the folder where the Excel files are located looks like this: https://companyname-my.sharepoint.com/:f:/g/personal/xxx_companyname_onmicrosoft_com/xxxxxxxxxxxxxxxxxxxx I have entered this address in the 'URL' field under 'Authentication'. Is this correct? Or should I specify it in the 'Folder' and 'File' fields in the 'Connection' section? And which Auth Scheme should I choose?
I have a table with a table insert from a view (actually multiple views) underneath it. The problem is that this table is sometimes empty because the data lineage is wrong. It seems that we need to set Object Dependencies for table inserts. Two questions about that: when creating a "Table Insert", shouldn't there be a message that "Object Dependencies" are required? And is it enough to check the next child view (in my case) in the "Object Dependencies" if you have one table insert? Or should it be the first child table? Or should all underlying views/tables be included?
In my data source a new field [geregistreerd] was added.
- After synchronizing the data source and a transfer, the field is available in the ODX.
- Go to the MDW.
- Select the mapping of the table. The field is not available in the Data Movement.
- Add a new field in the MDW table.
- Add a transformation. The field is available in the Data Fields.
Why is it not available in the Data Movement?
I have loaded a table with some initial data (== initial stock). The key is quite simple: Company, Warehouse and Item. Now I want to update the amounts every day so that the table reflects the actual stock level of that day. The initial load was done from one source; the updates come from another source. The history settings have the natural key set, and all non-lookup fields are updated (type 1). Nevertheless, the record is not updated. Any idea why the update is not working?
Hi, it's possible to create new schemas to set rules for access. Are there other reasons, beyond that, for creating new schemas? What are the pros and cons of extra schemas in a Data Area in version 21 (6xxx)? All knowledge and experience in the field is gratefully received. BR, Anders
Hello, in order to be able to answer all related questions regarding GDPR, we are designing a TX solution that allows us to track all tables/columns that contain sensitive information. Most importantly, we make use of tags so we can track the columns throughout the DWH. An additional question we got is whether it is possible to keep track of changes in the TimeXtender projects. In other words, who added which column (with sensitive data) to the project, and when? What is the best way to keep track of these things, and is there a solution available in TimeXtender that we can use to provide this information? We should probably use the data in the TX Repository; we were curious whether you have had this question before.
Dear sir, in TX version 20 you can easily go back in time to an earlier version of your TX project. I am looking for a way to go back to an earlier version of the MDW/SSL layer in TX version 21. I am running TimeXtender version 6143.1. Is this possible in the latest 6143.1 TX version? Regards, Arthur
Hello, current setup: TimeXtender 184.108.40.206 and ODX 20.10.31. I need help setting up incremental load on a rather different type of data flow from the standard one, which has table inserts and views in between tables. The flow is as follows: Table A (has incremental load today) => View1 => Table B => Table C (MDW), once directly but also through View2. Both views are necessary, so we cannot remove them. My question is how to set up incremental load in TimeXtender on this solution, if it is even possible; I am not sure how to do it through a view. Currently there is only an incremental load set up on Table A, and we then fully load tables B and C. Thank you
After attempting to add a new column to my history table, I cannot get the table deployed and executed anymore. The new column is currently not in the table, but I also can't get back to the "old" historical table without the new column. I have tried the following things so far, without result: set the project back to an earlier version where the change wasn't made yet; upgrade the PROD SQL DB size and deploy again. Deployment does not get beyond the "Valid Table structure" step for the historical table. When previewing, I also get the following error messages:
The table you are trying to preview has not been deployed yet. Deploy first.
Invalid column name 'SCD Type I Hash Key'.
Details:
SQL Server: 'c5d03ed4f0ca.tr30953.westeurope1-a.worker.database.windows.net,11052'
SQL Procedure: ''
SQL Line Number: 1
SQL Error Number: 207 Invalid column name
Hi, we are receiving some strange error messages when executing our project, which uses Azure Data Factory to copy tables from our Data Lake storage to the Azure SQL DB (ODX → DWH). The error can be found in the attachments. When inspecting the pipeline run on ADF, it seems to be missing the configured activities (see second attachment). When executing the table manually from TX, the pipeline runs successfully. The pipeline also seems correctly configured on ADF, and the pipeline run that is created when executing the table manually shows the correct activities. Any ideas as to what might be causing this? Best regards, Kaj
v220.127.116.11
Hi, I have a project that is loading data from the ledger table into the staging database, with approx. 5M records. When manually executing the load, the ADO.NET transfer takes 5-6 minutes, but the cleansing rules can take hours, to the point where I have to terminate the load. Is there any troubleshooting I can perform on the cleansing steps? I know I have taken the easy option on joins where the data types differ, and I wonder if that is now compounding the issue. Any tips or pointers greatly appreciated. Thanks, Richard
Hi, I would like to do a “range lookup”. Because I can't solve it using a conditional lookup, I used a custom field transformation. My code looks like this:

    CASE
        WHEN SUBSTRING([SBIcode_cleansed], 1, 1) LIKE '[^0-9]' THEN
            (SELECT [kvk_groep_omschrijving]
             FROM [dsa].[SBIcode_groepering]
             WHERE [SBIcode] = [SBIcode_cleansed])                    --'test1'
        ELSE
            (SELECT [kvk_groep_omschrijving]
             FROM [dsa].[SBIcode_groepering]
             WHERE SUBSTRING([SBIcode_cleansed], 1, 2) >= [min_sbi_code]
               AND SUBSTRING([SBIcode_cleansed], 1, 2) <= [max_sbi_code]) --'test2'
    END

It seems that a subquery is not possible here: I get an error message that the subquery returned more than one value, which is not allowed. Testing it in SSMS works fine. (I am not using BETWEEN because the above seems to work faster when testing.)
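That error usually means at least one [SBIcode_cleansed] value matches several rows in [dsa].[SBIcode_groepering] (overlapping ranges, or duplicate [SBIcode] values). A sketch of a workaround, assuming the table and column names from the post, is to force each branch to return a single row:

    CASE
        WHEN SUBSTRING([SBIcode_cleansed], 1, 1) LIKE '[^0-9]' THEN
            (SELECT TOP 1 [kvk_groep_omschrijving]
             FROM [dsa].[SBIcode_groepering]
             WHERE [SBIcode] = [SBIcode_cleansed])
        ELSE
            (SELECT TOP 1 [kvk_groep_omschrijving]
             FROM [dsa].[SBIcode_groepering]
             WHERE SUBSTRING([SBIcode_cleansed], 1, 2) >= [min_sbi_code]
               AND SUBSTRING([SBIcode_cleansed], 1, 2) <= [max_sbi_code]
             ORDER BY [min_sbi_code])  -- make the chosen row deterministic
    END

Note that TOP 1 only masks the duplicates; checking the lookup table for overlapping [min_sbi_code]/[max_sbi_code] ranges first would be the cleaner fix.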
If you add a new field with the deploy-only option to an existing table with data, there are two different behaviors, depending on whether the table has an incremental rule defined or not. This is controlled by this setting: by default it is cleared for “normal” tables and set for incremental tables. I think the behavior should always be the same, or at least a warning message should be shown.
I've detected that if you have the same table in DSA and MDW, say DSA.Invoices and MDW.Invoices for example, and you define partitioning for both, the two tables share the same partition schemes and partition functions in the SQL Server database. This is a problem when you try to add or remove a field from either table.
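One way to confirm the sharing is to query the catalog views directly (a diagnostic sketch against the standard SQL Server metadata, not TimeXtender's repository):

    -- List each partitioned table together with the partition scheme and
    -- function behind its clustered index or heap, so shared objects show up.
    SELECT
        SCHEMA_NAME(t.schema_id) + '.' + t.name AS TableName,
        ps.name AS PartitionScheme,
        pf.name AS PartitionFunction
    FROM sys.tables t
    JOIN sys.indexes i              ON i.object_id = t.object_id AND i.index_id IN (0, 1)
    JOIN sys.partition_schemes ps   ON ps.data_space_id = i.data_space_id
    JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
    ORDER BY pf.name, TableName;

If DSA.Invoices and MDW.Invoices appear with the same scheme and function names, any ALTER that requires rebuilding the partitioning will affect both.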
I discovered that when using the replace transformation with the CharType, you are required to know or look up the int value of the character you want to replace and use that as the parameter. This does not seem to be documented anywhere, nor useful in any way I can think of. I assume REPLACE('abc', CHAR(65), 'b') is functionally equivalent to REPLACE('abc', 'a', 'b'); in a case-insensitive collation there is no difference between the two anyhow. There is also no COLLATION or binary type support, which are things you can do with REPLACE() that might actually be useful in certain rare cases.
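The assumed equivalence is easy to check directly in T-SQL (using CHAR(97), the code for lowercase 'a', to sidestep the collation question):

    -- CHAR(97) is 'a', so under any collation these two calls are identical.
    SELECT
        REPLACE('abc', CHAR(97), 'b') AS ViaCharCode,  -- 'bbc'
        REPLACE('abc', 'a', 'b')      AS ViaLiteral;   -- 'bbc'

With CHAR(65) ('A') the two only coincide under a case-insensitive collation, as the post notes.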