TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
Dear Sir,

For one of our customers we need to connect TX to a data lake (CSI Cloud). Our customer is able to connect PowerBI to their data lake with a JDBC → ODBC bridge, a piece of software from ZappySys (JDBC Bridge Driver). This works fine. Now we need to connect with TimeXtender without this third-party JDBC bridge driver, to save our client some ZappySys license money. So I checked the CData components in TimeXtender, but it seems there is no JDBC → ODBC bridge driver available...

Question: is there a way to do this with the best Data Management Platform in the world?

Hope you can shine a light on this. Thanks in advance.

Regards from Amsterdam,
Arthur
Fourpoints TimeXtender Partner
I have a table with a table insert from a view (actually multiple views) underneath it. The problem is that this table is sometimes empty because the data lineage is wrong. It seems that we need to have Object Dependencies for table inserts. Two questions about that:

1. When creating a table insert, should you get a message that "Object Dependencies" are required?
2. Is it enough to check the next child view (in my case) in the "Object Dependencies" if you have one table insert? Or should it be the first child table? Or should all underlying views/tables be included?
Dear Support,

My customer is using AFAS. From this tool they load data into TimeXtender with an API. They have a wish to load the data from AFAS incrementally in the ODX. Is it possible to load incrementally from an API to the ODX server, and what are your suggestions to start with?

Best regards,
Christian Koeken
Hi community,

I have to design a solution for the following issue: every day, there is a new CSV file in the Azure storage account. The file name changes each day, for example:

FormaExtract.net_new_commission_2023-03-01.csv
FormaExtract.net_new_commission_2023-03-02.csv
FormaExtract.net_new_commission_2023-03-03.csv

How could I create an automatic process to load each new file into the table in the DSA? Right now I do this manually, adding a new file every day and processing the data. Another requirement is to create a new column, SourceMainTable, in the DSA table containing the file's name.

Regards,
Ignacio
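If the files land in Azure Data Lake Storage and a serverless SQL pool is available, one way to pick up every matching daily file and capture its name is a wildcard OPENROWSET. This is only a minimal sketch; the storage URL and account/container names below are placeholders, not taken from the question:

```sql
-- Minimal sketch (Synapse serverless SQL): read all matching daily files
-- and expose the originating file name as SourceMainTable.
-- The storage account and container are placeholders.
SELECT
    r.filepath() AS SourceMainTable,  -- full path of the file each row came from
    r.*
FROM OPENROWSET(
    BULK 'https://<account>.dfs.core.windows.net/<container>/FormaExtract.net_new_commission_*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS r;
```

A view like this can then be brought in as a source; alternatively, the flat-file data source options in TimeXtender may support file-name wildcards so all daily files land in one ODX table.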
We have a Dynamics 365 Finance and Operations system, where the data lake export has been running for over a year. The setup is similar to Joseph's sketched solution for the following question: Dynamics 365 F&O Data Lake as a TX source | Community (timextender.com).

We have it running quite stably on one of our boxes, but are struggling with stability on a pre-production box. The goal is to use Azure AD Integrated Authentication, and the user running the scheduler has been given access to the serverless SQL database (tested in SSMS). First we experienced some missing prerequisites when using Azure AD auth:

Execute ODX d365fo_dl.ACOJournalTable_BR ADO.NET Transfer: Error: Failed - Execute ODX d365fo_dl.ACOJournalTable_BR ADO.NET Transfer 'Failed' Unable to load adalsql.dll (Authentication=ActiveDirectoryIntegrated). Error code: 0x2. For more information, see http://go.microsoft.com/fwlink/?LinkID=513072 Details: SQL Server: 'import-d365fo-data
Dear Reader,

Dynamic Row Level Security (DRLS) can be implemented by defining a table that contains 1. the values for the column to be secured and 2. the users' email addresses. In this specific case, two DRLS columns are defined, as shown in the first picture below: the first column (_Retailer_Key). The second column to be secured is also dynamically defined (_CountryAccess_Key).

This works. However, suppose I have four country/retailer combinations:

Country | Retailer
UK      | YourOwn
NL      | YourOwn
UK      | Theirs
GE      | Unsere

If the _CountryAccess_Key defined by DRLS is "UK" and the Retailer is YourOwn, then the rows in red font are returned. This is just (_CountryAccess_Key OR _Retailer_Key). However, that is not the required result. The result should be (_CountryAccess_Key AND _Retailer_Key), as depicted below, so that only one row (in red font) is returned:

Country | Retailer
UK      | YourOwn   <- only this row should be returned
NL      | YourOwn
UK      | Theirs
GE      | Unsere
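For reference, the difference between the two behaviours comes down to how the security table is joined to the data. A minimal T-SQL sketch of the desired AND semantics, with every table and column name hypothetical (this is not TimeXtender's generated security code):

```sql
-- Sketch: both security keys must match on the SAME security-table row
-- for a data row to be visible (AND semantics). All names are placeholders.
SELECT f.*
FROM dbo.FactSales AS f
INNER JOIN dbo.DynamicSecurity AS s
    ON  s.[_Retailer_Key]      = f.[_Retailer_Key]
    AND s.[_CountryAccess_Key] = f.[_Country_Key]
WHERE s.UserEmail = SUSER_SNAME();  -- the signed-in user
```

Two DRLS filters applied independently behave like an OR; putting both key columns on one security-table row and matching them in a single join condition gives the AND behaviour.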
In my data source a new field [geregistreerd] has been added. After synchronizing the data source and a transfer, the field is available in the ODX.

Go to the MDW
Select the mapping of the table: the field is not available in the Data Movement
Add a new field in the MDW table
Add a transformation: the field is available in the Data Fields

Why is it not available in the Data Movement?
I have loaded a table with some initial data (= initial stock). The key is quite simple: Company, Warehouse and Item. Now I want to update the amounts every day so that the table reflects the actual stock level of that day. The initial load was done from one source; the updates come from another source. History settings have the natural key set, and all non-lookup fields are updated (type 1). Nevertheless, the record is not updated. Any idea why the update is not working?
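For reference, a type-1 history table behaves roughly like the MERGE below: the UPDATE only fires when every natural key column matches exactly. This is an illustrative sketch with hypothetical names, not TimeXtender's actual generated code; if the second source delivers key values with different casing, trailing spaces, or data types, rows arrive as new inserts instead of updates.

```sql
-- Illustrative sketch of type-1 behaviour (hypothetical object names):
-- a row updates only when the full natural key matches exactly.
MERGE dbo.Stock AS tgt
USING stage.StockUpdates AS src
    ON  tgt.Company   = src.Company
    AND tgt.Warehouse = src.Warehouse
    AND tgt.Item      = src.Item
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount          -- type 1: overwrite in place
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Company, Warehouse, Item, Amount)
    VALUES (src.Company, src.Warehouse, src.Item, src.Amount);
```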
Hi,

I have created a semantic model in our development environment with a global database. In development the model is named Sales DEV and in production it is named Sales. When I make changes to the model and do a multiple environment transfer, the data in the production model is deleted and the users cannot use the model before I have executed the model in the production environment. I expected the creation of an offline model, which could be executed at a later time. Instead, our users cannot access data for a period of time, which is unacceptable to our business. How can I avoid this, so our users do not experience downtime on the production model?

Kind regards,
Rasmus Høholt
Hi,

It's possible to create new schemas to set rules for access. Are there other reasons, beyond that, for creating new schemas? What are the pros and cons of extra schemas in a Data Area in version 21 (6xxx)? All knowledge and experience in the field is gratefully received.

BR,
Anders
Hello,

In order to be able to answer all related questions regarding GDPR, we are designing a TX solution that allows us to track all tables/columns that contain sensitive information. Most importantly, we make use of tags so we can track the columns throughout the DWH. An additional question we got is whether it is possible to keep track of changes in the TimeXtender projects. In other words: who added which column (with sensitive data) to the project, and when? What is the best way to keep track of these things, and is there a solution available in TimeXtender that we can use to provide this information? We should probably use the data in the TX repository; we were curious whether you have had this question before.
Dear sir,

In TX version 20 you can easily go back in time to an earlier version of your TX project. I am looking for a way to go back in time to an earlier version of the MDW/SSL layer in TX version 21. I am running TimeXtender version 6143.1. Is this possible in the latest 6143.1 TX version?

Regards,
Arthur
We are using multiple semantic layers, and will be adding more in the future. All semantic layers will probably need a calendar dimension, and some other dimensions are also shared between layers. Is there any 10x method for copying between semantic layers? It seems quite inefficient to set up:

sort by
summarize by
category

n times on the same table.
TX: 6143.1

I have a TimeXtender SQL data source version 18.104.22.168 that I am trying to upgrade to 22.214.171.124. When I do Manage Data Sources in the ODX Server from the TimeXtender application, it tells me there is an update available. When I apply this, it tells me to 'Edit the data source' to update the connection string. There does not seem to be an obvious way to do this. The User Portal also does not seem to have a way to update this. What is the process I should apply?
Hello,

Current setup: TimeXtender 126.96.36.199 and ODX 20.10.31.

I need help setting up incremental load on a rather different type of data flow from the standard one, which has table inserts and views in between tables. The flow is as follows: Table A (has incremental load today) => View1 => Table B => Table C (MDW), once directly but also through View2. Both views are necessary, so we cannot remove them. My question is how to set up incremental load in TimeXtender on this solution, if it is even possible; I am not sure how to do it through a view. Currently there is only an incremental load set up on Table A, and then we fully load tables B and C.

Thank you
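One pattern worth exploring, as a sketch under assumptions rather than a confirmed TimeXtender feature: if View1 passes Table A's DW_TimeStamp through unchanged, a downstream table can apply a selection rule on that column so only recent rows flow through the view. All object names below are hypothetical.

```sql
-- Hypothetical sketch: the view exposes the upstream load timestamp so a
-- downstream incremental/selection rule can filter on it.
CREATE OR ALTER VIEW dsa.View1 AS
SELECT
    a.SomeKey,
    a.SomeAmount,
    a.DW_TimeStamp        -- pass the load timestamp through the view
FROM dsa.[Table A] AS a;
-- Table B could then select only rows whose DW_TimeStamp is later than
-- Table B's last load.
```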
Hi,

I am using the ODX server to get data from a SQL Server database. After executing the transfer task, I am getting the error below. Any idea how to solve this?

"System.Data.SqlClient.SqlException (0x80131904): The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart."
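For context, DATEDIFF returns an int, so it overflows when the two dates are more than roughly 68 years apart at second precision (or about 24 days at millisecond precision); a common trigger is an extreme sentinel date such as 9999-12-31 in the source data. A quick illustration:

```sql
-- Overflows: ~3.9 billion seconds exceeds the int range DATEDIFF returns.
SELECT DATEDIFF(second, '1900-01-01', '2023-01-01');

-- Works: a less precise datepart keeps the result inside the int range...
SELECT DATEDIFF(minute, '1900-01-01', '2023-01-01');

-- ...or use DATEDIFF_BIG (SQL Server 2016+), which returns bigint.
SELECT DATEDIFF_BIG(second, '1900-01-01', '2023-01-01');
```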
Hi Support,

We are using an ADO data source connection to load data in the ODX into Azure. From there we map the ODX table to the next layer in TimeXtender. This seemed to work. But now we have changed the source table, with added and removed columns. We sync and transfer it in the ODX, and when we use preview in the ODX we see the new table as expected. But when we map it in TimeXtender again to the next layer, the changes are not picked up. Instead, it maps to what used to be there: it maps fields to columns that no longer exist, and it does not map the newly added columns. We tried clearing the files in Azure to start fresh, but it still maps to the older version that no longer exists.

Why would it still map to the old table that no longer exists in the source and no longer exists in Azure? Why would it not map to the new table that's in Azure now? Is there some metadata in TimeXtender that first has to be cleaned before changes can be mapped?

Kind regards,
Tamim
After attempting to add a new column to my history table, I cannot get the table deployed and executed anymore. The new column is currently not in the table, but I also can't get back to the "old" historical table without the new column. I have tried the following things so far, without result:

Set the project version back to an earlier version where the change wasn't made yet
Upgrade the PROD SQL DB size and deploy again

It does not get beyond the 'Valid Table structure' step when deploying the historical table. When previewing I also get the following error messages:

The table you are trying to preview has not been deployed yet. Deploy first.
Invalid column name 'SCD Type I Hash Key'.
Details: SQL Server: 'c5d03ed4f0ca.tr30953.westeurope1-a.worker.database.windows.net,11052'
SQL Procedure: ''
SQL Line Number: 1
SQL Error Number: 207 Invalid column name
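As a first diagnostic (a sketch, with the table name as a placeholder): SQL error 207 means the preview SQL references a column the physical table does not have, so it can help to list the deployed table's actual columns and see whether the hash-key column was ever created.

```sql
-- Placeholder table name: list the physical columns of the history table
-- to check whether 'SCD Type I Hash Key' exists in the deployed structure.
SELECT c.name, t.name AS type_name
FROM sys.columns AS c
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.[MyHistoryTable]')
ORDER BY c.column_id;
```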