TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
Hi, I am working on a column that evaluates a date against GetDate() and returns a 1 or 0. The table can be loaded incrementally. I assumed that if I turned on "Keep field values up-to-date", this column would always be recalculated at load time, but it seems that this is not true. Is there a way to make these two work together, or do I have to give up on incremental loading? Thanks
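For context, the logic in question is essentially the following minimal T-SQL sketch (dbo.Orders_V, DueDate, and IsOverdue are placeholder names, not from the post). Under incremental load, field transformations only run for new or changed rows, so rows loaded earlier keep a stale flag unless something like this full-table UPDATE runs on every execution, for example as a custom post-script:

    -- Placeholder sketch: recalculate a GETDATE()-based flag for ALL
    -- rows in the valid table, not just the incrementally loaded ones.
    UPDATE dbo.Orders_V
    SET    IsOverdue = CASE WHEN DueDate < GETDATE() THEN 1 ELSE 0 END
    WHERE  IsOverdue <> CASE WHEN DueDate < GETDATE() THEN 1 ELSE 0 END;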
Good day, I have twice completed a differential deployment of a TimeXtender project, and a valid table that is not listed in the review tasks is still being re-created on deployment. Is there functionality in TimeXtender that would re-deploy a table on deployment if it is listed as a deployed object dependency? Kerry
Good day, I have two TimeXtender projects. In Project A I have an execution package that, once completed, runs an external executable to start an execution package in Project B. In Project B, the execution package is configured to not allow any other concurrent packages, including the one called from Project A, while it is running. I am finding that this setting is being ignored when the execution package for Project B is called from Project A. Has anyone come across cross-project concurrent-package settings being ignored, and how did you overcome this? Kerry
Hi team, In the business units and data warehouses, deletions in an incremental table are identified by the system field IsTombstone = 1. You can use this field in a data selection rule to filter out deletions between the business unit and the DSA. This field is not present in ODX Server tables with incremental load and delete handling enabled. Is there a way to identify records that have been deleted from incremental tables in ODX Server? And if not, what is the purpose of enabling deletion handling in ODX Server? Kind regards, Andrew - E-mergo
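For reference, the business-unit pattern being described boils down to a selection rule like this (dbo.Customers_R is a placeholder raw-table name):

    -- Business unit / DSA pattern: exclude deleted records. The
    -- IsTombstone system field is set to 1 for rows that delete
    -- handling has detected as removed from the source.
    SELECT *
    FROM   dbo.Customers_R
    WHERE  IsTombstone = 0;

The question is whether an equivalent marker exists in ODX Server storage.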
I have made a new app registration in Azure for TimeXtender. I granted it exactly the same rights as the old app registration, whose secret is expired. Error in TX: AADSTS7000222: The provided client secret keys for app 'OLD_APP_ID' are expired. I filled in the NEW_APP_ID in the data source and the global environment, but the error is still there. I restarted the ODX server and removed the pipelines, but still the same issue. How can I force it to use the new application ID? I have changed it in all the places it is used (with the new secret as well, of course).
Hi! I am experiencing an issue with ADO.NET transfer times on incrementally loaded tables, where the ADO.NET transfer takes the same amount of time regardless of the number of new rows coming into the table. A full load of the table, containing about 45 million rows, takes about 25 minutes; on the next incremental load the load time is still the same, with ADO.NET taking up around 24 of those minutes. Our current data flow is: SQL Server → TimeXtender → Azure elastic pool (this is where all of our TimeXtender databases reside). Full load: 45 million rows. Incremental load: 11 thousand new rows in the _R table. Even if I have 0 rows in the _R table, the ADO.NET transfer time is the same. Has anyone experienced a similar issue? My best guess is that the problem resides in the Azure elastic pool, where the ADO.NET transfer is being throttled. Thank you!
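One way to test the throttling hypothesis is to watch the resource DMV in the affected Azure SQL database while the transfer runs; sustained data or log I/O near 100% would point at the elastic pool as the bottleneck. A sketch, to be run in the target database:

    -- Azure SQL Database resource usage, one row per ~15 seconds.
    -- Sustained avg_data_io_percent or avg_log_write_percent near
    -- 100 indicates the pool is throttling I/O.
    SELECT end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM   sys.dm_db_resource_stats
    ORDER  BY end_time DESC;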
Hi community, I'm using version V6221.1. I'd like to filter all tables in the data source on the column 'dataAreaId'. I tried to set this up using the following settings: Do I really need to set this filter on every table I load from this data source? Or is this a bug?
Hey, I face this issue quite frequently and have not found out how to solve it yet, so I'm hoping someone on here can tell me how to do this. I have a REST data source in which I make a selection of columns. I drag that table into the DSA layer, rename some columns and voilà, I have my table. Now I have found out that I need some additional columns from the data source, so I go into my ODX and include those columns in the table selection as well. The transfer is successful and I have my data with the additional columns. Now here comes my problem. I want to sync my table with the data source and add those additional columns to my existing table. When I right-click-drag the table into the DSA layer I can sync, smart sync, or sync only with existing fields. The first option adds all columns to the bottom of my table (including the ones I already have). The other two options only sync the columns that I already have… Hope that someone can help me out, as I face this often in development.
Hi, each time we do a deployment from DEV to PROD, the TX scheduler stops working. We use an older version of TX (188.8.131.52), but still: the scheduler uses a wake-up step that will deploy and execute (D&E) one table (retries = 3, retries per step = 3, retry delay in minutes = 5). If the step is successful, the next step is executed. But since we deployed from DEV to PROD, the scheduler is not starting anymore. I have removed the wake-up step and created a new one, and have rebooted the PROD machine (Azure), but still no result. Can you please tell me what needs to be done? It is urgent. Thanks!! Mrt
I am using ODX version 6143.1. When loading a table from a source system, incremental load on a datetime column (with an offset of 7 days) works fine. But when I set up the incremental load rule with Detect deletes and Detect updates selected, the first incremental load (which is the initial load) runs successfully, but a second load (which should be the increment) gives me a 404 HTTP error: Response status code does not indicate success: 404 (The specified path does not exist.). Edit: the problem only arises when setting the Delete detection option. When I remove this checkbox, the 404 error disappears (so the Detect primary key updates option does work correctly).
Dear community and support, This morning my job scheduling broke and it does not seem to go online anymore. I'm running the new 6221 version and I have three issues: I have an 'invalid' job. I've reset the services and re-added the instances to the execution server, but nothing seems to fix it. At first I was not able to add data warehouse execution packages to a job, but then I read that you can't schedule ODX and data warehouse executions in one job. So I stopped the ODX Server service on Windows and was then able to add the jobs and run them. This was going fine until I showed my colleagues. How can I fix the job? I cannot add data warehouse executions to my job. Most of the time I cannot see the packages. I find the packages when I deselect 'Hide objects that can't be added', but I cannot add them. For a brief time I was able to add them, when I stopped the ODX Server service, but this must have been a fluke, as I cannot imagine that I have to stop services to get a certain result.
I am facing a problem when creating a job in the new TX version. I have two instances, called 'Ontwikkel' (dev) and 'Productie' (production). I have copied my dev instance to production and now I want to set up a job to execute the production instance and several SSLs in production. I have created an execution package with all the MDW tables that need to be executed, and this package was copied to the production instance. I also have several SSLs that point to the production instance as their source. Now, when I add a new job (in the production environment), all looks well. I add the package and the SSLs and schedule the job. After that, when I re-open the job, suddenly all selected execution objects have changed to the DEV (ONTW) version of the same objects. Am I doing something wrong, or is this a (pretty nasty) bug?
I’m regularly running into the “Cannot open server 'sql-instances-prod' requested by the login. Client with IP address '184.108.40.206' is not allowed to access the server.” error message, especially during overnight executions by the execution scheduler. But I can’t replicate the connection issue; it seems to be intermittent. Has anybody else encountered this issue, and do you know what would be causing it / what the workaround might be?
Dear Community, I'm working with TimeXtender 20.10.22 and my source is a business unit with the NAV (Business Central) adapter. I've switched the source in my project from Production (BC) to Acceptance (BC). Now I'm getting this error and I cannot do anything anymore. Switching back does not work either. Can anyone help me? Thanks! tableAccount is not contained. Parameter name: tableAccount Details: tableAccount is not contained. ... Module: timeXtender System.ArgumentException at TimeXtender.DataManager.TableUsageMatrix.GetTableUsage(Table_NAV tableAccount, Account account) at TimeXtender.DataManager.Adapter_NAV.GetExpectedDeployedObjectsPrivate(Table_NAV tableToDelpoy, Guid projectId, Guid tableId, List`1 sqlObjects, ProviderDestinationSql providerDestinationSql) at TimeXtender.DataManager.Adapter_NAV.GetExpectedDeployedObjects(IDataAdapterTable table, Guid projectId, Guid tableId, List`1 sqlObjects) at TimeXtender.DataManager.StepTableSimpleDeploy.GetExpectedDeployedObjects(L
The scheduler stopped working in production. Yesterday I restarted, but it is still not running. How can I fix this issue? I have gone through the document below: Scheduled Execution issues - Did it not start, did it fail, or is my execution still running? – TimeXtender Support. On my computer the recovery options were disabled.
I want my transfer-to-MDW job to run only when my ODX transfer task has completed without any errors. What sometimes happens now is that, for some reason, there is an error in the extraction and some tables are empty; they then get pushed to the MDW and our reports break because the tables are empty. I see that you can use instance variables, but I don't see that option on my ODX. How can I set this up?
Hi TX Community! I get data from an external database every day. Since we aim to load the extracted data incrementally, we are using some query tables on the data source to create an incremental-load key. In doing so we are experiencing two issues: 1- TX cannot read the date formats that are extracted from the database. This is the format that we get out: Of course, we can right-click on the field and edit the data type, but we would have to do that every time we synchronize the data source, because every time we synchronize, all the date fields revert to the "unknown" format and we would have to right-click on each field and edit the data type again. We've tried to use the "Data type overrides", but it doesn't seem that we can convert from an "unknown" format. How can we solve this problem? As mentioned, the tables are query tables, and therefore we would like to think that the date formatting could be solved with a CAST or a CONVERT function. Any ideas? 2- In
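Since these are query tables, one plausible workaround is to do the conversion inside the query itself, so the column arrives with a fixed type and survives synchronization. A sketch, assuming the source accepts T-SQL and that the offending column is a string named PostingDate (both are assumptions):

    -- Cast the date inside the query table so TimeXtender never
    -- sees the "unknown" type; TRY_CONVERT returns NULL instead of
    -- failing on unparseable values.
    SELECT OrderId,
           TRY_CONVERT(datetime2, PostingDate) AS PostingDate
    FROM   dbo.Orders;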
I am setting up TimeXtender to extract tables from SAP with Theobald. I have a selection of tables that I can successfully extract when I add just one table to a transfer task. However, when I add all relevant tables to a single transfer task, I run into errors. The extraction seems to be successful on the Theobald side, but moving the data to the ODX storage gives the following error: "Error while copying content to a stream". I can't figure out why the transfer works when I do it per table but gives an error when I do multiple tables. Full error sample below: Executing table rest_mara_generalarticledata: failed with error: System.AggregateException: One or more errors occurred. ---> System.Net.Http.HttpRequestException: Error while copying content to a stream. ---> System.ObjectDisposedException: Cannot access a closed Stream. at System.IO.__Error.StreamIsClosed() at System.IO.MemoryStream.get_Position() at System.Net.Http.StreamToStreamCopy.StartAsync() --- End of inner exce
Hi Support, We are experiencing an issue with the prioritization in our execution package. Our trip table is updated with an update script using data from the number_per_trip table. However, the trip table is loaded before the number_per_trip table. This results in missing new data in the trip table, as the number_per_trip table has not yet been loaded when the script action is executed. We would like to change the loading order of these tables and have tried to do this by adding prioritization. This has no effect on the load whatsoever, and I cannot find out what the problem is. Are there any settings blocking the prioritization feature, or are we using the feature in the wrong way? See settings below.
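For reference, the dependency in question has roughly this shape (column names below are placeholders, not from the post); because the script reads from number_per_trip, that table must finish loading before the script on trip runs:

    -- Placeholder sketch of the update script: trip depends on
    -- number_per_trip, so the execution order matters.
    UPDATE t
    SET    t.PassengerCount = n.PassengerCount
    FROM   dbo.trip AS t
    JOIN   dbo.number_per_trip AS n
           ON n.TripId = t.TripId;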
When loading regularly (every hour) using incremental load, the ODX seems to "lose" transactions from the source. On the next run of the incremental load, the ODX does not catch these missing transactions. The workaround is to full load the data, but this takes a long time, and it renders the incremental load useless, as it is not to be trusted. We are running 20.10.34. Do any of the newer 20.10.x versions have a fix for this? Regards, Mads
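One way to pin down where rows go missing is to compare daily row counts between the source and the loaded copy. A sketch with placeholder names (src.Transactions, dsa.Transactions, PostingDate), assuming both tables are reachable from one connection:

    -- Hypothetical gap check: days where the loaded copy has fewer
    -- rows than the source point at the window where incremental
    -- load dropped transactions.
    SELECT s.LoadDate, s.SourceRows, ISNULL(d.CopyRows, 0) AS CopyRows
    FROM  (SELECT CAST(PostingDate AS date) AS LoadDate,
                  COUNT(*) AS SourceRows
           FROM   src.Transactions
           GROUP  BY CAST(PostingDate AS date)) AS s
    LEFT JOIN
          (SELECT CAST(PostingDate AS date) AS LoadDate,
                  COUNT(*) AS CopyRows
           FROM   dsa.Transactions
           GROUP  BY CAST(PostingDate AS date)) AS d
           ON d.LoadDate = s.LoadDate
    WHERE  ISNULL(d.CopyRows, 0) <> s.SourceRows
    ORDER  BY s.LoadDate;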
Hello, I currently have a column with the following file name in one of my tables: C:\Users\John\OneDrive - Sales Solutions\Desktop\TimeXtender\TMX-DataSamples\RLZ\BESTERS - BESTERS POINT - BESTERS IND_DEC 22_101602_0.txt. Is there a way to remove the path name in the column to only have the file name? I have a file for each month and have merged the files so all the data is in one table. For example, remove this piece: C:\Users\John\OneDrive - Sales Solutions\Desktop\TimeXtender\TMX-DataSamples\RLZ\ and only show this piece: BESTERS - BESTERS POINT - BESTERS IND_DEC 22 as the file name. I also need to have the month and year (DEC 22) of each file copied into a new column called "Date". Is this at all possible? Thank you
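In T-SQL this is doable with string functions, assuming the naming pattern is consistent (the month/year always sits between the first and second underscore). A sketch; dbo.MergedFiles and [FilePath] are placeholder names:

    -- Strip the directory, keep everything up to the second
    -- underscore as the file name, and pull "DEC 22" into [Date].
    SELECT LEFT(f.FileOnly, p.P2 - 1)                       AS [FileName],
           SUBSTRING(f.FileOnly, p.P1 + 1, p.P2 - p.P1 - 1) AS [Date]
    FROM   dbo.MergedFiles AS m
    CROSS APPLY (SELECT RIGHT(m.[FilePath],
                              CHARINDEX('\', REVERSE(m.[FilePath])) - 1) AS FileOnly) AS f
    CROSS APPLY (SELECT CHARINDEX('_', f.FileOnly) AS P1,
                        CHARINDEX('_', f.FileOnly,
                                  CHARINDEX('_', f.FileOnly) + 1) AS P2) AS p;

In TimeXtender this could live in a custom field transformation or a view on the merged table.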