TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
Hi Support, We are experiencing an issue with the prioritization in our execution package. Our trip table is updated with an update script using data from the number_per_trip table. However, the trip table is loaded before the number_per_trip table. This results in missing new data in the trip table, as the number_per_trip table has not yet been loaded when the script action is executed. We would like to change the loading order of these tables and have tried to do this by adding prioritization, but this has no effect on the load whatsoever. I cannot find out what the problem is. Are there any settings blocking the prioritization feature, or are we using the feature in the wrong way? See settings below.
When loading regularly (every hour) using incremental load, the ODX seems to “lose” transactions from the source. On the next run of the incremental load, the ODX does not catch these missing transactions. The workaround is to full load the data, but this takes a long time, and it renders the incremental load useless as it is not to be trusted. We are running 20.10.34. Do any of the newer 20.x versions have a fix for this? Regards, Mads
Hello, I currently have a column with the following file name in one of my tables:
C:\Users\John\OneDrive - Sales Solutions\Desktop\TimeXtender\TMX-DataSamples\RLZ\BESTERS - BESTERS POINT - BESTERS IND_DEC 22_101602_0.txt
Is there a way to remove the path name in the column to only have the file name? I have a file for each month and have merged the files so all the data is in one table. For example, remove this piece: C:\Users\John\OneDrive - Sales Solutions\Desktop\TimeXtender\TMX-DataSamples\RLZ\ and only show this piece: BESTERS - BESTERS POINT - BESTERS IND_DEC 22 as the file name. I also need to have the Month Year (DEC 22) of each file copied into a new column called “Date”. Is this at all possible? Thank you
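One way to do this in T-SQL, as a sketch only (the table name dbo.MyFiles and column name FileName are hypothetical; the second expression assumes the month-year token always sits between the first and second underscore of the file name):

    -- Keep only the part after the last backslash
    SELECT
        RIGHT(f.[FileName], CHARINDEX('\', REVERSE(f.[FileName]) + '\') - 1) AS FileNameOnly
    FROM dbo.MyFiles AS f;

    -- Extract the "DEC 22" token between the first and second underscore into a Date column
    SELECT
        SUBSTRING(x.FileNameOnly,
                  CHARINDEX('_', x.FileNameOnly) + 1,
                  CHARINDEX('_', x.FileNameOnly, CHARINDEX('_', x.FileNameOnly) + 1)
                    - CHARINDEX('_', x.FileNameOnly) - 1) AS [Date]
    FROM (SELECT RIGHT(f.[FileName], CHARINDEX('\', REVERSE(f.[FileName]) + '\') - 1) AS FileNameOnly
          FROM dbo.MyFiles AS f) AS x;

Both expressions could also be used as custom field transformations on the table rather than in a standalone query.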
How can I solve extension _R tables failing to create? I am working on an old project now; this project was copied from another project repository.
Environment: Sandbox
Version: 22.214.171.124
Data Source: Business unit
Provider: SQL Server Data Source
The connection with the data source is good and synchronizes well. When I deploy the tables to the ODX storage, I get an error: An error occurred during create a table. See exception details for the failing object: Create failed for Table 'BSA.BSA_dbo_Inspection_R'. An exception occurred while executing a Transact-SQL statement or batch. The specified schema name "BSA" either does not exist or you do not have permission to use it.
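The error text points at two possible causes: the BSA schema is missing in the storage database, or the deploying login lacks rights to use it. A quick check and fix in SSMS, as a sketch (the schema name is taken from the error message; run each batch separately against the ODX storage database):

    -- Does the schema exist, and who owns it?
    SELECT s.name AS schema_name, p.name AS owner_name
    FROM sys.schemas AS s
    JOIN sys.database_principals AS p ON p.principal_id = s.principal_id
    WHERE s.name = 'BSA';

    -- If it is missing, create it (requires CREATE SCHEMA permission)
    CREATE SCHEMA [BSA] AUTHORIZATION [dbo];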
Did anyone try to connect to an Azure SQL database using Azure AD Integrated Authentication? I noticed MFA was missing as an auth type, but AD Integrated was failing with an error. Let me know if anyone has resolved this error: One or more errors occurred. Could not discover endpoint for Integrated Windows Authentication. Check your ADFS settings. It should support Integrated Windows Authentication for WS-Trust 1.3 or WS-Trust 2005. Details: Could not discover endpoint for Integrated Windows Authentication. Check your ADFS settings. It should support Integrated Windows Authentication for WS-Trust 1.3 or WS-Trust 2005.
The data cleansing procedures generated by TX seem to have a RECOMPILE hint embedded in the procedure itself as a general rule. I think this is to keep them fresh against other changes that get introduced in a project. But do we know the exact reason why, and is there a way to remove it other than making a customized change table by table?
Dear Support, The reload in TimeXtender is giving the error: An item with the same key has already been added. It seems that one column is mapped to two different columns in the same table. Is there a quick solution to find the column which is causing this error? Thanks in advance! Christian
I have 3 tables in my src data warehouse:
src.SalesOrderItemsDelta: this table gets filled every day with a delta (changes today vs. yesterday) of our order lines.
src.SalesOrderItems: this table gets filled every Saturday night with all the order lines available.
src.preFactSalesOrderItems: this table gets filled via a custom table insert with the following statement:
SELECT [SalesOrderItemID], [SapClient], [SalesOrderNumber], [SalesOrderItemNumber], [SalesOrganization],
       [DistributionChannel], [Division], [FaboryArticleNumber], [SoldToCustomerCode], [BinCode],
       [CreatedOnDate], [CreatedOnDateID], [ChangedOnDate], [ReasonForRejectionCode],
       [PromisedDeliveryDate], [PromisedDeliveryDateID], [CommunicatedDeliveryDate], [CommunicatedDeliveryDateID],
       [PlantCode], [SalesAmount], [SalesCurrency], [ItemCategoryCode], [OrderedQuantity], [BinQuantity],
       [SalesOrderCategoryCode], [SalesOfficeCode], [ConfirmedDeliveryDate], [ConfirmedDeliveryDateID],
       [CommittedDeliveryDate], [CommittedD
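For combining the weekly full load with the daily deltas, one common pattern is to union the two sources and keep only the newest row per key. A sketch only, assuming both tables share the same column list and that ChangedOnDate can serve as the recency order:

    -- Keep the most recently changed version of each order line
    SELECT x.*
    FROM (
        SELECT s.*,
               ROW_NUMBER() OVER (PARTITION BY s.SalesOrderItemID
                                  ORDER BY s.ChangedOnDate DESC) AS rn
        FROM (SELECT * FROM src.SalesOrderItems
              UNION ALL
              SELECT * FROM src.SalesOrderItemsDelta) AS s
    ) AS x
    WHERE x.rn = 1;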
We have a setup with 3 environments (Dev/Test/Prod) running in the legacy version, with a BU-based ODX, DSA, MDW and several SSLs, based on an on-prem SQL Server setup. Fairly common, I guess. We are multiple developers on a shared project, making changes daily, and therefore need a QA process. Our target is to have changes and new functionality running on Test for a week before transferring to Production. In addition to our centralised BI org, we support the data needs of analysts in the different departments. For this we have established replica databases of ODX and MDW. The analysts can read these replicas without interfering with the centralised data processing (the primary reason for the replicas). Along with the data read access, we have a database the analysts have the rights to create objects in (typically views and stored procedures). The analyst environment will only be exposed on our prod platform. We would like to provide a better SLA for our analysts for new ta
I’ve created a query table that suddenly started to fail. When I press Validate, everything is fine. I can even preview the table in the source, and I can also run the query on the source. But when I execute the table, with the same user as above, I get this message: Here’s the query:
SELECT t1.[TRANSACTIONCURRENCYAMOUNT], t1.[ACCOUNTINGCURRENCYAMOUNT], t1.[REPORTINGCURRENCYAMOUNT],
       t1.[QUANTITY], t1.[ALLOCATIONLEVEL], t1.[ISCORRECTION], t1.[ISCREDIT], t1.[TRANSACTIONCURRENCYCODE],
       t1.[PAYMENTREFERENCE], t1.[POSTINGTYPE], t1.[LEDGERDIMENSION], t1.[GENERALJOURNALENTRY], t1.[TEXT],
       t1.[REASONREF], t1.[PROJID_SA], t1.[PROJTABLEDATAAREAID], t1.[LEDGERACCOUNT],
       t1.[HISTORICALEXCHANGERATEDATE], t1.[CREATEDTRANSACTIONID], t1.[RECVERSION], t1.[PARTITION],
       t1.[RECID], t1.[MAINACCOUNT], t1.[MODIFIEDDATETIME], t1.[CREATEDDATETIME],
       t2.ACCOUNTINGDATE, t2.DOCUMENTNUMBER
Hi all, I created a data table from a JSON file with the REST data source but got an error as below: ‘The bcp client received an invalid column length for column ID 1.’ Since the column index is zero-based, I looked for the data in the second column of my JSON file. I have ensured that the first and second columns are not read in the RSD file. However, the problem still persists. I believe the issue may be due to the data in this column being too large. This is the API: https://archive-api.open-meteo.com/v1/archive?latitude=51.51&longitude=-0.13&start_date=2018-01-01&end_date=2023-04-15&models=best_match&daily=weathercode,temperature_2m_max,temperature_2m_min,temperature_2m_mean,apparent_temperature_max,apparent_temperature_min,sunrise,sunset,shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,precipitation_hours,windspeed_10m_max,windgusts_10m_max,winddirection_10m_dominant,et0_fao_evapotranspiration&timezone=Europe%2FBerlin How can I solve this issue? Thank you
How can I conditionally fill down/flash fill NULL values with previous values based on certain criteria?
I would like to be able to flash fill down NULL values in my DSA table with certain conditions. In the table below I have multiple NULL values. Take for example the column ‘CardCode DUAL’. Row 2 with Company key MTW and project 1201121979 shows DB0006 for CardCode DUAL. I would like to show value DB0006 also for all other rows where company key = MTW and project = 1201121969. Same for the Route Bron column: I would like to fill down NULL values with the most recent non-blank value for that Company_Key + Project combination. I think it should be possible with a self join or self select, but I am not sure how.
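One self-select pattern that does this in T-SQL, as a sketch (dsa.MyTable is a placeholder for your table, and using DW_Id as the “most recent” ordering column is an assumption; substitute whatever defines recency for you):

    -- Fill NULL CardCode DUAL values with the latest non-NULL value in the same group
    UPDATE t
    SET t.[CardCode DUAL] = x.[CardCode DUAL]
    FROM dsa.MyTable AS t
    CROSS APPLY (
        SELECT TOP (1) s.[CardCode DUAL]
        FROM dsa.MyTable AS s
        WHERE s.Company_Key = t.Company_Key
          AND s.Project = t.Project
          AND s.[CardCode DUAL] IS NOT NULL
        ORDER BY s.DW_Id DESC  -- most recent non-blank value in the group (assumed ordering)
    ) AS x
    WHERE t.[CardCode DUAL] IS NULL;

The same statement with Route Bron swapped in handles the second column; it could run as a script action after the cleansing step.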
Hi community, we are facing the problem that we create duplicates when we bring data from an ODX API source to the MDW. We are working with an overlapping sliding window of two days in the schema file (because data can change and there is no last-modified date) and only set the primary key on the ODX source. In the MDW (dedicated SQL pool) we enabled history and set the ID as the natural key. All fields are marked as type 1 fields. The execution brings us duplicated ID values; no updates are made. Thanks for your help, Michael
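To see which keys are affected, a quick duplicate check on the MDW table can help (table and column names below are placeholders for the history table and its natural key):

    -- Which natural key values occur more than once?
    SELECT [ID], COUNT(*) AS row_count
    FROM dbo.MyHistoryTable
    GROUP BY [ID]
    HAVING COUNT(*) > 1
    ORDER BY row_count DESC;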
We are using a Business unit to land our data in the DW. We have a few columns where we need to obfuscate some of the data. At what point should a script action be placed to update the field in the raw table prior to the data being moved into the valid table, so our script does not need to include an update to the field in both the raw and valid table? The script uses UPDATE table SET column
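A sketch of what such a script could look like, with hypothetical table and column names: if the script action runs after the transfer into the raw (_R) table but before the data cleansing step copies raw to valid, the obfuscated values flow into the valid table automatically and only the one update is needed.

    -- Obfuscate the raw instance of the column; cleansing then carries it into the valid table
    UPDATE [dbo].[Customer_R]
    SET [Email] = CONVERT(varchar(64), HASHBYTES('SHA2_256', [Email]), 2)
    WHERE [Email] IS NOT NULL;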
Hi, I get a deployment error when I try to make my table historic: Computed column 'SCD Surrogate Hash Key' in table '<mytable>' cannot be persisted because the column is non-deterministic. I think this has to do with the excessive number of columns in the table: 695 columns, all varchar(2000). What confuses me is that when I set the hashing algorithm to debug, I still get the same error. I don't have any transformations on this table; it's a straight copy from the ODX. Is there a limit on the number of fields a history table can have?
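If you want to confirm what SQL Server itself thinks of the column, COLUMNPROPERTY can report the determinism of a computed column once it exists (the table name below is a placeholder):

    -- 1 = deterministic, 0 = non-deterministic, NULL = not found / not a computed column
    SELECT COLUMNPROPERTY(OBJECT_ID(N'dbo.MyTable'), N'SCD Surrogate Hash Key', 'IsDeterministic') AS IsDeterministic;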
Hello, we have a data source that contains several additional/stacked connections in our Business Unit. We now wanted to add the 4th additional connection (so 5 data connections in total), but TimeXtender crashes when clicking OK after configuring the connection. I think it tries to connect to the data source and get metadata for the tables it needs to select, but somehow this fails. Found this in the Event Viewer; the errors are as follows:
1) .NET = System.OutOfMemoryException
2) Application Error = Faulting application path: C:\Program Files\TimeXtender\TimeXtender 20.10.38\timeXtender.exe; Faulting module path: C:\Windows\System32\KERNELBASE.dll
3) Application Error = same as 2)
We are on version 20.10.38, do you have any idea what might be causing this? Why don't we get a proper error message in the TX interface? Connection to the data source through SSMS is working without any issues, by the way. Best regards, Kaj
I’ve got the following user request, but I am not sure what the most optimal solution is, as I can think of many possibilities. I’ve got 2 fact tables:
DSA.Fact_Turnover
DSA.Fact_TransportOrders
Both facts contain the ‘Project’ column. The user request is to get the Turnover for all projects which are in Fact_TransportOrders. What is the most optimal way to get a check column to see if the Project value of Fact_TransportOrders is also present in Fact_Turnover, and why? I want to use this filter column to eventually only present the Turnover for projects which are also present in Fact_TransportOrders. I’m really curious, as I already have such solutions, but am not sure if there are easier TimeXtender functions or tricks which I’m not using currently.
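One straightforward pattern is an EXISTS-based flag on Fact_Turnover, e.g. in a custom view; a sketch only, where the flag name IsInTransportOrders is a made-up example (table and column names are taken from the post):

    SELECT t.*,
           CASE WHEN EXISTS (SELECT 1
                             FROM DSA.Fact_TransportOrders AS o
                             WHERE o.Project = t.Project)
                THEN 1 ELSE 0 END AS IsInTransportOrders
    FROM DSA.Fact_Turnover AS t;

EXISTS avoids the row duplication a join would cause when a project occurs many times in Fact_TransportOrders, which is why it is usually preferred for this kind of membership check.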
After connecting successfully to my SQL Server and adding the tables & rows, I am able to preview the data via Select Tables. The Synchronize task also completed successfully. However, when I execute the Transfer task, it fails with 2 errors (see attachments). I can’t figure out why it fails, though the error message suggests that the specified path does not exist. I tried to remove the file container in Azure and sync + transfer again, but it doesn’t make a difference.