TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
We have an SFTP source on which CSV files are stored and which can be authenticated to with a public key. When using the CData SFTP connector in ODX Server, we are able to connect with the following settings. The most important setting here is the SSH Auth Mode, where we specify that we are using PublicKey authentication. Since I am not familiar with this type of connector, I would not know how to retrieve the data from the CSV files (the only table we get from the source now is the 'Root' table, which contains the file names found in the folder). I would rather use the CData CSV connector, but that leaves me with a problem when testing the connection: I cannot specify that I want to use the PublicKey authentication mode. That is, it is specified under 'SSH Auth Mode' but cannot be selected under 'Auth Scheme'. As a result, we keep getting the same error on testing the connection (see attached file). I have tried setting the Auth Scheme to 'None', 'Auto', 'SFTP' or 'B…
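For reference, a minimal sketch of the connection properties one would expect to combine for the CData CSV connector reading over SFTP; the exact property names are assumptions based on CData's usual naming and may differ per driver version, and the host, path, user and key file are hypothetical:

    URI=sftp://sftp.example.com:22/inbound/    # hypothetical host and folder
    SSHAuthMode=PublicKey
    SSHUser=svc_timextender                    # hypothetical service account
    SSHClientCert=C:\keys\tx_sftp_key.ppk      # path to the private key (assumption)
    SSHClientCertType=PPKFILE                  # assumption: depends on the key format

If 'Auth Scheme' cannot be set to PublicKey in the UI, it may be possible to pass properties like these through the connector's free-text connection-string/'Other' field instead; whether TX exposes such a field for this connector is an assumption.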
Hi, we are receiving some strange error messages when executing our project, which uses Azure Data Factory to copy tables from our Data Lake storage to the Azure SQL DB (ODX → DWH). The error can be found in the attachments. When inspecting the pipeline run in ADF, it seems to be missing the configured activities (see second attachment). When executing the table manually from TX, the pipeline runs successfully. The pipeline also seems correctly configured in ADF, and the pipeline run created when executing the table manually shows the correct activities. Any ideas as to what might be causing this? Best regards, Kaj
We are seeing a situation where the next scheduled execution of a package does not start, and an event is logged saying this is because the previous execution is still running. However, when we look closely at the finish times for these executions, the actual finish time is on average 8 minutes before the start of the next execution. I've been told that there can be a delay of a couple of minutes between the task itself actually finishing and the moment the process that executed the task ends as well. However, 8 minutes, and sometimes even up to 13 minutes, seems too long. Does anybody have any ideas on why this occurs and what could be done about it? We're currently running TX version 20.10.25.
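One way to see whether the executing process really is still holding on after the task reports finished is to watch the sessions on the SQL Server side while the gap occurs; a generic sketch, where the program_name filter is an assumption about how the TX scheduler identifies itself:

    SELECT s.session_id, s.host_name, s.program_name, s.status,
           r.command, r.status AS request_status, r.wait_type
    FROM sys.dm_exec_sessions AS s
    LEFT JOIN sys.dm_exec_requests AS r
           ON r.session_id = s.session_id
    WHERE s.program_name LIKE '%TimeXtender%';  -- assumption: adjust to the actual client name

If sessions linger through those 8-13 minutes, the delay is on the SQL side; if not, it is the scheduler process itself taking that long to end.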
Hi, the current data warehouse has tables in two different DB schemas, and there are already Power BI dashboards using these schemas. Now I want to use the Semantic layer to copy the data (Data Export) to a SQL serverless DB. Everything works fine, but I want to override the schema name. I have not found any option in the Semantic layer to change this. Is that correct, or is there a way to change the schema like there is in the Data Warehouse layer?
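If the Semantic layer indeed offers no schema override, one possible workaround (a sketch only, with hypothetical schema and table names, assuming the target is an Azure SQL database on the serverless compute tier) is to re-expose the exported tables under the schema the dashboards expect, using views on the target database:

    CREATE SCHEMA [reporting];
    GO
    -- [dbo].[Sales] stands in for whatever schema/table the Data Export actually writes.
    CREATE VIEW [reporting].[Sales] AS
    SELECT * FROM [dbo].[Sales];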
TX: 20.10.39
SQL Server: 2019
Endpoint settings: Windows authentication, hostname matching case, HTTPS connection over the default proxy.
I am trying to push a Qlik Sense Enterprise endpoint from TimeXtender and having trouble getting the deploy to run successfully. Testing the connection to Qlik Sense is successful, but when I deploy I get an error when TX tries to create a new Data Connection. I see this issue pop up sometimes in other deployments and can usually overcome it by manually making a Data Connection to the MDW database and giving it the same name TX would use: MDW_SLQV. In this case, I get the same error when TX tries to modify the connection. TimeXtender does create the app (and is owner of it) and can successfully reload the app; it just doesn't insert the script. If I copy-paste the TX Qlik script into the app, it works fine. As neither TX nor Qlik actually logs what is going on, I cannot see what about the connection string is causing the error.
Hi, I have a project that is loading data from the ledger table into the staging database, with approx. 5 million records. When manually executing the load, the ADO.NET transfer takes 5-6 minutes, but the cleansing rules can take hours, to the point where I have to terminate the load. Is there any troubleshooting I can perform on the cleansing steps? I know I have taken the easy option on joins where the data types differ, and wonder if that is now compounding the issue. Any tips or pointers greatly appreciated. Thanks, Richard
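On the joins with differing data types: those force an implicit conversion on every row during cleansing, which blocks index use and can easily turn minutes into hours. A minimal illustration with hypothetical table and field names:

    -- Slow: [AccountNo] is NVARCHAR on one side and INT on the other,
    -- so SQL Server converts every row before it can compare.
    SELECT l.*
    FROM [stage].[Ledger] AS l
    JOIN [stage].[Account] AS a
      ON l.[AccountNo] = a.[AccountNo];

    -- The durable fix is to align the data types of the two fields in the model.
    -- As a stopgap, an explicit TRY_CAST at least makes the conversion visible
    -- (it is still evaluated per row):
    SELECT l.*
    FROM [stage].[Ledger] AS l
    JOIN [stage].[Account] AS a
      ON TRY_CAST(l.[AccountNo] AS INT) = a.[AccountNo];

Checking the actual execution plan of the cleansing step for CONVERT_IMPLICIT warnings is a good first troubleshooting move.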
Hi, I would like to do a "range lookup". Because I can't solve it using a conditional lookup, I used a custom field transformation. My code looks like this:

    CASE WHEN SUBSTRING([SBIcode_cleansed], 1, 1) LIKE '[^0-9]'
         THEN (SELECT [kvk_groep_omschrijving]
               FROM [dsa].[SBIcode_groepering]
               WHERE [SBIcode] = [SBIcode_cleansed])                        -- 'test1'
         ELSE (SELECT [kvk_groep_omschrijving]
               FROM [dsa].[SBIcode_groepering]
               WHERE SUBSTRING([SBIcode_cleansed], 1, 2) >= [min_sbi_code]
                 AND SUBSTRING([SBIcode_cleansed], 1, 2) <= [max_sbi_code]) -- 'test2'
    END

It seems that a subquery is not possible, because I get an error message that the subquery returned more than one value, which is not allowed. Testing it in SSMS works fine. (Not using BETWEEN because the above seemed to work faster when testing.)
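The "subquery returned more than 1 value" error means that for at least one [SBIcode_cleansed] the range in [dsa].[SBIcode_groepering] matches several rows; the SSMS test may simply not have hit such a row. A hedged fix is to force a single value per lookup, e.g. with TOP 1; which row to prefer is an assumption here:

    CASE WHEN SUBSTRING([SBIcode_cleansed], 1, 1) LIKE '[^0-9]'
         THEN (SELECT TOP 1 [kvk_groep_omschrijving]
               FROM [dsa].[SBIcode_groepering]
               WHERE [SBIcode] = [SBIcode_cleansed])
         ELSE (SELECT TOP 1 [kvk_groep_omschrijving]
               FROM [dsa].[SBIcode_groepering]
               WHERE SUBSTRING([SBIcode_cleansed], 1, 2) >= [min_sbi_code]
                 AND SUBSTRING([SBIcode_cleansed], 1, 2) <= [max_sbi_code]
               ORDER BY [min_sbi_code])  -- arbitrary tie-breaker; pick what fits the data
    END

Alternatively, checking the grouping table for overlapping [min_sbi_code]/[max_sbi_code] ranges would show whether the duplicates are a data problem rather than a query problem.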
If you add a new field with the deploy-only option to an existing table with data, there are two different behaviors, depending on whether the table has an incremental load rule defined or not. This is controlled by this setting: by default it is cleared for "normal" tables and set for incremental tables. I think the behavior should always be the same, or at least a warning message should be shown.
I've noticed that if you have the same table in the DSA and the MDW, say DSA.Invoices and MDW.Invoices, and you define partitioning for both, the two tables share the same partition schemes and partition functions in the SQL Server database. This is a problem when you try to add or remove a field from either table.
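To confirm which tables ended up bound to the same partition scheme and function, something like this against the database shows the mapping:

    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS table_schema,
           OBJECT_NAME(i.object_id)        AS table_name,
           ps.name                         AS partition_scheme,
           pf.name                         AS partition_function
    FROM sys.indexes AS i
    JOIN sys.partition_schemes   AS ps ON ps.data_space_id = i.data_space_id
    JOIN sys.partition_functions AS pf ON pf.function_id   = ps.function_id
    WHERE i.index_id IN (0, 1)  -- heap or clustered index, i.e. the table itself
    ORDER BY pf.name;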
I have created a CSV source in the TimeXtender NG ODX. The data explorer shows the data types correctly for my table. However, when I drag the table to the DSA, all fields appear as bigint. (By the way, I have set Quote Character to " and Row Scan Depth to 0.) Any help on how to resolve this issue would be appreciated.
Hi community, I have a question about the 'PK' table that is created in the ODX when you enable 'Handle primary key updates' and 'Handle primary key deletes'. I've noticed that the ODX database (SQL database) is growing rapidly in disk space after enabling an incremental load schedule that reloads data every 5 minutes. When I used the standard SQL 'Disk usage by top tables' report, I noticed the PK tables are the biggest in terms of disk space. The PK table of my GL Entry table contains 4.5 billion rows and is 95 GB, while my DATA table contains only 118 million records. When I query the PK table and filter on a single primary key value, that value is saved 75 times, once for every odx_batch_number (odx_batch_number 0-74). Is this normal? I find it strange that my PK tables are the biggest tables in the ODX in terms of disk space. Even when I run the storage management task, it doesn't clean the PK tables; they always contain the primary key for every ODX batch load. The cust…
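For anyone who wants to reproduce the 'Disk usage by top tables' check as a plain query against the ODX database:

    SELECT s.name + '.' + t.name AS table_name,
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
           SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_mb DESC;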
I discovered that when using the replace transformation with the CharType, you are required to know or look up the int value of the character you want to replace and use that as the parameter. This does not seem to be documented anywhere, nor useful in any way I can think of. I assume REPLACE('abc', CHAR(65), 'b') is functionally equivalent to REPLACE('abc', 'a', 'b'); in a case-insensitive collation there is no difference between the two anyhow. There is also no COLLATION or binary type support, which are things you can do with REPLACE() that might actually be useful in certain rare cases.
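To illustrate the equivalence, and the collation point, in SSMS:

    SELECT REPLACE('abc', CHAR(65), 'b');  -- 'bbc' under a case-insensitive collation: CHAR(65) is 'A', which matches 'a'
    SELECT REPLACE('abc', 'a', 'b');       -- 'bbc' as well, so the two forms behave the same here
    SELECT REPLACE('abc' COLLATE Latin1_General_BIN, CHAR(65), 'b');  -- 'abc': under a binary collation 'A' no longer matches 'a'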
Hi! We have had our installation for a while now, and as far as I'm aware the application has the latest updates. As more people on the project are now working at the same time, I have installed the client on both a new server and on my own laptop. The problem is that when I try to connect to the ODX service running on one of our "old" servers, I keep getting the error shown in the attached pictures. Why is this? As you can see, the application has the same version on both the "old" and the "new" server. The correct ports are open between the servers, at least, so that should not be an issue, but I get the same error when trying to connect from my laptop. Does anyone have any input on this?
I have a query table created against my D365 F&O data source, as you can see in this screenshot. The query returns data, as you can see in the query tool (it would be nice to be able to run it in the "Manage Query Tables" window too). But if I try to Synchronize ODX Objects, I get an error, and the table is not there when I try to remap it. And when I deploy and execute the table, it is empty. Is this a bug, or am I doing something wrong?