Hi, I am working on a column that evaluates a date against GetDate() and returns a 1 or 0. The table is loaded incrementally. I assumed that if I turned on "Keep field values up-to-date", this column would always be recalculated at load time, but it seems that this is not true. Is there a way to make these two work together, or do I have to give up on incremental loading? Thanks
Hey, I face this issue quite frequently and have not found out how to solve it yet, so I'm hoping someone on here can tell me how to do this.

I have a REST data source in which I make a selection of columns. I drag that table into the DSA layer, rename some columns and voilà, I have my table. Now I have found out that I need some additional columns from the data source, so I go into my ODX and include those columns in the table selection as well. The transfer is successful and I have my data with the additional columns. Now here comes my problem.

I want to sync my table with the data source and add those additional columns to my existing table. When I right-click-drag the table into the DSA layer I can sync, smart sync, and sync only with existing fields. The first option adds all columns to the bottom of my table (including the ones I already have). The other two options only sync the columns that I already have…

Hope that someone can help me out, as I face this more often in development.
Hi! I am experiencing an issue with ADO.NET transfer times on incrementally loaded tables: the ADO.NET transfer takes the same amount of time regardless of the number of new rows coming into the table. A full load of the table, containing about 45 million rows, takes about 25 minutes; on the next incremental load the load time is still the same, with ADO.NET taking up around 24 of those minutes.

Our current data flow is as follows: SQL Server → TimeXtender → Azure elastic pool (this is where all of our TimeXtender databases reside).

Full load: 45 million rows
Incremental load: 11 thousand new rows in the _R table

Has anyone experienced a similar issue? My best guess is that the problem resides in the Azure elastic pool, where the ADO.NET transfer is being throttled. Even if I have 0 rows in the _R table, the ADO.NET transfer time is the same. Thank you!
I am trying to upgrade to version 6221.1 for customer Dorel.eu. I have the license number, but it requires me to log onto one of your servers to get authorisation. I cannot do that with this user (not a Dorel user). What should I do?
We’re facing issues with the CData REST API connector when requesting an API for a large dataset. TX sends the request, but seems to never receive a response. The execution fails when it hits the timeout limit (currently set to 3600 seconds). In the logs (verbosity 4) I see that nothing happens between the time the request was sent and the moment of the timeout.

I put the same URI, headers and parameters in Azure Data Factory; after 9 minutes it received a response of 204 MB and 176K rows. In Postman, I received a response after 8 minutes. I agree that it might be better to somehow make smaller API requests; in fact, when I limit the date range to only this year, TX gets a response in about a minute. However, I still expect TX/CData to finish the request when the dataset is larger and it takes more time before the response is generated by the server.

Due to NDAs I cannot post logs, RSDs or credentials here, but I’ll send some additional files via email.

TX version 20.10.39
CData REST API
Hi, each time we do a deployment from DEV to PROD, the Txt scheduler stops working. We use an older version of Txt (20.10.6.64). The scheduler uses a wake-up step that will d&e 1 table (retries=3, retries per step=3 and retry delay in min=5); if the step is successful, the next step will be executed. But since we deployed from DEV to PROD, the scheduler is not starting anymore. I have removed the Wakeup step and created a new Wakeup, and have rebooted the PROD machine (Azure), but still no result. Can you please tell me what needs to be done? It is urgent, thanks!! Mrt
I am using ODX version 6143.1. When loading a table from a source system, incremental load on a datetime column (with an offset of 7 days) works fine. But when I set up the incremental load rule with Detect deletes and Detect updates selected, the first incremental load (which is the initial load) runs successfully, but a second load (which should be the increment) gives me a 404 HTTP error: Response status code does not indicate success: 404 (The specified path does not exist.). Edit: the problem only arises when setting the Delete detection option. When I remove this checkbox, the 404 error disappears (so the Detect primary key updates option does work correctly).
Dear community and support,

This morning my job scheduling broke and it does not seem to go online anymore. I'm running the new 6221 version and I have three issues:

1. I have an ‘invalid’ job. I've reset the services and re-added the instances to the Execution server, but nothing seems to fix it. At first I was not able to add Data warehouse execution packages to a job, but then I read that you can't schedule ODX and Data warehouse executions in one job. So I stopped the ODX Server service in Windows and was then able to add the jobs and run them. This was going fine until I showed my colleagues. How can I fix the job?
2. I cannot add Data warehouse executions to my job. Most of the time I cannot see the packages. I find the packages when I deselect ‘Hide objects that can't be added’, but I cannot add them. For a brief time I was able to add them, when I stopped the ODX Server service, but this must have been a fluke, as I cannot imagine that I have to stop services to get a certain result.
Hi community, I’m using version V6221.1. I’d like to filter all tables in the data source on the column ‘dataAreaId’ and have tried to set this up in the data source settings. Do I really need to set this filter on every table I load from this data source? Or is this a bug?
Hi, is it possible to set up retry steps for ODX transfer tasks in the >6024.1 version of TimeXtender? If yes, where? If not, please add this as an idea. Greets, Devin
We have an API to which we have to pass a POST request with a request body to obtain the data. In Qlik Sense and Postman we have a basic connection established; there you can easily specify that the call should be a POST and you can also easily set the request body. In the CData REST connector in TX, we do not know where to specify this request body in the connection. The demo endpoint we are trying to connect to is:

https://api.mews-demo.com/api/connector/v1/configuration/get

with the request body:

{"ClientToken": "E0D439EE522F44368DC78E1BFB03710C-D24FB11DBE31D4621C4817E028D9E1D", "AccessToken": "7059D2C25BF64EA681ACAB3A00B859CC-D91BFF2B1E3047A3E0DEC1D57BE1382", "Client": "NameOfYourCompanyOrApplication"}

Any help would be appreciated.
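For reference, this is roughly the call we are trying to reproduce through the connector; a minimal sketch in Python using the requests library, with the demo endpoint and body from above (it assumes the demo tokens are still valid):

import requests

# Demo endpoint and request body copied from the post above.
url = "https://api.mews-demo.com/api/connector/v1/configuration/get"
body = {
    "ClientToken": "E0D439EE522F44368DC78E1BFB03710C-D24FB11DBE31D4621C4817E028D9E1D",
    "AccessToken": "7059D2C25BF64EA681ACAB3A00B859CC-D91BFF2B1E3047A3E0DEC1D57BE1382",
    "Client": "NameOfYourCompanyOrApplication",
}

# The endpoint expects an HTTP POST with a JSON body; json= also sets the
# Content-Type: application/json header.
response = requests.post(url, json=body, timeout=60)
response.raise_for_status()
print(response.json())

In Postman this is simply a POST with that JSON as the raw body; the question is where the equivalent can be configured in the CData REST connection.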
Hi team,

TimeXtender allows adding parameters from a different table to a custom field in a semantic data model (Qlik). The resulting syntax/Qlik script combination is always broken. When adding a custom field parameter from a different table, TimeXtender fully qualifies the Qlik syntax regardless of the settings, so the resulting syntax on the Qlik side no longer matches the syntax in the views created by TimeXtender (this happens with both the Qualified and the Fully qualified setting).

The resulting Qlik script:

"Sales_Targets":
LOAD
"KPI", "Target", "DIM_Boekdatum.DayName" AS "Test";
SQL SELECT
"KPI", "Target"
FROM "Test"."dbo"."Test QVD_SLQV";

But the view has the following syntax:

CREATE VIEW [dbo].[Test QVD_SLQV]
-- Copyright 2011 timeXtender a/s
-- All rights reserved
--
-- This code is made available exclusively as an integral part of
-- timeXtender. You may not make any other use of it and
-- you may not redistribute it without the written permission of
-- timeXtender a/s.
AS
SELECT [KPI] AS [KPI]
,[Target] AS [Targ
Hello everyone, I hope someone can help me with this. I am working on getting Azure AD groups via the Graph API and then retrieving the members through a nested API call. I saw a post concerning a nested REST API call together with pagination in which one of the commenters (Gijs) had the exact same API that I am also using:

https://support.timextender.com/rsd-file-customization-96/using-a-nested-rest-api-in-combination-with-paging-858

However, the suggestion didn't work for my use case, as it only ‘expanded’ the nested values but didn't paginate; I only got the members of the first 100 groups. I used the RSD file from the commenter Gijs with two additions:

two additional rows to enable pagination;
[memberout.userPrincipalName| allownull()] instead of [memberout.userPrincipalName], as I was getting a [500] error with this specific attribute.

I've made them bold in the code down below. Would someone be able to give me some pointers to what I'm missing in the RSD
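To make the intended behaviour concrete, this is the paging logic I am trying to express in the RSD, sketched in plain Python (the get_all helper and the token placeholder are just for illustration): both the groups call and each nested members call need to follow @odata.nextLink until the last page.

import requests

# Hypothetical helper: assumes a valid Graph access token is already available.
def get_all(url, token):
    headers = {"Authorization": f"Bearer {token}"}
    items = []
    while url:  # follow @odata.nextLink until the last page
        resp = requests.get(url, headers=headers, timeout=60)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")  # absent on the last page
    return items

token = "<access token>"  # placeholder
groups = get_all("https://graph.microsoft.com/v1.0/groups", token)
for group in groups:
    members = get_all(
        f"https://graph.microsoft.com/v1.0/groups/{group['id']}/members", token
    )
    for m in members:
        print(group["displayName"], m.get("userPrincipalName"))

In my current RSD the outer groups call pages correctly, but the nested members call only runs for the first page of groups.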
Hi, I am running a legacy version of TimeXtender (version 20.10.40.64), where we have an Excel Online connection set up using the CData ADO.NET Provider for Microsoft Excel Online 2022 (22.0.8389.0) as the data source. This data source is configured to authenticate via Azure ‘client flow’ (see picture), and the app registration in Azure has the permissions shown in the picture. The data source works properly most of the time: it is able to list the worksheets it finds on the SharePoint site, and can fetch data. However, scheduled execution packages sometimes fail with the following error message:

[500] Could not execute the specified command: Error while listing workbooks for drive: [generalException] General exception while processing. Details: Error while listing workbooks for drive: [generalException] General exception while processing. Module: System.Data.CData.ExcelOnline fx220l.yg at fx220l.IIu.m(Boolean ) at fx220l.IIu.X(tbp`1 , Boolean ) at fx220l.IIu.YI(LoL )
Hi, I am trying to parse a response using the REST connector from an API which returns a JSONROWS format, where the first row contains the names of the columns and the following rows contain the values. The response is of the form:

{
  "response": [
    [ {"name": "test1"}, {"name": "test2"} ],
    [ 1, 2 ],
    [ 3, 4 ],
    [ 5, 6 ]
  ]
}

I have tried several different xPath configurations. The following gives the correct columns, but returns zero rows:

xPath: column:/response;columnname:/response.name;row:/response

I was hoping that at least this would have parsed all the values as strings, with the first row containing the {"name": "testN"} objects in the corresponding columns. How do I work with this response?
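To make the desired result concrete, this is the reshaping I am after, sketched in plain Python rather than in the connector (the raw string is just the sample response above):

import json

# The sample JSONROWS response from the post above.
raw = '{"response": [[{"name": "test1"}, {"name": "test2"}], [1, 2], [3, 4], [5, 6]]}'
rows = json.loads(raw)["response"]

# The first element holds the column names, the remaining elements hold the values.
columns = [c["name"] for c in rows[0]]
records = [dict(zip(columns, values)) for values in rows[1:]]
print(records)
# [{'test1': 1, 'test2': 2}, {'test1': 3, 'test2': 4}, {'test1': 5, 'test2': 6}]

So the question is whether the xPath settings (or an RSD customization) can produce this column/row mapping directly.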
Hi, we are using the Excel Online connector, which is authenticated with a service account and uses delegated permissions to access Excel files on SharePoint. The idea is that all relevant files will be shared with this account and then loaded into our DWH. The required files are visible on SharePoint when signing in with that user, but for some reason they are not visible to the Excel Online connector (with the option Show shared documents = ‘True’). I know that the connector uses the /sharedWithMe (OneDrive) call to fetch shared items, since that is what I inferred from the logging. This call indeed retrieves no results through the Graph Explorer, but the files are visible through another, similar call in the Graph Explorer (the one from ‘Insights’). Why are files visible on SharePoint but not on OneDrive? Is there a way to work around this? I have seen use cases where the files actually are visible on both SharePoint and OneDrive and the connector works properly.
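For reference, this is roughly how I compared the two calls outside the connector, sketched in Python (the token placeholder is hypothetical; only the two endpoint paths matter here, and they correspond to what I tried in the Graph Explorer):

import requests

token = "<delegated access token for the service account>"  # placeholder
headers = {"Authorization": f"Bearer {token}"}

# The call the connector appears to use: OneDrive items shared with the signed-in user.
shared_with_me = requests.get(
    "https://graph.microsoft.com/v1.0/me/drive/sharedWithMe",
    headers=headers, timeout=60,
).json()

# The Insights-based call, which does return the shared SharePoint files in my tenant.
insights_shared = requests.get(
    "https://graph.microsoft.com/v1.0/me/insights/shared",
    headers=headers, timeout=60,
).json()

print(len(shared_with_me.get("value", [])))   # 0 in my case
print(len(insights_shared.get("value", [])))  # the shared files show up here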