Ask questions about setting up data sources
Recently active
Hello, I have the following issue: I would like to save my tables in the staging area so I can store them (because some of the data in the data source will be deleted soon). Furthermore, new fields may be added to those tables in the data source in the future, and then I would be forced to redeploy my tables in the staging area. Question: is there a way to keep the old values from those tables and only add the new field, without truncating everything when deploying in the staging area? Thanks in advance, FEN
Issue: tables with paging stop loading while data is still incomplete. We have an execution package which loads data from AFAS GetConnectors. There are 7 Business Units in TX, each loading data from a separate AFAS implementation. Per implementation, AFAS supports up to 20 parallel connections, so from this perspective the Max. Threads setting of 10 should not be a problem. AFAS uses paging with a record offset for tables that are too big to load in one call. All Business Units have a similar set of tables; I'll use AfasIncluzio as an example, as this one has the most data. The table TX_Financiele_Mutaties is currently the only one loaded with paging. When we look at the execution log on package level, we can see that the Total Time varies per day. The AFAS API is quite stable in terms of speed, so I can tell that all executions under 15 minutes did not load all data. Further investigation showed that this only happens on table TX_Financiele_Mutaties, which is currently the only one with paging. This is the Executio
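To illustrate what I suspect is going wrong, here is a minimal sketch of the record-offset paging pattern (the URL and the "rows" response field are placeholders, not the real connector output; AFAS's skip/take parameters are as documented). The loop stops as soon as a page comes back short, so a truncated or failed mid-run response can end the load early with partial data:

import requests

BASE_URL = "https://example.afas.online/connectors/TX_Financiele_Mutaties"  # placeholder
PAGE_SIZE = 1000

def get_page(session, offset, take):
    # One paged GetConnector call; skip/take are AFAS's offset parameters.
    resp = session.get(BASE_URL, params={"skip": offset, "take": take}, timeout=300)
    resp.raise_for_status()
    return resp.json().get("rows", [])  # assumed response shape

def load_all(session):
    rows, offset = [], 0
    while True:
        page = get_page(session, offset, PAGE_SIZE)
        rows.extend(page)
        # A short page ends the loop -- so a transient error or truncated
        # response mid-way leaves the load silently incomplete unless the
        # client retries or verifies with a closing empty page.
        if len(page) < PAGE_SIZE:
            break
        offset += PAGE_SIZE
    return rows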
Yes, I’ve read the other posts about this topic, but I'm rather unlucky getting it to work. Each time I press ‘Test Connection’, it fires a GET request at the URI, which results in the infamous ‘The requested resource does not support http method 'GET'’ message. RSD file:
==================================
<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- See Column Definitions to specify column behavior and use XPaths to extract column values from JSON. -->
  <api:info title="Table1" desc="Generated schema file." xmlns:other="http://apiscript.com/ns?v1">
    <!-- You can modify the name, type, and column size here. -->
    <attr name="_RowNumber" xs:type="string" readonly="false" other:xPath="/json/_RowNumber"/>
    <attr name="Assignee" xs:type="string" readonly="false" other:xPath="/json/Assignee"/>
    <attr name="Date" xs:type="date" readonly="false" other:xPath="/json/Date"/>
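For what it's worth, the error is easy to reproduce outside TX; this sketch (the URL is a placeholder for my real endpoint) shows that the service simply rejects GET and only answers POST, which is exactly what ‘Test Connection’ runs into:

import requests

URL = "https://example.com/api/table1"  # placeholder for the real endpoint

r = requests.get(URL, timeout=30)
print(r.status_code)  # 405-style: "The requested resource does not support http method 'GET'"

r = requests.post(URL, json={}, timeout=30)
print(r.status_code)  # the same endpoint accepts POST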
In the old version of TX (20.10) it is possible to set up an external SQL connection in a business unit under Sources, so that tables from an external database can be used in a TX project without extracting them from that database. TX will then deploy those external tables as views pointing directly to the source database, which can even be on another server (‘linked server’). I cannot find external SQL connections in the latest version of TX. Can anyone confirm whether this feature is available, and do we know if it is on the product roadmap?
Hi, I am using TimeXtender 20.10.37.64 incl. the ODX server. I am trying to get data from XML files, selected with a wildcard, whose file names are a reference to another file. I am using the CData XML connector. I managed to get the data from both files into TimeXtender, but I can't find a way to include the original file name in the table/query. Is there a standard option for this, or maybe a workaround? Hope someone can help me.
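Outside the connector, the effect I'm after looks like this sketch, assuming flat XML files with a repeating <row> element (the element name and folder are assumptions); each parsed record gets the originating file name added as an extra column:

import glob
import xml.etree.ElementTree as ET

rows = []
for path in glob.glob("data/*.xml"):  # the wildcard
    tree = ET.parse(path)
    for elem in tree.getroot().iter("row"):  # assumed repeat element
        record = {child.tag: child.text for child in elem}
        record["SourceFileName"] = path  # the column the connector doesn't expose
        rows.append(record)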
Hey community, I am trying to set up pagination for my REST connector to ADP. I need to use the CData parameters $skip & $top. My first call should be: https://api.eu.adp.com/hr/v2/workers?$skip=0&$top=100. Then I need to increment the $skip parameter by 100, so the next call would be: https://api.eu.adp.com/hr/v2/workers?$skip=100&$top=100. I need to stop when I receive an empty response. Does anyone have experience with this?
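This is the loop I'm trying to express in the connector, as a plain sketch (authentication omitted; the "workers" field in the response body is an assumption):

import requests

BASE = "https://api.eu.adp.com/hr/v2/workers"
TOP = 100

def fetch_all(session):
    skip, workers = 0, []
    while True:
        resp = session.get(BASE, params={"$skip": skip, "$top": TOP}, timeout=60)
        resp.raise_for_status()
        page = resp.json().get("workers", [])  # assumed response shape
        if not page:  # an empty response ends the pagination
            break
        workers.extend(page)
        skip += TOP
    return workers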
Hello, for one of our customers we are using the “Odbc Data Provider” so that we can use a local ODBC connection with a specific driver called “Progress OpenEdge” (source: ProFashionALL). We need to add Additional Connection Properties for this data source. In the older TX versions (< v20) we could add these additional properties; this is missing in the new TX version, so we can't connect to this source at the moment. This is the main source and crucial in this project. Has anyone run into this issue, or does anyone have a workaround? There is a workaround, but it must be done at the source, where they store the credentials in the “_user” table. I hoped there would be a solution on our (TX) side. Hope to hear.
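What we effectively need is the equivalent of appending extra key/value pairs to the ODBC connection string, which the older TX versions allowed. A sketch with pyodbc; the property name shown is a made-up example, not a verified Progress OpenEdge property:

import pyodbc

# Additional connection properties are just extra key=value pairs appended
# to the ODBC connection string.
conn_str = (
    "DSN=ProFashionALL;"
    "UID=user;PWD=secret;"
    "DefaultSchema=PUB;"  # hypothetical additional property
)
conn = pyodbc.connect(conn_str)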
(Using TimeXtender 20.10.35.64) I have 2 date fields on a table. I want to load the row if one OR the other field has changed in the data source. How do I do that? I tried doing this (see below), but that caused duplicates because data had already been loaded (when only the loggedat field was the incremental field). If I do this (see below), will it only load if both of the fields have changed?
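What I'm trying to achieve, expressed as a sketch (only loggedat is a real field from my table; the second field name and the high-water marks are placeholders for what TX tracks internally):

import sqlite3  # stand-in engine; the point is the WHERE clause

# Load a row when EITHER date moved past its stored high-water mark.
# An AND between the two conditions would skip rows where only one of
# the fields changed.
QUERY = """
SELECT *
FROM SourceTable
WHERE loggedat > ? OR modifiedat > ?
"""

conn = sqlite3.connect(":memory:")
# rows = conn.execute(QUERY, (last_loggedat, last_modifiedat)).fetchall()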
Hi everybody, our client uses a SOAP API from a supplier. While TX supports SOAP, the extraction requires multiple steps and string manipulation. The complete process is, for each table in the list of tables:
- Call the first SOAP endpoint to retrieve the names of the fields in the table
- Call the second SOAP endpoint to retrieve a page of data for the table
- Split the data into individual rows by splitting the string on the ^ character
- Split the rows into individual columns by splitting the string on the | character
- Fetch until the number of rows returned is less than the page size
- Move to the next table
We created a custom data provider for the TX custom data connector. This works fine in TX 20.10, but is not supported in 6221, so we need to create an alternative. I'm aware that you can use RSD files for more advanced logic, but as far as I can see this doesn't include an option to perform the string manipulation (splitting). For the time being, we've ported the logic to a Python script.
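That Python port boils down to roughly this shape (the call_soap helper is a placeholder for the two supplier SOAP calls, and the page size is an assumption; the ^ / | splitting is the supplier's wire format as described above):

import requests

PAGE_SIZE = 500  # assumed page size

def call_soap(session, url, action, *args):
    # Placeholder for the supplier's SOAP calls; the real envelope and
    # WSDL details are omitted here.
    raise NotImplementedError

def fetch_table(session, soap_url, table):
    # Step 1: the first SOAP endpoint returns the field names for the table.
    fields = call_soap(session, soap_url, "GetFields", table)
    rows, page = [], 0
    while True:
        # Step 2: the second SOAP endpoint returns one page as a single string.
        payload = call_soap(session, soap_url, "GetData", table, page)
        raw_rows = payload.split("^")  # rows are ^-separated
        rows += [dict(zip(fields, r.split("|"))) for r in raw_rows if r]  # columns are |-separated
        if len(raw_rows) < PAGE_SIZE:  # short page: we're done
            break
        page += 1
    return rows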
Hi all, we are trying to load CSV files into the ODX using the CSV data source. However, we are getting errors on fields it tries to interpret as int (housenumber), because some are filled in as 12-18 and therefore fail. We have tried to use the override data type option, but to no avail. What can we do about this? Kind regards, Maarten
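The underlying problem, illustrated outside TX with pandas (file and column names from our case): a value like "12-18" can never be parsed as int, so the column has to be read as text from the start rather than type-inferred:

import pandas as pd

# Forcing the column to string up front avoids the int-inference failure
# on mixed values such as "12-18".
df = pd.read_csv("addresses.csv", dtype={"housenumber": str})
print(df["housenumber"].head())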
Good day, I am setting up a new data source in TimeXtender. This dataset is very large, and I would like to eventually have it running as an incremental load, but I have the task of importing the last 5 years of data first. I have tried to set this up with an incremental table setup in TX, but I keep hitting a timeout. I am wondering if I could set this table up as Simple for the historical load and then swap to Incremental, but I am worried my data would be cleared on changing the table type. Does anyone have experience with loading data in a similar scenario? Kerry
Hi, my client wants to connect to the App Store Connect API (https://developer.apple.com/documentation/appstoreconnectapi), but we are having trouble getting the authorization to work. Calls to the API require JSON Web Tokens (JWT) for authorization, specifically using the “ES256” signature algorithm. We tried using the CData REST connector, but found in the documentation that “...the JWT signature algorithm cannot be set directly. Only the RS256 algorithm is supported” (https://cdn.cdata.com/help/DWH/ado/pg_oauth.htm#jwt). Does anyone have experience connecting to the App Store Connect API?
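For reference, generating the required token outside the connector is straightforward with PyJWT; a sketch following Apple's documented claims (the issuer ID, key ID and .p8 file below are placeholders for our client's values):

import time
import jwt  # PyJWT, with the cryptography package installed for ES256

ISSUER_ID = "57246542-96fe-1a63-e053-0824d011072a"  # placeholder issuer ID
KEY_ID = "2X9R4HXF34"                               # placeholder key ID
PRIVATE_KEY = open("AuthKey_2X9R4HXF34.p8").read()  # key downloaded from App Store Connect

token = jwt.encode(
    {
        "iss": ISSUER_ID,
        "iat": int(time.time()),
        "exp": int(time.time()) + 20 * 60,  # Apple caps token lifetime at 20 minutes
        "aud": "appstoreconnect-v1",
    },
    PRIVATE_KEY,
    algorithm="ES256",  # the algorithm the CData connector cannot be set to
    headers={"kid": KEY_ID, "typ": "JWT"},
)
# token then goes into an "Authorization: Bearer <token>" header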
Hi friends, I have used the TX CDS connector for SQL Server because it was a best-practice recommendation, but now I don't know its status. What is the best connector for SQL Server? Will CDS be deprecated? Regards, Ignacio
We are struggling with nested calls in a POST API with a custom body. We don't change the URI on every input, but need to change the body on every request. We use cursor-based pagination using rows@next, and since it seems you can only use one pagination per RSD, we can only get one iteration using nested calls. Query slicers only seem to be able to slice URIs (https://cdn.cdata.com/help/DWG/jdbc/pg_queryslicer.htm). In our example, we are extracting ReservationIds from a Reservation request and need them as a body input for ReservationItems, in batches of 1000. Any help would be appreciated; let me know if you need any more information.
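In plain code, the nesting we need looks like this sketch (the URL, the "rows" and "next" response fields, and the cursor parameter name are our assumptions about the API):

import requests

URL = "https://example.com/api"  # placeholder
BATCH = 1000

def post_page(session, resource, body):
    resp = session.post(f"{URL}/{resource}", json=body, timeout=120)
    resp.raise_for_status()
    return resp.json()

def fetch_reservation_items(session, reservation_ids):
    items = []
    # Outer loop: slice the ids into batches of 1000 for the request body.
    for i in range(0, len(reservation_ids), BATCH):
        body = {"ReservationIds": reservation_ids[i:i + BATCH]}
        # Inner loop: cursor-based pagination within one batch.
        while True:
            data = post_page(session, "ReservationItems", body)
            items += data.get("rows", [])
            cursor = data.get("next")  # the rows@next cursor
            if not cursor:
                break
            body["cursor"] = cursor  # assumed cursor parameter name
    return items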
We’re facing issues with the CData REST API connector when requesting a large dataset from an API. TX sends the request, but seems to never receive a response. The execution fails when it hits the timeout limit (currently set to 3600 seconds). In the logs (verbosity 4) I see that nothing happens between the time the request was sent and the moment of the timeout. I put the same URI, headers and parameters in Azure Data Factory; after 9 minutes it received a response of 204 MB and 176K rows. In Postman, I received a response after 8 minutes. I agree that it might be better to somehow make smaller API requests; in fact, when I limit the date range to only this year, TX gets a response in about a minute. However, I still expect TX/CData to finish the request when the dataset is larger and it takes more time before the response is generated by the server. Due to NDAs I cannot post logs, RSDs or credentials here, but I’ll send some additional files via email. TX version 20.10.39, CData REST API
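For comparison, this is how the same kind of long-running call behaves in a plain HTTP client with explicit timeouts (the URL is a placeholder); the server needs 8-9 minutes of think-time before the first byte arrives, so the read timeout is the one that matters:

import requests

URL = "https://example.com/api/large-dataset"  # placeholder

# (connect timeout, read timeout): 10 s to open the socket, up to 1 h of
# server think-time before the response starts streaming back.
resp = requests.get(URL, timeout=(10, 3600))
print(resp.status_code, len(resp.content))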
We have an API to which we have to send a POST request with a request body to obtain the data. In Qlik Sense and Postman we have a basic connection established; there you can easily specify that the call should be a POST, and you can also easily supply the request body. In the CData REST connector in TX, we do not know where to specify this request body in the connection. The demo endpoint we are trying to connect to is: https://api.mews-demo.com/api/connector/v1/configuration/get with the request body: {"ClientToken": "E0D439EE522F44368DC78E1BFB03710C-D24FB11DBE31D4621C4817E028D9E1D", "AccessToken": "7059D2C25BF64EA681ACAB3A00B859CC-D91BFF2B1E3047A3E0DEC1D57BE1382", "Client": "NameOfYourCompanyOrApplication"} Any help would be appreciated.
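In plain code, the call that already works in Postman is simply the following (endpoint and body exactly as above, with the Mews demo tokens); this is what we need the connector to reproduce:

import requests

resp = requests.post(
    "https://api.mews-demo.com/api/connector/v1/configuration/get",
    json={
        "ClientToken": "E0D439EE522F44368DC78E1BFB03710C-D24FB11DBE31D4621C4817E028D9E1D",
        "AccessToken": "7059D2C25BF64EA681ACAB3A00B859CC-D91BFF2B1E3047A3E0DEC1D57BE1382",
        "Client": "NameOfYourCompanyOrApplication",
    },
    timeout=60,
)
print(resp.json())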
Hi all, I have a customer whose data & analytics team has to report which data (tables & fields) is loaded into the data lake with the ODX server for a certain source. This is because the source contains GDPR data, and the data & analytics team has to prove to the legal team that they aren’t loading GDPR data into the data lake and further downstream. Unfortunately, the TX documentation feature only documents from the DSA onward. The DSA documentation isn’t feasible for this request, since we’re renaming fields and adding transformations. Is there a workaround to get this overview for an ODX server connection? Kind regards, Rogier. TimeXtender version: 20.10.39
Hi, I have a problem with pagination and a recursive query. I have a REST API endpoint: https://api.procountor.com/api/invoices which returns invoice headers as "id": 6591273, "partnerId": 1208831, "type": "PURCHASE_INVOICE", "status": "PAID", "invoiceNumber": 16... etc. Pagination works fine; I get all invoice headers (thousands) when pagination is set like this in the RSD file:
<api:set attr="EnablePaging" value="true"/>
<api:set attr="pagenumberparam" value="page" />
<api:set attr="pagesizeparam" value="size" />
<api:set attr="pagesize" value="100" />
But the actual details of an invoice come from an endpoint like this: https://api.procountor.com/api/invoices/6591273. So, I have to iterate one by one through every invoice id that I get from https://api.procountor.com/api/invoices. I have followed the instructions from https://legacysupport.timextender.com/hc/en-us/articles/360052383191-Creating-and-using-RSD-files-for-CData-providers#one-nested-call and I can query invoices successfully, but the
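The iteration I'm trying to reproduce in the RSD, sketched in plain code (authentication omitted; the "results" wrapper field is an assumption, the id field is as shown above):

import requests

BASE = "https://api.procountor.com/api/invoices"

def fetch_invoice_details(session):
    details, page = [], 0
    while True:
        resp = session.get(BASE, params={"page": page, "size": 100}, timeout=60)
        resp.raise_for_status()
        headers = resp.json().get("results", [])  # assumed wrapper field
        if not headers:
            break
        for h in headers:
            # One nested call per invoice id from the header list.
            d = session.get(f"{BASE}/{h['id']}", timeout=60)
            d.raise_for_status()
            details.append(d.json())
        page += 1
    return details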
Hi, I have a problem with catching data from a JSON file. The file looks like this: And I would like to get the values from the data field, but only for the en_US locale. I was trying to add a filter in the RSD file like below, but it did not help: Can that type of filtering be used there? Greets, Aleksei
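Assuming the data field is a list of objects that each carry a locale attribute (the file structure is an assumption, since the screenshot isn't reproduced in this excerpt), the filtering I'm after looks like:

import json

with open("file.json") as f:
    doc = json.load(f)

# Keep only the entries whose locale is en_US; the "data" / "locale"
# structure is assumed from the description above.
en_us = [entry for entry in doc["data"] if entry.get("locale") == "en_US"]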
Hello, to implement pagination on the API endpoint "https://domainname/api/v2/search/tickets?updated_since=2023-01-01T02:00:00Z&page{}", you need to provide the page parameter in a nested format. I prepared a custom RSD file specific to this API after reading the "Nested parameter" article. I have created a loop starting from 1 in the given code. However, I'm not able to see any data, and I'm not receiving any errors during the transfer and synchronization processes. You can find the RSD file attached. I would appreciate help in resolving this issue. Best regards
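The loop the RSD is supposed to express, sketched directly (auth headers omitted; the "results" field and the stop condition on an empty result list are assumptions):

import requests

BASE = "https://domainname/api/v2/search/tickets"

def fetch_tickets(session, updated_since="2023-01-01T02:00:00Z"):
    tickets, page = [], 1  # the loop starts from page 1
    while True:
        resp = session.get(
            BASE,
            params={"updated_since": updated_since, "page": page},
            timeout=60,
        )
        resp.raise_for_status()
        results = resp.json().get("results", [])  # assumed field name
        if not results:
            break
        tickets += results
        page += 1
    return tickets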
I am looking for a way to find all tables in a perspective, in a way that lets me analyse the differences between perspectives. In SQL against the metadata repository, I would like to join ProjectPerspectives to the DataTables to find all perspectives containing certain tables, find which tables are not in any perspective and are therefore missed while loading data, etc. I do not see an identifier that's unique between the two, so I am hoping one of you does :). Thanks in advance, Remco
Hi TimeXtender, I have an urgent request (!). I am trying to set up a script action (which I will later use on a table as a post-script) that executes a POST cURL call against a different tool. I have tried to execute the POST cURL call in Postman and it is working. Now I want to do the same from TimeXtender. I have a URL and a token. How can I set it up? Looking forward to hearing from you. Ismail
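For context, the call itself is tiny when written as a script (the URL and the bearer-style Authorization header are placeholders for my tool's actual values); what I'm looking for is the TX-side way to trigger the equivalent of this:

import requests

URL = "https://example.com/trigger"  # the tool's endpoint (placeholder)
TOKEN = "..."                        # my token, kept in a secure store

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},  # assumed header format
    timeout=60,
)
resp.raise_for_status()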
We were able to get this CSV file data source, stored in an Azure Data Lake Gen2 storage container, to work, and thought it may be useful to share, as we found that the setup worked best when we were careful not to enter any other settings, but instead entered just the settings that were needed, described below. Source file: CSV. Storage location: Azure Data Lake Gen2 storage container. Authentication method: App Registration added to the storage container via a role assignment of "Storage Blob Data Contributor". We started by adding a new data source and chose the following: on the data source details page, we entered the information for the 9 items outlined below, being careful not to enter information in any other boxes and updating just the necessary items. Item #1 will already be set to CSV based on the initial selection above, so that does not need to be updated. The following numbers correspond to the red boxes in the screenshot below. Provider Name: This should be CSV based on
Hello, I am trying to extract data from Azure AD. I am at the point where I can get data using the Microsoft Graph API and have it loaded into TimeXtender. The issue I am facing has to do with pagination: the Microsoft Graph API returns a URL containing the location of the next 'page' of data. According to the documentation, I should adjust my RSD file as follows:
<api:set attr="DataModel" value="DOCUMENT" />
<api:set attr="URI" value="https://graph.microsoft.com/v1.0/groups" />
<api:set attr="EnablePaging" value="true" />
<api:set attr="pageurlpath" value="/json/@odata.nextLink" />
<api:set attr="RepeatElement" value="/json/value/" />
This does not, however, loop over the pages, but still only gives me the top 100 groups. I have a feeling I may need to escape either the @ or the . in the pageurlpath. Or am I missing something more obvious? Kind regards, Rutger
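For reference, the behaviour the pageurlpath attribute should produce, written out in plain code (token acquisition omitted): Graph puts the absolute URL of the next page in @odata.nextLink and omits the property on the last page.

import requests

URL = "https://graph.microsoft.com/v1.0/groups"

def fetch_groups(session):
    groups, url = [], URL
    while url:
        resp = session.get(url, timeout=60)
        resp.raise_for_status()
        data = resp.json()
        groups += data.get("value", [])
        # Follow the next-page URL; None on the last page ends the loop.
        url = data.get("@odata.nextLink")
    return groups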