General Questions about TimeXtender Data Sources
- 31 Topics
- 130 Replies
Hi Everyone,

Background: My organization is trying to build a Supervisory Org tracking database. Our previous HRM system did this natively, and enough departments are tired of waiting for the new system's quirks to be ironed out that I'm tasked with a workaround. I can't connect TX directly to the new system, but HR can download a CSV that includes Employee ID, Employee Name, Supervisor ID, Supervisor Name, Job Title, Work Location, and various less critical fields.

My plan was to have an HR rep download the CSV file once a week to a specific network folder and point TX at it with the Multiple Text File data source. I'd then use the DSA to perform some basic transformations and enable history on the DSA table. In theory this gives me a weekly update per employee, tracking changes to key things like job title, supervisor, etc. We have a little over 1,200 employees in the report, and 60-70k records a year shouldn't be a problem. The initial ingest works well: TX reads the file, moves
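The weekly-snapshot comparison that TX's history feature performs can be prototyped outside TX to sanity-check the CSV before pointing TX at it. A minimal sketch, assuming the column names listed in the post (the exact headers in HR's export may differ):

```python
import csv

# Fields whose changes we want to track week over week (assumed headers).
TRACKED = ["Employee Name", "Supervisor ID", "Supervisor Name",
           "Job Title", "Work Location"]

def load_snapshot(path):
    """Read a weekly CSV export into a dict keyed by Employee ID."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Employee ID"]: row for row in csv.DictReader(f)}

def diff_snapshots(previous, current):
    """Yield (employee_id, field, old, new) for every tracked change
    between two snapshots keyed by Employee ID."""
    for emp_id, row in current.items():
        old = previous.get(emp_id)
        if old is None:
            continue  # new hire; the history table gets a fresh record anyway
        for field in TRACKED:
            if old.get(field) != row.get(field):
                yield emp_id, field, old.get(field), row.get(field)
```

At ~1,200 employees per week, a dict-based comparison like this is instant; it mirrors what history-enabled DSA tables do when deciding whether to version a row.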
In the old version of TX (20.10) it is possible to set up an external SQL connection in a business unit under Sources, so that tables from an external database can be used in a TX project without extracting them from that database. TX will then deploy those external tables as views pointing directly to the source database, which can even be on another server (via a linked server).

I cannot find external SQL connections in the latest version of TX. Can anyone confirm whether this feature is available, and do we know if it is on the product roadmap?
Hi, I am using TimeXtender 188.8.131.52 incl. ODX server. I am trying to get data from XML files, selected with a wildcard, whose filename is a reference to another file. I am using the CData XML connector. I managed to get the data from both files into TimeXtender, but can't find a way to include the original file name in the table/query. Is there a standard option for this, or maybe a workaround? Hope someone can help me.
(Using TimeXtender 184.108.40.206.) I have two date fields on a table. I want to load a row if one OR the other field has changed in the data source. How do I do that?

I tried doing this (see below), but that caused duplicates because data had already been loaded (when only the loggedat field was the incremental field). If I do this (see below), will it only load if both of the fields have changed?
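One common workaround for "either field changed" incremental rules is to treat the greater of the two timestamps as the single incremental value, so a change to either field pushes the row past the high-water mark. A sketch of the idea, assuming field names `updatedat` and `loggedat` (only `loggedat` appears in the post; `updatedat` is a stand-in for the second field):

```python
from datetime import datetime

def incremental_value(row):
    """Effective change timestamp: the most recent of the two
    change-tracking fields, so either one moving forward counts."""
    return max(row["updatedat"], row["loggedat"])

def rows_to_load(rows, last_max):
    """Select rows whose effective timestamp is newer than the
    stored high-water mark from the previous load."""
    return [r for r in rows if incremental_value(r) > last_max]
```

In TX terms this corresponds to adding a computed "greatest of the two dates" field in the source and using that single field as the incremental selection rule, rather than marking both fields, which typically means rows load only when the combination of rules is satisfied.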
Hi everybody,

Our client uses a SOAP API from a supplier. While TX supports SOAP, the extraction requires multiple steps and string manipulation. The complete process, for each table in the list of tables:

1. Call the first SOAP endpoint to retrieve the names of the fields in the table.
2. Call the second SOAP endpoint to retrieve a page of data for the table.
3. Split the data into individual rows by splitting the string on the ^ character.
4. Split the rows into individual columns by splitting the string on the | character.
5. Fetch pages until the number of rows returned is less than the page size.
6. Move to the next table.

We created a custom data provider for the TX custom data connector. This works fine in TX 20.10, but is not supported in 6221, so we need to create an alternative. I'm aware that you can use RSD files for more advanced logic, but as far as I can see this doesn't include an option to perform the string manipulation (splitting). For the time being, we've ported the logic to a Python script
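The paging and splitting steps described above can be sketched like this, with the two SOAP calls abstracted as callables (`get_fields` and `get_page` are hypothetical stand-ins for the supplier's endpoints, and the page size is assumed):

```python
PAGE_SIZE = 100  # assumed; the supplier's API defines the real value

def parse_page(raw, fields):
    """Split a raw payload into dicts: rows on '^', columns on '|'."""
    rows = []
    for line in raw.split("^"):
        if not line:
            continue  # skip empty fragments, e.g. a trailing '^'
        rows.append(dict(zip(fields, line.split("|"))))
    return rows

def fetch_table(get_fields, get_page):
    """get_fields() -> list of column names (first SOAP endpoint);
    get_page(n) -> raw string for page n (second SOAP endpoint).
    Fetches pages until a short page signals the end of the table."""
    fields = get_fields()
    out, page = [], 0
    while True:
        rows = parse_page(get_page(page), fields)
        out.extend(rows)
        if len(rows) < PAGE_SIZE:
            break
        page += 1
    return out
```

This is the portable core; whether it lives in a Python script feeding files to TX or gets reimplemented inside another connector mechanism, the row/column splitting logic stays the same.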
Hi all,

We are trying to load CSV files into the ODX using the CSV data source. However, we are getting errors on fields it tries to interpret as int (housenumber), because some are filled in as "12-18" and therefore fail. We have tried the override data type option, but to no avail. What can we do about this?

Kind regards, Maarten
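The underlying problem is usually type inference: the provider scans only the first N rows, sees integers, and then chokes on later values like "12-18". Forcing the column to be read as text sidesteps it entirely. A minimal sketch of the "everything is text" approach using Python's csv module, which never infers types:

```python
import csv
import io

def read_as_text(csv_text):
    """Read a CSV with every value kept as a string, so mixed values
    like '12' and '12-18' in a housenumber column never fail."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Hypothetical sample reproducing the mixed housenumber column.
sample = "street,housenumber\nMain,12\nMain,12-18\n"
rows = read_as_text(sample)
```

In the connector the equivalent is making the override stick to a string/varchar type (or raising the row-scan depth used for inference, if the provider exposes such a setting); any numeric interpretation can then happen downstream where the odd values can be handled explicitly.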
Good day,

I am setting up a new data source in TimeXtender. The dataset is very large, and I would eventually like to run it as an incremental load, but first I have the task of importing the last five years of data. I have tried to set this up with an incremental table setup in TX, but I keep getting a timeout. I am wondering if I could set this table up as Simple for the historical load and then swap to Incremental, but I am worried my data would be cleared on changing the table type. Does anyone have experience loading data in a similar scenario?

Kerry
Hi all,

I have a customer whose data & analytics team has to report which data (tables & fields) is loaded into the data lake with the ODX server for a certain source. The source contains GDPR data, and the data & analytics team has to prove to the legal team that they aren't loading GDPR data into the data lake and further upstream.

Unfortunately, the TX documentation feature only documents from the DSA onward. The DSA documentation isn't feasible for this request, since we're renaming fields and adding transformations. Is there a workaround to get this overview for an ODX server connection?

Kind regards, Rogier
TimeXtender version: 20.10.39
I am looking for a way to find all tables in a perspective, so that I can analyse the differences between perspectives. In SQL against the metadata repository I would like to join ProjectPerspectives to Datatables to find all perspectives containing certain tables, find which tables are not in any perspective and are therefore missed while loading data, etc. I do not see an identifier that is unique between the two, so I am hoping one of you does :). Thanks in advance, Remco
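Once the perspective-to-table mapping has been extracted from the repository (whatever the join key turns out to be), the analysis itself is simple set arithmetic. A sketch of the comparisons described above, operating on an already-extracted mapping rather than the repository tables themselves:

```python
def perspective_diffs(perspectives, all_tables):
    """perspectives: {perspective_name: set of table names}.
    Returns (tables in no perspective at all,
             {(a, b): tables in perspective a but not in b})."""
    covered = set().union(*perspectives.values()) if perspectives else set()
    missing_everywhere = set(all_tables) - covered
    pairwise = {
        (a, b): perspectives[a] - perspectives[b]
        for a in perspectives for b in perspectives if a != b
    }
    return missing_everywhere, pairwise
```

The `missing_everywhere` set is the "tables missed while loading" check; the pairwise differences show exactly where two perspectives diverge.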
We were able to get this CSV file data source, stored in an Azure Data Lake Gen2 storage container, to work and thought it may be useful to share. We found that the setup worked best when we were careful not to enter any other settings, but instead entered only the settings that were needed, as described below.

Source file: CSV
Storage location: Azure Data Lake Gen2 storage container
Authentication method: App Registration added to the storage container via a role assignment of "Storage Blob Data Contributor"

We started by adding a new data source and chose the following: on the data source details page, we entered the information for the 9 items outlined below, being careful not to enter information in any other boxes and updating only the necessary items. Item #1 will already be set to CSV based on the initial selection above, so it does not need to be updated. The following numbers correspond to the red boxes in the screenshot below.

Provider Name: This should be CSV based on
We have a project perspective called 'Reference' which contains ODX tables. When we execute it through the execution package it throws an error, but when we execute it manually it succeeds. The file format we are extracting is XLSX. Can you please help us resolve this issue?
In version 20.10.39 you sneaked a new feature into the BC adapter data source: "17627: Business Central adapter - Merge extension tables option for SQL Server provider. Added option to merge a table and its extension tables together as one table." This feature could be very useful, but I can't find any documentation on how it works or whether there are limits or considerations that need to be taken into account. Initial testing suggests it works for some tables, but maybe not for tables in extension modules?
Hi,

Is it possible to connect to several databases on the same SQL Server with one data source connection? We have tried to connect to multiple databases on the same SQL Server using the provider "TimeXtender SQL Data Source 220.127.116.11". We tried leaving the Database parameter blank, putting in *, and writing a comma-separated list of databases, but none of these work. Has anyone had success doing this with this connector or another? We're using version 6221.1.

Thanks!
I have an Excel sheet with percentages (11 decimals). The decimal sign is a comma. See the example below.

Year  Period  CostPercentage
2020  1       0,01800188
2020  2       0,016852519

But when I read the data in TX, I see the result below. I have tried to use the culture feature (see the screen dump below) but without success. Why does TX / CData not read the data as-is? The "," is interpreted as a thousands separator. Any help will be appreciated. When I have a file with a "." (dot) as the decimal sign, I convert the data to numeric with precision 38, scale 11, and that works.
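What a working culture setting should do is swap the roles of "," and "." during parsing. A minimal sketch of that interpretation, useful for verifying what the expected numeric values are before fighting the provider's culture option:

```python
def parse_decimal(text, decimal_sep=",", thousand_sep="."):
    """Interpret a comma-decimal string the way a European culture
    (e.g. nl-NL or da-DK) would, instead of treating ',' as a
    thousands separator: strip grouping, then normalize the decimal."""
    cleaned = text.replace(thousand_sep, "").replace(decimal_sep, ".")
    return float(cleaned)
```

So `parse_decimal("0,01800188")` yields the same value as reading "0.01800188" directly. If the provider's culture setting has no effect, this kind of normalization can also be done as a transformation on the field read in as text.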
Hello, I am currently trying to integrate Exact data using the TX Exact Online connector. I need to integrate data from a number of different divisions. I was wondering whether the only way to do so is to individually add each division as a new data source, or if there is another way?
Hello everyone,

After the ODX transfer task for the API, when I click on preview I get an "invalid column name" error. The same warning occurs while executing in the data warehouse and stops the execution. What should I do? I wanted to use aliases, but I have 575 columns. I tried re-deploying the table with differential deployment disabled.

Version: 6143.1
An example of my column names is below; it is very long: 'properties_parameter_WS10M_RANGE_202109'

Thanks, Best Regards
Dear community,

I'm reaching out to this community due to an issue with a JSON data source connector. The situation: TimeXtender on premise (v20.10.40) using a CData ADO.NET Provider for JSON 2021 (21.0.8011.0) in the Business Unit layer of the project. The connector uses a general account/password to access this data source.

User 1 logs on to the server TimeXtender is running on, reads data for this connector, and gets all the information expected. User 2 logs on to the same server, reads data, and gets only some information, not even related to this source. No error. Both users use the same connector with that general account to connect to this JSON data source, but with different results.

I asked for log files with verbosity = 3 from both users; attached are anonymized files. I am not sure how to interpret the differences in the logs. The log with the wrong results starts with a timeout, but playing around with this kind of setting does not change the outcome. Also some
Hi community,

One of our customers wants to report on NACE codes (see: https://nacev2.com/en). We are trying to find a (free/open) data source to map customers to the correct NACE code. We have the Dutch KVK/registration number and the EU VAT number; using one of these identifiers, we want to map to the correct NACE code. Does anyone have experience with this?
Hi Support,

Our client is using ETL Agile v20. They want to connect to Infor's data lake, and Infor has provided us with their Compass JDBC driver. Is it possible to import a JDBC driver into Jet to connect to their data lake? Are JDBC drivers supported with ETL Agile, or will this be possible in a future release?

Many thanks, Kashif
Hi all,

When using the latest version of the CData Excel Online connector to extract files from SharePoint, we get a rather unhelpful message after hitting the 'Authorize OAuth' button. We are using the OAuth grant type 'CODE', meaning we will only see the files that the account we authenticate with can see on SharePoint. The message we get after authenticating is like this:

The message does not tell us which permissions we are missing; the account does have access to SharePoint, but it does not have access to the Graph Explorer. What kind of permissions do we need when authenticating this provider with a user account? Does it depend purely on the user account, or does the App Registration require permissions as well? If so, which ones?