This data source can be used to connect to text or CSV files in many locations. It can also find files across folders and subfolders and merge them into one or more specified tables. If your setup requires special handling to work, various methods are available for that as well.

Configuration manual
- General notes on setting comma-separated parameters
- Connection settings: Path, Include sub-folders, Included file types, File aggregation pattern
- Location-specific fields: Location, Local file or folder, Azure Blob Storage settings, AWS S3 Bucket settings, SharePoint or OneDrive settings, Google Cloud Storage settings, SFTP settings
- Culture setup: Culture and client culture, Delimiter, Line ending, Quote character, Ignore quotes
- Header setup: Has header record, Include empty headers, Skip top, Skip comment rows, Comment start character(s), Ignore brackets around column names
- Incomplete row handling: Ignore blank rows, Ignore incomplete rows, Trim spaces, Empty fields equal to null
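To make the folder and pattern settings concrete, here is a plain-Python sketch of what "Include sub-folders" combined with a file aggregation pattern amounts to. It is a conceptual illustration only, not the connector's implementation; the root path and pattern are made-up placeholders.

from pathlib import Path
import csv

root = Path("C:/data/csv_files")      # the "Path" setting (placeholder)
pattern = "H100*.csv"                 # the "File aggregation pattern" setting (placeholder)

rows = []
for file in root.rglob(pattern):      # rglob walks subfolders, like "Include sub-folders"
    with file.open(newline="", encoding="utf-8") as f:
        rows.extend(csv.DictReader(f))  # assumes every file has a header record
print(len(rows), "rows aggregated into one table")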
Is it possible to add the dynamic values used in a nested call to a table flattening? In this particular case, the first call gets the list of documents and feeds the id as a dynamic value to the second call. The response doesn't include any reference to the document, and without the id we cannot link the details to a document. I've tried to add it as a static node (name: Document_ID, value: {id}) but that only results in errors.
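For clarity, this is the outcome being asked for, sketched in plain Python rather than in the connector's flattening configuration. The URLs and field names (documents, details, Document_ID) are hypothetical.

import requests

base = "https://api.example.com"  # hypothetical API
documents = requests.get(f"{base}/documents").json()

rows = []
for doc in documents:
    # second call, with the id from the first call as the dynamic value
    details = requests.get(f"{base}/documents/{doc['id']}/details").json()
    for detail in details:
        detail["Document_ID"] = doc["id"]  # the link the raw response lacks
        rows.append(detail)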
I am fetching refresh logs from Power BI using the following endpoint: https://api.powerbi.com/v1.0/myorg/groups/{groupid}/datasets/{id}/refreshes (https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/get-refresh-history-in-group#refresh). In my use case, the groupid is hard-coded and I iterate over the dataset id {id} using dynamic values (From Endpoint Table). The setup for this is the same as in this guide: My problem is that the dataset id is not included in the payload from the refresh log endpoint, so I have no way of knowing which Power BI dataset has been refreshed. Could I somehow include the dynamic endpoint as a field in the result set? Or is there any other way to solve this? Best regards, Pontus B
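As an illustration of the workaround being asked about, here is a hedged Python sketch that tags each refresh-history row with the dataset id used to fetch it. The group id, dataset ids, and token are placeholders, and this is plain scripting, not the TDI data source setup itself.

import requests

GROUP_ID = "<workspace-guid>"                           # hard-coded, as in the post
DATASET_IDS = ["<dataset-guid-1>", "<dataset-guid-2>"]  # normally from the first endpoint
TOKEN = "<bearer-token>"

rows = []
for dataset_id in DATASET_IDS:
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
           f"/datasets/{dataset_id}/refreshes")
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    for refresh in resp.json()["value"]:
        refresh["datasetId"] = dataset_id  # the field the payload lacks
        rows.append(refresh)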
As a company admin, you can edit your basic company information in the Portal. This information is visible to your TimeXtender partner, who can also edit it on your behalf. Apart from basic contact information, a Metadata storage region option is available. The region you choose here controls where the metadata for new instances will physically be located. The setting chosen when your account was created controls where your general metadata (the "meta instance") is stored. To edit your information: in the top menu, go to Admin > Basic Info and then click Edit.
Hi, I have two questions regarding Orchestration and schedule groups. (1) There is the option to base a schedule on a custom SQL script. When I test this functionality with the following statement: SELECT CAST('2025-06-10 01:00:00' AS DATETIME) it gives me the following pop-up: Does someone know why I don't get query result = '10-06-2025 01:00:00'? The SQL statement produces a valid datetime value in SSMS. (2) I need to schedule a job that runs four times per day for a project. I wanted to build this schedule with the custom SQL option, using a calendar table in our database from the Prepare instance. Is this possible? Thanks in advance!
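One way to sanity-check what a custom-SQL schedule statement returns is to run it outside the scheduler and inspect the scalar's type. This sketch uses pyodbc with an assumed connection string; it is not TimeXtender's own evaluation logic.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your-server>;DATABASE=<your-db>;Trusted_Connection=yes;"
)
value = conn.execute(
    "SELECT CAST('2025-06-10 01:00:00' AS DATETIME)"
).fetchval()  # first column of the first row
print(type(value), value)  # expect <class 'datetime.datetime'> 2025-06-10 01:00:00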
Hi, I'd like to make a new data source connection using REST to connect to our Azure Storage table. So far I have managed to get the connection set up; however, pagination remains an issue. When testing our connection and following the steps in the debug logging file, I see that when a table has an "x-ms-continuation-NextPartitionKey" and "x-ms-continuation-NextRowKey", it returns these in the header (not the body). But if the table doesn't have them, it stops, since the "x-ms-continuation-NextPartitionKey" header was not found. As I am applying these pagination parameters as query parameters, I need to be able to solve this issue. Is there maybe a way to apply a default value to these variables (manipulating the query doesn't work, as the variable itself already won't be found) or some way to dynamically replace a URL? Example table with no NextPartitionKey/NextRowKey: When the variable value is replaced with a default: Thanks for any input someone can provide. Kind regards, Robbert
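For reference, this is the continuation-token flow of the Azure Table Storage REST API written out in plain Python: an absent header simply ends the loop instead of raising an error. It is a sketch of the protocol, not of the TX REST connector's pagination settings; the account URL and authentication are placeholders.

import requests

BASE_URL = "https://<account>.table.core.windows.net/<table>()"  # placeholder
HEADERS = {"Accept": "application/json;odata=nometadata"}        # plus auth headers

params = {}
while True:
    resp = requests.get(BASE_URL, headers=HEADERS, params=params)
    resp.raise_for_status()
    for entity in resp.json()["value"]:
        ...  # process the row

    # .get() returns None when a header is missing, so a table without
    # continuation tokens ends the loop instead of failing.
    next_pk = resp.headers.get("x-ms-continuation-NextPartitionKey")
    next_rk = resp.headers.get("x-ms-continuation-NextRowKey")
    if not next_pk:
        break
    params = {"NextPartitionKey": next_pk, "NextRowKey": next_rk}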
Spring has turned to summer and we’re celebrating with a new release of TimeXtender Data Integration (desktop v. 7017.1). When you open the desktop application, you'll notice a new refreshed look, but we've also implemented a ton of improvements under the hood. See all the news below.

Refreshed desktop UI
We've refreshed the design of the desktop UI with refined theme colors and a new, but less prominent, blue accent color. As another UI improvement, we've streamlined the names and order of the show/hide options - 'show data types', 'highlight descriptions', etc. - in the View menu. We've also saved many users a few regular clicks by enabling them all by default.

Choose where in the world your metadata is stored
You can now choose a metadata storage region for your organization that specifies where in the world your new instances will be created. Current options are West Europe (default), Central US, and South East Asia. Choose the region closest to you for the best TDI experience.
Today, we’ve released updated data source providers. See the changes below.

CSV
Version: 23.4.3.0 (TDI) / 1.1.4 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Fixed a bug where connecting to Azure Blob Storage did not work.
- Fixed a bug where Skip Top would not apply to all aggregated files.

Exact Online
Version: 10.0.0.0 + 9.5.0.0 (TDI)
- Added support for certificates.
- Added support for setting a culture when interpreting data types.
- Added support for global table flattening.
- Changed the override headers behavior. It will no longer remove all headers; instead, it replaces the headers that are defined in the list. To remove a header, add it with an empty value.
- Fixed a bug where running in parallel could produce duplicate headers for authentication.

Excel
Version: 23.5.0.0 (TDI) / 1.1.3 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Improved logging when reading files, making it easier to track down problematic files.
- Fixed a bug where connecting to Azure Blob Storage did not work.
- Fixed a bug where having a . in a folder…
Hi! This week I have been trying to upgrade a client of ours to TX 20.10.66.64, and in this process we have come quite far. We are able to deploy and execute all layers in a project except for the semantic layer. The error message TX shows us is below: The account mentioned in the parts I have blurred out is an account that has never existed, nor is it mentioned in any connection in the project. The databases used in this semantic layer are all on a SQL Server that worked correctly before the TX upgrade. Note: TX was not only upgraded, but also installed on a different application server. I created a new endpoint in the existing semantic layer; when trying to deploy and execute this, the same error message appeared. The deploy steps all succeeded; it fails on the execution step. Has anyone seen this error message or experienced this before?
If you have a scheduled execution, you sometimes want to know whether it is still running. Below are some ways to check whether this is the case, as well as a list of places to search for error messages.

What is currently running on my system
So you notice that a scheduled job is not done in the time you expected, or did not complete in the nightly run. Alternatively, you just want to know if anything is currently being executed by the scheduler service. First I will show how to see what is running and which account is running it. In the picture below, I am running a project; I am logged in to the dev environment as TestBruger1 and you can see a timextender.exe running. When you have a scheduled execution running, it will look similar; the only difference is that it runs as the scheduler user. Below, I am still logged in as TestBruger1 on the dev environment, but you can see that the scheduled execution is running as TestBruger3. Also, you might think that if you are logged in…
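If you prefer a script over Task Manager, a small sketch along these lines lists every running timextender.exe together with the account it runs under. It uses the third-party psutil package, which is an assumption on my part, not something TimeXtender ships.

import psutil

for proc in psutil.process_iter(["pid", "name", "username", "create_time"]):
    if (proc.info["name"] or "").lower() == "timextender.exe":
        print(proc.info["pid"], proc.info["username"], proc.info["create_time"])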
Hi, Is there a way to access an ODX that went offline after upgrading TX? Or would you need to install the ODX version that is equal to the new TX version?
Hello, TimeXtender 20.10.51. I'm looking for support with an issue we're experiencing where execution packages are running, but no tasks are being completed, while causing 100% usage on the repository. After deploying to production, we ran a package manually and noticed it did not proceed past the step shown above. The "current tasks completed" stayed at 0, and no data was being loaded, even after waiting up to 30 minutes. We tried running it multiple times, but nothing changed. Running individual tables worked fine. We checked the SQL database usage for the DSA to be sure, and it wasn't being affected; no data was loading, as expected. While troubleshooting, we found that the repository (a standard 100 DTU Azure SQL database) was hitting 100% data IO and DTU usage. One query in particular was using nearly all the available resources. It was this query: (@ObjectId uniqueidentifier) SELECT [StepId], AVG(DATEDIFF(s, [Start], [End])) AS [AvgSeconds] FROM [dbo].[ExecutionPackageLogDetails] WHERE [Object…
I'm using the dynamic values function in the TX REST connector 9.1.0.0. I use ids from another endpoint to loop through in my second endpoint path. This works well when I use "From Endpoint Table", but now I want to add a filter to only get the ids with the flag "hasresponse=true". I've read the documentation page, but I still get an error with my endpoint query: "No such table", the error message says. I've tried several things, like adding a schema, but all with the same response. Is there something wrong with my syntax? Error:
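For comparison, the intended logic looks like this in plain Python. This is only a conceptual sketch with a hypothetical API (base URL, field names); it is not the connector's endpoint-query syntax, so it won't fix the "No such table" error by itself.

import requests

base = "https://api.example.com"  # hypothetical API
items = requests.get(f"{base}/items").json()

# keep only the ids flagged hasresponse=true, then loop the second endpoint
ids = [item["id"] for item in items if item.get("hasresponse") is True]
details = [requests.get(f"{base}/items/{i}/responses").json() for i in ids]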
Hi, what is the best-performing way to extract data from SAP Datasphere? I read about an API connection (OData), but is this the 'optimal' way to do this? https://community.sap.com/t5/technology-q-a/how-to-export-data-from-sap-datasphere-or-its-database-sap-hana-cloud-to/qaq-p/13708728 https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html Best regards, Peter
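If the OData route is chosen, consumption typically boils down to paging through an entity set. Below is a hedged Python sketch; the service URL pattern, credentials, and page size are assumptions, not verified Datasphere specifics.

import requests

SERVICE = "https://<tenant>.hcs.cloud.sap/api/v1/dwc/consumption/relational/<space>/<asset>"  # assumed pattern
AUTH = ("<user>", "<password>")  # or an OAuth bearer token

rows, skip, page_size = [], 0, 1000
while True:
    resp = requests.get(SERVICE, params={"$top": page_size, "$skip": skip}, auth=AUTH)
    resp.raise_for_status()
    batch = resp.json().get("value", [])
    rows.extend(batch)
    if len(batch) < page_size:  # a short page means we've reached the end
        break
    skip += page_size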
This is a follow-up of Using XML to ingest data, which I have managed to solve. I need some help with creating a nested statement. The first RSD, which lists out all the IDs, is this:

<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- See Column Definitions to specify column behavior and use XPaths to extract column values from XML. -->
  <api:info title="contract" desc="Generated schema file." xmlns:other="http://apiscript.com/ns?v1">
    <attr name="contractid" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@contractid" />
    <attr name="description" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@description" />
  </api:info>
  <api:set attr="DataModel" value="RELATIONAL" />
  <api:set attr="URI" value="https://my.soap.endpoint/service.asmx?WSDL" />
  <api:set attr="PushAttributes" value="true" />
  <api:set attr="EnablePaging" value="true" />
  <api:set attr="Header:Name#" value="SOAPAction"…
Hi, we are trying to connect to several CSV files stored in a local folder. While we can successfully synchronize the data source and perform a full load in the ODX, we encounter an error when attempting to add the table to our data area (DSA). The issue lies in the path to the Parquet file stored in Azure. The correct path should be: CSV_DNB/csv_*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet However, the path TimeXtender is looking for is: CSV_DNB/csv_^*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet It seems that TimeXtender misinterprets the automatically generated name and adds a ^ character. I also attempted to use a specific file aggregation pattern, such as H100.*.csv (all files in the folder have the prefix H100 followed by a random number), but I encountered the same error. Is there a way to specify the name of the table generated in the ODX? It seems like the "File aggregation pattern" is the issue. Do you have any idea how to fix this? -Execute E…