I use the CSV provider to load a folder with multiple CSV files. I know one of the files has 17471 rows, but the ingest task only loads the first 14521 rows. I don't have any filtering in the ingest task. Does anyone know how I can load the full file?
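A common cause of this kind of mismatch is that the "expected" row count was taken with a line-based tool (or a naive loader), while some fields contain quoted, embedded newlines. As a diagnostic sketch (not TimeXtender-specific), comparing the physical line count with the parsed CSV record count can confirm or rule this out:

```python
import csv
import io

def diagnose_row_counts(text: str) -> dict:
    """Compare physical line count with parsed CSV record count.

    A mismatch usually means some fields contain quoted, embedded
    newlines, which line-based tools count as extra rows."""
    # Physical lines, as a line counter or naive reader would see them.
    physical = len(text.splitlines())
    # Logical CSV records, honoring quoted fields with embedded newlines.
    logical = sum(1 for _ in csv.reader(io.StringIO(text)))
    return {"physical_lines": physical, "csv_records": logical}

# A record whose second field contains an embedded newline:
sample = 'id,comment\n1,"line one\nline two"\n2,plain\n'
print(diagnose_row_counts(sample))  # physical_lines=4, csv_records=3
```

If the two counts differ on the problem file, the provider is likely parsing correctly and the 17471 figure counted raw lines; if they match, the rows really are being dropped somewhere in the ingest.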
Today, we’ve published a minor release of the TDI Portal that contains the changes listed below.

Improved
- When adding a Prepare instance from a blueprint, you can now map the Ingest instances at the same time. That saves you the additional work of remapping the instances the first time you open the Prepare instance in the TDI desktop application.

Fixed
- It was possible to try to create firewall rules for deleted instances, though the attempt would fail. Deleting a firewall rule for a deleted instance would fail. Rules related to deleted instances are now read-only until they are automatically permanently deleted along with the instance.
- In the Table Builder on a REST data source, clicking Get Schema would result in a "not implemented" error.
Hi, We are currently experiencing a problem using multiple BC (NAV) adapters at the same time. The two connectors are set up with individual app registrations, as one is in the same Azure tenant from which we are kicking off the execution and one is in a different tenant altogether. When I execute these tables one at a time, or one source at a time, it runs smoothly with no issues. The issue is trying to run both sources at the same time. When I try that, it gives the following error: The connectors are set up correctly, as I am able to run them separately. Does anyone have a fix or workaround for this issue? For the record: we are on version 20.10.44.64 and we are using Business Units.
In this article, you will learn how to install a Power BI refresh package. To be able to do this, you need to create an app registration in the Azure Portal, which you will use to access Power BI. This is done under Azure Portal > Azure Active Directory > App Registration.

Create an app registration
1. Open up the Azure Portal.
2. Search for 'App Registration'.
3. For this example, name the application 'Test' and click Register at the bottom of the page.
4. Click on Certificates & secrets, then click on New client secret.
5. Add a description of your client secret and specify when this key should expire. Then click Add.
6. Add the following permissions.
7. On the next page, copy your secret value.

In Power BI
1. Go to your workspace in Power BI and, in the upper right corner, select Access.
2. Search for your app registration and click Add.

Next, open up TimeXtender Orchestration and right-click Orchestration > Packages, select New > Power Bi Refresh, and then select a name for the Power Bi Refr
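Under the hood, the app registration's client credentials are what authorize the refresh against the Power BI REST API. As a rough sketch of what the package does for you (the tenant/client IDs and group/dataset IDs below are placeholders, and the HTTP calls are only described, not performed):

```python
from urllib.parse import urlencode

TENANT_ID = "<your-tenant-id>"         # from the app registration's Overview page
CLIENT_ID = "<your-client-id>"         # the 'Test' app registration
CLIENT_SECRET = "<your-secret-value>"  # the secret value copied earlier

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Client-credentials token request against the Azure AD v2.0 endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the Power BI permissions granted to the app
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    })
    return url, body

def build_refresh_request(group_id: str, dataset_id: str) -> str:
    """URL to POST to (with the bearer token) to start a dataset refresh."""
    return (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
            f"/datasets/{dataset_id}/refreshes")

url, body = build_token_request(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
# POST `body` to `url` as application/x-www-form-urlencoded, read
# `access_token` from the JSON response, then POST to the refresh URL
# with an 'Authorization: Bearer <token>' header.
```

This also explains why the app registration needs both the API permissions and workspace Access: the token only works against workspaces the app has been added to.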
As stated in this announcement, we have decided to discontinue support for CData providers in TimeXtender, and have been busy developing a broad range of new and powerful TimeXtender data source providers to ensure a more flexible and reliable alternative to CData. Below is a table showing CData providers (including the latest version available in TimeXtender) alongside their recommended replacements. While our goal is to eventually replace all CData providers with TimeXtender enhanced data source providers, development is still ongoing. In the meantime, where a TimeXtender enhanced provider is not yet available, other alternatives are provided. For more information on how to change a data source provider, see this article.

Provider               | Latest Version | Alternatives
Act! CRM               | 24.0.8963.0    | TimeXtender REST
Amazon Redshift        | 24.0.8963.0    | Odbc Data Provider, OleDb Data Provider
API                    | 24.0.8963.0    | TimeXtender REST
Azure Active Directory | 24.0.8963.0    | OLE DB Provider for Microsoft Directory Services, TimeX
Hi, I’m trying to use the transformation Set replace value in the Table Builder, described in the tutorial like this: I tried a simple SQL string function, LEFT(NodeName,5). It looks proper in the XSLT output: The execution worked fine, with no errors, but both columns, PatientNameHash and PatientName, produce the same result in the Ingest table: the full name. Does anyone know how to format a transformation in Set replace value? Is the content of the template “string-replace-all” available for viewing? BR Anders
Hi all! This morning, I retrieved data with the REST API of MoreApp.com. The API is quite straightforward. I encountered some challenges that I'd like to share with you. If there’s anything incorrect in this description, please let me know so I can adjust it.

Preparation
I am using the TX REST connector version 7.1.0.0. You will also need an API key, which must be created by the administrator in the app. The data I need to retrieve is located at the base URL: https://api.moreapp.com/api/v1.0. The specific data I am fetching is "submissions", or filled-out surveys. The POST API used to fetch this data is structured as follows: https://api.moreapp.com/api/v1.0/customers/{customerId}/forms/{formId}/submissions/filter/{page}
- Customer is fixed.
- FormId needs to be looked up.
- Page is required for pagination.
Authentication is achieved by including the X-Api-Key in the header.

Step 1 – Retrieve the correct FormIds
You can do this via Postman or TimeXtender. It is a simple GET API: https://api.mo
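The paged submissions call described above can be sketched outside of TimeXtender as well, which is handy for testing the API key and pagination. In this sketch the customer ID, form ID, and API key are placeholders, and the POST body is left as an empty JSON filter (an assumption; check the MoreApp docs for filter options):

```python
import json
from urllib.request import Request

BASE_URL = "https://api.moreapp.com/api/v1.0"

def submissions_request(api_key: str, customer_id: str, form_id: str, page: int) -> Request:
    """Build the paged POST request for form submissions.

    Increment `page` and repeat until an empty result comes back."""
    url = (f"{BASE_URL}/customers/{customer_id}/forms/{form_id}"
           f"/submissions/filter/{page}")
    return Request(
        url,
        data=json.dumps({}).encode(),  # empty filter body (assumption)
        headers={"X-Api-Key": api_key,  # authentication header per the API
                 "Content-Type": "application/json"},
        method="POST",
    )

req = submissions_request("<my-api-key>", "<customerId>", "<formId>", page=0)
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) in a loop over increasing page numbers reproduces the pagination the TX REST connector needs to be configured for.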
It is possible to use OAuth 2.0 as the authentication method for the TimeXtender REST data source. One API that uses this is the Graph API.

Content
- Prerequisites
- Application setup
- Access Token URL
- Scope
- Initial setup
- Set up OAuth Authentication
- Main Endpoints: Users, Groups, Teams
- Set up pagination
- Dynamic endpoints: Users messages, Team members

Prerequisites
Using the Postman collection explained in the Use Postman guide is a good start, as the method is pretty much the same. What we will do is the application method, also known as client authentication.

Application setup
As mentioned above, you need to use application rights for client authentication, so the app you want to use for this must have the correct rights. Delegated rights are easier to set up, as they mostly do not require admin consent; that is not the case for most application rights, so get these rights granted before starting. I have one app where all the application rights are added. If you want access to groups and users
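The client-authentication ("application") method boils down to one token request plus bearer-authenticated GETs. A minimal sketch of the access token URL, scope, and main endpoints discussed in this article (tenant/client IDs are placeholders; the HTTP calls themselves are only described):

```python
from urllib.parse import urlencode

def graph_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Client-credentials ('application') token request for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests all application permissions granted to the app
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

# Main endpoints from this article; call with a GET request carrying an
# 'Authorization: Bearer <token>' header obtained from the request above.
ENDPOINTS = {
    "users":  "https://graph.microsoft.com/v1.0/users",
    "groups": "https://graph.microsoft.com/v1.0/groups",
    "teams":  "https://graph.microsoft.com/v1.0/teams",
}
```

In the TX REST data source, the token URL and the urlencoded fields above map onto the OAuth settings, and the endpoint URLs become the configured endpoints.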
Hi, I need to connect to Snowflake as a data source. I have downloaded the latest ODBC version for Windows from https://www.snowflake.com/en/developers/downloads/odbc/ and installed the provider. From XPilot, I get an answer on how to configure the ODBC data source, but I can't find the “Private Key” field mentioned in step 4. That field seems to be “missing” in the configuration dialog.

Step 3: Configure the ODBC Data Source with the Private Key
1. Open the ODBC Data Source Administrator application and go to the System DSN tab.
2. Press "Add" and select SnowflakeDSIIDriver. Click Finish.
3. In the Snowflake Configuration Dialog, provide a name for the data source.
4. Enter the connection details:
   - User: The Snowflake user name.
   - Private Key: The path to the rsa_key.pem file.
   - Database, Schema, Warehouse: Specify the database, schema, and warehouse.
   - Tracing: Set to 0.
5. Test the connection to ensure it is successful. Press OK.

Any suggestion on this? Where should I put the file path? Regards, Bjørn A.
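When the configuration dialog lacks a dedicated Private Key field, key-pair authentication can usually be set through the driver parameters directly, either in a DSN-less connection string or as extra values on the DSN (on Windows, under the DSN's registry key). A sketch of the relevant parameters, based on Snowflake's ODBC documentation (parameter names and values here are assumptions to verify against your driver version):

```ini
[SnowflakeDSN]
Driver        = SnowflakeDSIIDriver
Server        = <account>.snowflakecomputing.com
UID           = MY_USER
Database      = MY_DB
Schema        = PUBLIC
Warehouse     = MY_WH
; Key-pair authentication instead of a password:
Authenticator = SNOWFLAKE_JWT
Priv_Key_File = C:\keys\rsa_key.pem
```

The file path goes in the `Priv_Key_File` value; without `Authenticator = SNOWFLAKE_JWT` the driver will not attempt key-pair authentication at all.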
To transfer any data from a data source to an Ingest instance's storage, the first order of business is naturally to establish which tables and columns are available in your data source and which of these tables you could be interested in transferring. The overall term for this process in TimeXtender Data Integration (TDI) is synchronizing metadata, and it can be divided into three steps:
- Importing metadata: extracting a one-to-one copy of the metadata from the data source and storing it in the Ingest instance storage as a cache.
- Synchronizing metadata: reconciling the imported metadata cache with the working copy in the Ingest instance to handle schema drift.
- Selecting tables: curating the list of tables available for selection in Transfer tasks.
While this sounds fairly simple, the process involves keeping three versions of the metadata - the data source's original, the Ingest instance's cache, and the Ingest instance's working copy - in sync. This provides flexibility and robustness
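The reconciliation step can be pictured as a diff between the freshly imported cache and the working copy. This is an illustrative sketch only, not TDI's actual implementation; it models metadata as a mapping from table names to sets of column names:

```python
def reconcile(cache: dict, working: dict) -> dict:
    """Diff the imported metadata cache against the working copy.

    Both arguments map table names to a set of column names. The result
    lists what changed in the source since the last synchronization."""
    added_tables = sorted(cache.keys() - working.keys())
    dropped_tables = sorted(working.keys() - cache.keys())
    column_drift = {}
    for table in cache.keys() & working.keys():
        added = sorted(cache[table] - working[table])
        dropped = sorted(working[table] - cache[table])
        if added or dropped:
            column_drift[table] = {"added": added, "dropped": dropped}
    return {"added_tables": added_tables,
            "dropped_tables": dropped_tables,
            "column_drift": column_drift}

# The source gained an Email column and an Orders table, and dropped Invoices:
cache = {"Customers": {"Id", "Name", "Email"}, "Orders": {"Id", "Total"}}
working = {"Customers": {"Id", "Name"}, "Invoices": {"Id"}}
print(reconcile(cache, working))
```

Keeping the cache separate from the working copy is what lets this diff be computed without touching the data source again.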
The Get Schema function is not working for the REST endpoints / table flattening. I am trying to use version: 8.0.0.0. I get the following message: Any ideas what is happening?
Does anyone have an example of a successful connector for a regular Active Directory - not Azure AD? I can't find any information about it, but I guess many have made such a connection using TX. Regards, Anders Bengtsson
Hi, Perhaps I’m overlooking something, but is it possible to include (part of) the file names of Excel sources as a data column? The Excel files are combined using the aggregation function. TimeXtender version 6848.1, Excel Provider 22.0.0.0. BR, Michiel
This is a follow-up of Using XML to ingest data, which I have managed to solve. I need some help with creating a nested statement. The first rsd, which lists out all the IDs, is this:

<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- See Column Definitions to specify column behavior and use XPaths to extract column values from XML. -->
  <api:info title="contract" desc="Generated schema file." xmlns:other="http://apiscript.com/ns?v1">
    <attr name="contractid" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@contractid" />
    <attr name="description" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@description" />
  </api:info>
  <api:set attr="DataModel" value="RELATIONAL" />
  <api:set attr="URI" value="https://my.soap.endpoint/service.asmx?WSDL" />
  <api:set attr="PushAttributes" value="true" />
  <api:set attr="EnablePaging" value="true" />
  <api:set attr="Header:Name#" value="SOAPAction"
Hi, We are trying to connect to several CSV files stored in a local folder. While we can successfully synchronize the data source and perform a full load in the ODX, we encounter an error when attempting to add the table to our data area (DSA). The issue lies in the path to the Parquet file stored in Azure. The correct path should be: CSV_DNB/csv_*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet However, the path TimeXtender is looking for is: CSV_DNB/csv_^*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet It seems that TimeXtender is misinterpreting the automatically generated name and adds a ^ character. I also attempted to use a specific file aggregation pattern, such as H100.*.csv (all files in the folder have the prefix H100 followed by a random number). However, I encountered the same error. Is there a way to specify the name of the table generated in the ODX? It seems like the “File aggregation pattern” is the issue. Do you have any idea how to fix this? -Execute E