We are currently testing the new version 7017.1 to see if we can migrate our existing version 20 project. We have 30 identical Business Central environments running; in version 20 we had one adapter and then added the other 29 companies as additional adapters. I was hoping that by now some sort of functionality for this would have been implemented, but I can only find the clone function that we tried last year. The clone function is nice the first time, to create a new data source, but when you have to maintain all the data sources it would be a lot of work to add new tables or fields on all 30 data sources. Is this kind of functionality being worked on, or are we forever stuck on the classic version?
Prerequisites
Create an Azure App Registration. The App Registration should use the Microsoft Graph API with the following permissions:
Finding the List ID
Use Postman to find the Site ID, then use the Site ID to find the List ID. You can find both by using Postman and the Graph API collection Graph Fork. Use the app to connect to this and locate the SharePoint folder under Application. In there you have two requests, https://graph.microsoft.com/v1.0/sites and https://graph.microsoft.com/v1.0/sites/{{SiteID}}/lists . Use sites to find the Site ID and use the Site ID to find the List ID. You can also find the List ID by navigating to the SharePoint list, clicking List settings, and copying the List ID from the URL.
Use the TimeXtender REST data source connection
Connection Settings
Use the following Base URL (where {{SiteID}} is the Site ID found in Postman): https://graph.microsoft.com/v1.0/sites/{{SiteID}}/lists
Endpoints
Create the endpoint using the following
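For reference, the same two lookups can be scripted outside Postman. The sketch below assumes an access token has already been obtained from the App Registration (client credentials flow); the search term, token, and Site ID placeholders are illustrative only.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
access_token = "<token from the App Registration>"  # placeholder
headers = {"Authorization": f"Bearer {access_token}"}

# 1) Find the Site ID (same call as the Postman 'sites' request).
#    The search term narrows the result; adjust it to your tenant.
sites = requests.get(f"{GRAPH}/sites?search=MySite", headers=headers).json()
for site in sites.get("value", []):
    print(site["id"], site["webUrl"])

# 2) Use the Site ID to list the site's lists and find the List ID.
site_id = "<SiteID from step 1>"  # placeholder
lists = requests.get(f"{GRAPH}/sites/{site_id}/lists", headers=headers).json()
for lst in lists.get("value", []):
    print(lst["id"], lst["displayName"])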
At times you may want to share a specific data source setup with someone else. Here is how this can be done.
Create the project with the data source
Export the project and note the version of the project
Import the project
Create the project with the data source
Let's say you have a REST data source made with the Enhanced TX REST data source provider and it needs to be shared with someone without access to the machine where TX is installed. First you note the specific data source, or sources if additional ones are necessary. Then you use the Save As option. With this method you create a new project with the same setup; the only difference is its name. After creating the new project there are still some things to do before you are ready to export it. First you delete the unnecessary parts, such as Semantic layers and Data warehouses. Delete all execution packages so only the default one remains. Finally, you delete all the unneeded data sources and any custom tables added
Use the deliver instance endpoint name variable, in combination with data selection rules, to perform different data selections for different endpoints in the same Deliver instance. This way, each department can get its own endpoint with a subset of data relevant to its division. This can help improve load performance and report performance, and helps manage costs by keeping individual models smaller. It also avoids creating multiple Deliver instances for different departments.
How to use data selection rules to filter data for specific endpoints
Create a Deliver instance with multiple endpoints. Close all instances in TimeXtender Data Integration, then open the Deliver instance in TimeXtender Data Integration and click Tools > Instance Variables. Click Add and add an instance variable of type Destination Scope, setting the Value Filter to Endpoint and the Value to Name. Then click OK and close. Right-click the table you would like to create a filter on and select Add Data Sele
There is a free currency API called Fixer IO (https://fixer.io/). You can connect to it by generating an API key; creating an account is free. The documentation can be found here: https://fixer.io/documentation
This guide covers how to set up a REST data source, how to generate dynamic values from SQL queries, and how to use additional parameters in your calls to get different information out of the API.
Content
Set up Fixer IO account
Set up Common settings
Set up Request limits
Set up endpoints
Latest - Gives today's rates and shows how to apply parameters
Yesterday - Gives historical rates based on a specific date. Uses a SQL query to dynamically apply yesterday as the date
Timeseries - Gives historical rates based on a start and end date. Uses a SQL query to generate dynamic dates
Custom range endpoint - Use a function in the DW instance to generate a range of dates to loop through
Use XSLT to do table flattening
Use Unpivot on your XSLT
Change the content of the rates to show decimal p
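For reference, here is a minimal Python sketch of the calls the Latest and Yesterday endpoints make, assuming the data.fixer.io base URL and the access_key query parameter described in the Fixer documentation; the API key and symbols are placeholders.

from datetime import date, timedelta
import requests

BASE = "https://data.fixer.io/api"          # Fixer API base URL (per the Fixer documentation)
params = {"access_key": "<your API key>"}   # placeholder; free accounts authenticate via this query parameter

# Latest rates (the 'Latest' endpoint; extra parameters such as symbols filter the result)
latest = requests.get(f"{BASE}/latest", params={**params, "symbols": "USD,GBP,DKK"}).json()

# Yesterday's rates (the 'Yesterday' endpoint; in the guide this date comes from a SQL query)
yesterday = (date.today() - timedelta(days=1)).isoformat()
historical = requests.get(f"{BASE}/{yesterday}", params=params).json()

print(latest.get("rates"), historical.get("rates"))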
Hi. Is it possible to move my current community account to my new e-mail address? I could not find anywhere to update this on my profile.
I’m looking for a TimeXtender Fabric Lakehouse (pyspark.sql) expert 😀 I’ve got a SQL transformation that selects the maximum value of 10 columns:
(select max(val) from (values ([Column_1]), ([Column_2]), ..., ([Column_10])) as [Values](val))
How do I achieve this in a Prepare Lakehouse custom transformation? I could build the mother/father of all massive case statements, but I’d prefer something simpler and more elegant … if possible!?
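One possible approach, sketched below, uses PySpark's built-in greatest() function, which returns the row-wise maximum across columns. The column names are assumed to match the SQL example, and the sample DataFrame only stands in for the Prepare table.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
cols = [f"Column_{i}" for i in range(1, 11)]

# Small sample frame standing in for the Prepare table (assumed column names)
df = spark.createDataFrame([tuple(range(1, 11)), tuple(range(10, 0, -1))], cols)

# Row-wise maximum across the ten columns, equivalent to the SQL VALUES/MAX pattern
df = df.withColumn("MaxValue", F.greatest(*[F.col(c) for c in cols]))
df.show()

If the custom transformation expects a Spark SQL expression rather than DataFrame code, the same idea can be written directly as greatest(Column_1, Column_2, ..., Column_10).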
I'm using the dynamic values function in the TX REST connector 9.1.0.0. I use IDs from another endpoint to loop through in my second endpoint path. This works well when I use “From Endpoint Table”, but now I want to add a filter to only get the IDs with the flag “hasresponse=true”. I've read the documentation page, but I still get an error with my endpoint query: “No such table”, the error message says. I've tried several things, like adding a schema, but all with the same response. Is there something wrong with my syntax? Error:
Today, we released a minor version of TimeXtender Data Integration (v. 7047.1) with the changes listed below. We recommend that you upgrade if you’re affected by any of the issues fixed.
Fixed
Fixed an issue for Prepare instances using Snowflake storage where Snowflake would throw an error on deployment when using differential deployment.
Deployment would fail on Prepare instances using Snowflake or Data Fabric storage if custom data was present. Custom data is only supported on SQL storage, so this issue would appear when changing storage from a SQL Server storage with custom data. It’s now possible to delete custom data when it is not supported by the storage type.
For Prepare instances, fixed an issue where the ‘DW_SourceCode’ field was missing in the primary key check.
For Deliver instances, updated the logic for the Power BI endpoint connection string, so you can now use the gateway in the Power BI portal.
Hi, We are using a REST data source with dynamic values to fetch data for approximately 12,000 records, where each record corresponds to an individual API call. Occasionally, we encounter issues with the API provider where some calls return a 400 error. Unfortunately, it’s unpredictable which dynamic values will cause these errors. Currently, it appears that the TimeXtender REST data source stops processing upon encountering a 400 error, resulting in a "Completed with errors" status. Is there a way to configure the REST data source to continue processing the remaining dynamic values, skipping over those that result in a 400 error? Ideally, it would also log the calls that failed for review.
Best regards,
Pontus
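For illustration only (outside the TimeXtender data source, which is a configuration question), the behaviour being asked for looks roughly like the sketch below: iterate the dynamic values, skip calls that return 400, and log the failures. The endpoint URL and IDs are placeholders.

import logging
import requests

logging.basicConfig(filename="failed_calls.log", level=logging.WARNING)

BASE_URL = "https://api.example.com/records/{id}"   # placeholder endpoint
dynamic_values = ["1001", "1002", "1003"]           # placeholder for the ~12,000 IDs

results = []
for value in dynamic_values:
    response = requests.get(BASE_URL.format(id=value))
    if response.status_code == 400:
        # Log and skip this dynamic value instead of stopping the whole run
        logging.warning("Skipped id %s: HTTP 400 %s", value, response.text[:200])
        continue
    response.raise_for_status()
    results.append(response.json())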
Good day, We have a client on V20 that would like to incorporate some sensitive data within their TX environment that some of the TX developers should not have access to. As far as I am aware, if you have access to the TX repository, developers can open all projects. How should I handle sensitive data in TX?
Kerry
I have a client that wants to explore a significant rework of their Prepare instance. They want to duplicate their existing Prepare instance in Dev to create a sandbox instance. Is there a recommended approach to achieve this? They are using version 6898.1
Hi, what is the best-performing way to extract data from SAP Datasphere? I read about an API connection (OData), but is this the ‘optimal’ way to do this?
https://community.sap.com/t5/technology-q-a/how-to-export-data-from-sap-datasphere-or-its-database-sap-hana-cloud-to/qaq-p/13708728
https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html
Best Regards,
Peter
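For context, the OData approach in the linked pages boils down to paged HTTP GETs against the consumption endpoint Datasphere exposes. A generic, non-SAP-specific sketch follows; the service URL, entity set, credentials, and page size are all placeholders.

import requests

BASE = "https://<datasphere-odata-service-url>"  # placeholder: OData consumption URL exposed by Datasphere
ENTITY = "<EntitySet>"                           # placeholder entity set / view name
auth = ("<user>", "<password>")                  # or an OAuth bearer token, depending on the setup

rows, skip, page_size = [], 0, 5000
while True:
    # Standard OData paging via $top/$skip; adjust to whatever paging the service supports
    resp = requests.get(
        f"{BASE}/{ENTITY}",
        params={"$top": page_size, "$skip": skip, "$format": "json"},
        auth=auth,
    )
    resp.raise_for_status()
    batch = resp.json().get("value", [])
    rows.extend(batch)
    if len(batch) < page_size:
        break
    skip += page_size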
This is a follow-up of Using XML to ingest data, which I have managed to solve. I need some help with creating a nested statement. The first RSD, which lists out all the IDs, is this:
<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- See Column Definitions to specify column behavior and use XPaths to extract column values from XML. -->
  <api:info title="contract" desc="Generated schema file." xmlns:other="http://apiscript.com/ns?v1">
    <attr name="contractid" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@contractid" />
    <attr name="description" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@description" />
  </api:info>
  <api:set attr="DataModel" value="RELATIONAL" />
  <api:set attr="URI" value="https://my.soap.endpoint/service.asmx?WSDL" />
  <api:set attr="PushAttributes" value="true" />
  <api:set attr="EnablePaging" value="true" />
  <api:set attr="Header:Name#" value="SOAPAction"
Hi, We are trying to connect to several CSV files stored in a local folder. While we can successfully synchronize the data source and perform a full load in the ODX, we encounter an error when attempting to add the table to our data area (DSA). The issue lies in the path to the Parquet file stored in Azure. The correct path should be:
CSV_DNB/csv_*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet
However, the path TimeXtender is looking for is:
CSV_DNB/csv_^*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet
It seems that TimeXtender is misinterpreting the automatically generated name and adds a ^ character. I also attempted to use a specific file aggregation pattern, such as H100.*.csv (all files in the folder have the prefix H100 followed by a random number), but I encountered the same error. Is there a way to specify the name of the table generated in the ODX? It seems like the “File aggregation pattern” is the issue. Do you have any idea how to fix this? -Execute E