The Table builder is a powerful tool in the REST and JSON & XML data source providers that you can use to transform nested XML or JSON data into a flattened table with columns and rows, using simple drag-and-drop functionality to define the underlying XSLT code. This guide walks you through the features and steps involved in using the Table builder. To use it, you need a REST or JSON & XML data source set up and mapped to a running Ingest instance.

Add table flattening

The first step is to add a flattened table to an endpoint. Open your data source, expand the endpoint, click Table flattening under Additional configurations, and then click Add. Give your table flattening a name (this will be the name of the resulting flattened table) and click the Open button to open the Table builder interface. The Table builder interface consists of five sections:

Input: Here, you paste in the XML or JSON you want to flatten and select the applicable data structure for your input data
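To make the idea of flattening concrete, here is a hedged sketch (the JSON, column names, and table name below are invented for illustration, not generated by the tool): nested input produces a flat table in which parent values repeat for each child row, and the result can then be queried like any other table.

-- Illustrative only: nested input such as
--   { "orders": [ { "id": 1, "customer": { "name": "Acme" },
--                   "lines": [ { "sku": "A1", "qty": 2 },
--                              { "sku": "B7", "qty": 5 } ] } ] }
-- flattens to one row per order line, repeating the parent values:
--
--   id | customer_name | sku | qty
--    1 | Acme          | A1  |   2
--    1 | Acme          | B7  |   5
--
-- Once flattened, the table behaves like any other table:
SELECT id, customer_name, sku, qty
FROM Orders_Flattened;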
Hi, I have two questions regarding Orchestration schedule groups.

(1) There is the option to base a schedule on a custom SQL script. When I try to test this functionality with the following statement:

SELECT CAST('2025-06-10 01:00:00' AS DATETIME)

it gives me the following pop-up: Does someone know why I don't get query result = '10-06-2025 01:00:00'? The SQL statement produces a valid datetime value in SSMS.

(2) I need to schedule a job that runs four times per day for a project. I wanted to build this schedule with the custom SQL option, using a calendar table in our database from the Prepare instance. Is this possible? Thanks in advance!
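For question (2), a hedged sketch of what such a custom schedule script might look like, assuming the script must return the next run time as a single DATETIME and that a dbo.Calendar table with one row per date exists (both are assumptions, not confirmed behavior):

-- Hypothetical sketch only: returns one DATETIME (the next run time)
-- from a calendar table crossed with four fixed run times per day.
SELECT TOP 1 CAST(CONCAT(c.[Date], ' ', t.[Time]) AS DATETIME) AS NextRun
FROM dbo.Calendar AS c
CROSS JOIN (VALUES ('06:00'), ('12:00'), ('18:00'), ('23:00')) AS t([Time])
WHERE CAST(CONCAT(c.[Date], ' ', t.[Time]) AS DATETIME) > GETDATE()
ORDER BY NextRun;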
With the TimeXtender REST data source, you can gain access to REST APIs. Many APIs have a unique setup and specific behavior; this article describes the settings for connecting to various APIs. Connect to REST APIs with various authentication methods such as header options, bearer tokens, OAuth, and similar. If the REST API contains multiple resources, such as users, posts, and comments, you can connect to each one as an individual endpoint by providing the base URL. The data source offers pagination options to connect to each page of an endpoint. In some cases, endpoints require the input of a date or an ID from another endpoint; configure dynamic values within the data source to handle this requirement. At the bottom of the article are some real examples showing how to connect to APIs everyone should have access to. Additionally, they show how to use specific authentication options and dynamic values.

Content

TimeXtender REST Data Source Settings
Connection Set
One of the most powerful advantages of the TimeXtender software is how it uses project metadata to "remember" all of the objects in a project and their relations to one another. Another of TimeXtender's strengths is its ability to incorporate customized SQL code wherever the native functions are not sufficient to accomplish a task. Through the use of script parameters, both of these advantages can be combined.

What is a script parameter?
Creating a parameter
Understanding parameters
Mapping or renaming parameters
Viewing your parameters
Removing parameters
Manual delete
Automatic removal
Limitations of script parameters

What is a script parameter?

A script parameter creates a link between the script and a TimeXtender object. This link has two primary benefits. One benefit is that the link includes parameterized code in data lineage and impact analysis, and enables the application to automatically manage some object dependencies. The other main benefit is that the parameter replaces the object's actual name in the script, so a table or field can be renamed without the script having to change.
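As a hedged illustration of that second benefit (the object and parameter names here are invented, and the placeholder behavior is an assumption, not documented syntax): a custom view can reference a parameter name instead of the physical table name, and TimeXtender substitutes the mapped object's actual name when the script is deployed.

-- Hypothetical sketch: "CustomerTable" is a script parameter mapped to the
-- Customer table, so renaming that table does not break the view.
SELECT CustomerID, CustomerName
FROM CustomerTable        -- parameter placeholder, replaced at deploy time
WHERE IsActive = 1;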
If you want to give an AzureAD user account rights on a SQL Server, you will find it more difficult to add than a normal user.

How to add the AzureAD user to the SQL Server

If you try to add it, it might look something like this. And when you search for the user, you get nothing as well. What you do is simply write the user account name into the Login Name field and press OK. Afterwards, you can add the server roles and User Mapping like you normally would.

How to give the AzureAD user rights on the Analysis Server

If you also have a Cube or Tabular model, you immediately have an issue, because the trick that worked before is not possible here. As you can see, there is an Add button, but if you write your AzureAD user the same way, it will not be allowed and you get the following error. What you need to do is add a user group that the user is automatically part of. One user group that has this is called Interactive. So use that and it will work.

How t
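As a hedged aside, on platforms that support Azure AD authentication natively (Azure SQL Database, Azure SQL Managed Instance, or SQL Server 2022 with Azure AD configured), the GUI trick above also has a T-SQL equivalent; the account name below is a placeholder.

-- Create a login for an Azure AD account (placeholder name).
-- On Managed Instance or SQL Server 2022 with Azure AD configured:
CREATE LOGIN [user@contoso.com] FROM EXTERNAL PROVIDER;
-- On Azure SQL Database, create a contained database user instead:
CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;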
When you are creating custom scripts or views, you can use a parameter that converts all instances of a name to what it contains, and set it to the actual object name. The bonus of this is that you can change the name of fields and tables without having to change anything in the script. It also makes it possible to auto-map custom views, and if you choose to delete something that the view uses, you will get a notification about this as well.

Parameters

In essence, you can use parameters every time you have this screen, which appears whenever you do something custom. Below is a list of features that use this screen, or variations of it:

Custom Views
Script Actions
Stored Procedures
User defined functions
Custom Table Inserts
Custom Field transformations
Custom Data selection rules
Deliver Instance Custom Scripts
Calculated Measures
Derived Measures
Script Commands
Dynamic variables

You can use it in many ways.

Add parameters to a view or script

Say you want to join two tables in a view, as sketched below.
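A hedged sketch of such a join (all names are invented for illustration; each placeholder would be a parameter mapped to the actual table, so either table can be renamed without editing the script):

-- Hypothetical: "Orders" and "Customers" are parameter placeholders
-- mapped to tables, substituted with the real names at deploy time.
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM Orders AS o
INNER JOIN Customers AS c
    ON o.CustomerID = c.CustomerID;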
Hi, I'd like to make a new data source connection using REST to connect to our Azure Storage table. So far I have managed to get the connection set up; however, pagination remains an issue. When testing our connection and following the steps in the debug logging file, I see that when a table has a next "x-ms-continuation-NextPartitionKey" and "x-ms-continuation-NextRowKey", it will return these in the header (not the body). But if the table doesn't have them, it will stop, since the "x-ms-continuation-NextPartitionKey" header was not found. As I am applying these pagination parameters as query parameters, I need to be able to solve this issue. Is there maybe a way to apply a default value to these variables (manipulating the query doesn't work, as the variable itself already won't be found), or some way to dynamically replace a URL? Example table with no NextPartitionKey/NextRowKey: When the variable value is replaced with a default: Thanks for any input someone can provide. Kind regards, Robbert
I am loading a table incrementally into my ODX storage in an Azure Data Lake, where new Parquet files are added daily. I use this approach because the source only holds data for two weeks, and I want to maintain a log in the ODX. The Parquet storage is very compact. However, for downstream analysis, I only need to retrieve data from the last 1 to 2 days into my Prepare instance. I am using a data selection rule on the mapping, and I have also tried applying it directly on the table. Both approaches take a very long time to complete (over an hour), whereas running the same query on the source SQL database, filtering for 2 days of data, completes in about 10 seconds. I suspect that the Prepare instance is scanning through all the Parquet files, including older days, causing the slow performance. My question: is there a way to configure the TX Prepare instance to only process the most recent X Parquet files (e.g., the last 2 days) instead of scanning all files? This would significantly improve the performance.
Dynamic project variables allow you to create a SQL script that returns a single value, then use that value anywhere in your project, from data selection rules to execution package conditions. In this example, we will create a variable that returns the current date.

Set up a dynamic variable
How to use your variable
Create a Fixed GetDate() variable
Add a variable in another dynamic variable script
Use the Value option
Apply the variable as a data selection rule

Set up a dynamic variable

To create a dynamic project variable, right-click on the top-level project node and select Project Variable. In the Project Variable window, click Add. Give your variable a friendly name and select Dynamic from the Type dropdown. Choose when to resolve the variable from the Resolve Type dropdown menu. There are three types:

Every time: The variable will be recalculated every time it is called. This is the default value.
One Time: The variable will be calculated once, the first time it is called. That value is then used for every subsequent call.
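For the current-date example, the variable's script can be as small as the following sketch (the column alias is illustrative); the key constraint from the article is that the script must return exactly one value.

-- Dynamic variable script: returns a single value, here today's date.
SELECT CAST(GETDATE() AS DATE) AS CurrentDate;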
Hello. We are working on adding PBI datasets to the Orchestration process. I have followed these instructions, and up until no. 3 in the Power BI part, everything has been a success. However, I'm supposed to choose Orchestration > Packages and select New > Power BI Refresh, but Power BI Refresh is not an option: Is there a particular reason for this? I have added Power BI Refresh to the workspace (as a member currently; does it have to be an admin? That is not specified in the instructions, at least). I'm on TimeXtender Data Governance version 24.3.0.87. Best regards.
I'm using the dynamic values function in the TX REST connector 9.1.0.0. I use IDs from another endpoint to loop through in my second endpoint's path. This works well when I use "From Endpoint Table", but now I want to add a filter to only get the IDs with the flag hasresponse=true. I've read the documentation page, but I still get an error with my endpoint query: the error message says "No such table". I've tried several things, like adding a schema, but all with the same response. Is there something wrong with my syntax? Error:
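For reference, the shape of query being attempted looks roughly like the sketch below (table and column names are illustrative, not the actual ones; the "No such table" error suggests the FROM clause may need to match the source endpoint's table name exactly, which is an assumption):

-- Illustrative endpoint query: filter the first endpoint's IDs.
SELECT id
FROM responses            -- assumed to match the endpoint table name
WHERE hasresponse = 'true';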
Hi, what is the best-performing way to extract data from SAP Datasphere? I read about an API connection (OData), but is this the optimal way to do it?
https://community.sap.com/t5/technology-q-a/how-to-export-data-from-sap-datasphere-or-its-database-sap-hana-cloud-to/qaq-p/13708728
https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html
Best regards, Peter
This is a follow-up to Using XML to ingest data, which I have managed to solve. I need some help with creating a nested statement. The first RSD, which lists out all the IDs, is this:

<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- See Column Definitions to specify column behavior and use XPaths to extract column values from XML. -->
  <api:info title="contract" desc="Generated schema file." xmlns:other="http://apiscript.com/ns?v1">
    <attr name="contractid" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@contractid" />
    <attr name="description" xs:type="string" readonly="false" other:xPath="/Envelope/Body/contracts/contract@description" />
  </api:info>
  <api:set attr="DataModel" value="RELATIONAL" />
  <api:set attr="URI" value="https://my.soap.endpoint/service.asmx?WSDL" />
  <api:set attr="PushAttributes" value="true" />
  <api:set attr="EnablePaging" value="true" />
  <api:set attr="Header:Name#" value="SOAPAction"
Hi, we are trying to connect to several CSV files stored in a local folder. While we can successfully synchronize the data source and perform a full load in the ODX, we encounter an error when attempting to add the table to our data area (DSA). The issue lies in the path to the Parquet file stored in Azure. The correct path should be:

CSV_DNB/csv_*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet

However, the path TimeXtender is looking for is:

CSV_DNB/csv_^*/DATA_2024_11_28__11_09_50_2219585/DATA/DATA_0000.parquet

It seems that TimeXtender is misinterpreting the automatically generated name and adds a ^ character. I also attempted to use a specific file aggregation pattern, such as H100.*.csv (all files in the folder have the prefix H100 followed by a random number), but I encountered the same error. Is there a way to specify the name of the table generated in the ODX? It seems like the "File aggregation pattern" is the issue. Do you have any idea how to fix this?

-Execute E