Tutorial

TimeXtender REST data source

Thomas Lind
Community Manager

Warning: This article refers to Jobs and the TimeXtender Jobs API, which have been deprecated. Please schedule TDI execution packages via TimeXtender Orchestration rather than using Jobs.

With the TimeXtender REST data source, you can gain access to REST APIs. Many APIs have a unique setup and specific behavior, so this article describes the settings used to connect to various APIs.

Connect to REST APIs with various authentication methods such as header options, bearer tokens, OAuth, and similar. If the REST API contains multiple resources, such as users, posts, and comments, you can connect to each one as an individual endpoint by providing the base URL. The data source offers pagination options to connect to each page of an endpoint. In some cases, endpoints require the input of a date or an ID from another endpoint; configure dynamic values within the data source to handle this requirement.

At the bottom of this article are some real examples showing how to connect to APIs everyone should have access to. Additionally, they show how to use specific authentication options and how to use dynamic values.


TimeXtender REST Data Source Settings

Add a data source in the Portal and select the TimeXtender REST data source. Add a name and description for the data source.

Connection Settings

Here you add the base URL for the API to connect to. In the URL https://gorest.co.in/public/v2/users, the base URL is https://gorest.co.in; the remainder (i.e. /public/v2/users) is the part that will be used for the Users endpoint.

Below this are the other configurations you can set besides Endpoints. These are explained in the Additional Configurations section.

Import Swagger

This feature becomes available once you have mapped the data source to an Ingest instance. When you click on the icon, the following menu is shown.

The Import Swagger feature will try to parse an OpenAPI definition and auto-create settings and endpoints. There are three ways to provide an OpenAPI definition.

  • Url: A URL that links directly to the definition. The URL is accessed from the selected Ingest service.
  • Local file path: A path to a file that is accessible from the Ingest service that contains the definition.
  • Json: Copy and paste the definition into this window. Using this has a size limit of 250,000 characters.

Click Next to start the processing of the definition. On the next page select what to import.

Additional Configurations

Authentication

Click on the Authentication step to make the option appear.

Set up authentication by selecting one of the following options from the dropdown: None, Basic, Bearer, OAuth2, OAuth refresh token, or Authentication endpoint. Some authentication options, such as an APIKey header, need to be set up as a header instead.

  • None - No authentication will be used
  • Basic - Authenticate using a username and password
  • Bearer - Authenticate using a non-expiring Bearer Token
  • OAuth2 - Authenticate using OAuth 2.0 to gain access

    You will need to apply an Access URL that returns the token. Then choose how the Send request is done: either Credentials in the body or a Basic authentication header. You also need to choose how the Authorization token is returned: either in a Request header or in a Request URL. You can apply parameters by clicking the Add button; these could be client_id and client_secret. You have the option to mark a value as Is sensitive to obscure it when it is previewed.

  • OAuth refresh token - Use this option when a refresh token is needed to maintain access

    This is a similar setup to the OAuth2 option above; the difference is that you can set a Refresh token working directory path to store the refresh tokens in.

    The Initial access token is the value of the initial access token. The Initial access token created is the UNIX timestamp in seconds for when the initial access token was created. The Initial access token expires is how long the initial access token is valid (from when it was created), in seconds.
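
    For illustration (hypothetical values): a token created at 2024-01-01 00:00:00 UTC has an Initial access token created value of 1704067200, and with Initial access token expires set to 3600 it is considered valid for one hour after creation. If you need the current UNIX timestamp, a one-line SQLite query will return it:

        SELECT strftime('%s', 'now') AS unixTimestampNow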

  • Authentication endpoint - Authenticate using an endpoint
    The Authentication endpoint uses any endpoint defined in the data source to perform steps to authenticate. The selected endpoint will execute as any other endpoint with all the same features.

    Start by selecting an Endpoint from the dropdown. Only one endpoint can be selected, but the endpoint can be dependent on other endpoints; the dependent endpoints will be executed automatically.

    Then in the Endpoint data section, add different sources of data to get from the selected endpoint after it has executed.

    The Name entered here will be available as a dynamic value in the Apply data section. The Type is either XPath or Header. When using XPath, data is extracted with XPath from the result body of the request or from the flattened data. Select a Table flattening name if the data the XPath needs to apply to comes from a table flattening.

    The Manipulation query can be used to alter the data extracted in the Endpoint data section. This is a SQL query running in SQLite. The endpoint data can be referenced in the query by using curly braces and the endpoint data name: "select '{MyHeaderValue} some change' as AlteredMyHeaderValue".
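
    As a minimal sketch (assuming a hypothetical Endpoint data entry named Token), a manipulation query could compose a full header value from the extracted data:

        SELECT 'Bearer ' || '{Token}' AS BearerHeaderValue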

    The Apply data section defines where and how the values from endpoint data will be used. 
    Apply as dynamic values: This will make the named endpoint data and the columns from the manipulation query available to use as dynamic values in all other endpoints. 

    Add as header: This will add all the headers in the list to all requests in all other endpoints.

    Add as query parameters: This will add all the parameters in this list to all requests in all other endpoints. 

    Behavior defines when to execute the authentication endpoint. 
    Run once and on demand: Will run before the first request, and will execute again only if the request returns a 401. This should be sufficient for most use cases. 
    Run every x minutes: This will run before the first request and then wait the specified number of minutes before it executes again (during execution).
    Run before every request: This will run before the first request and before every subsequent request.

Headers

Add all the headers the API requires to return the desired output.

Pagination

Many APIs have limits on how many rows are returned and this option will help ensure all rows are returned.

  • Parameters - Add the necessary parameters such as page or page size.

    Set the type of parameter that contains the value. It can be XPath, which would be similar to /meta/pagination/page, or Header, which could be links-next. If any of the parameters are not found, the pagination will stop.

  • SQL Expression - Create a SQL query that returns some value. Similar to the above, this requires setting up a parameter action to handle what is returned. Add a Connection string to the server and enter the username and password. Set SQL or Snowflake as the target type to run the query against.

  • Built-in value options - There are two options to help with increasing a value to use for pagination: {TX_NextPage} and {TX_CurrentPage}. Both are usable as the value option in query parameters, such as a page parameter equal to {TX_NextPage}, or in a SQL query where you increase the current value by a number, such as SELECT {TX_CurrentPage} * 1000 AS offsetValue. On a given page where you use both options, if {TX_NextPage} is equal to 2 then {TX_CurrentPage} would be equal to 1. There are two guides at the bottom of this article that show how to use either one, and a short sketch follows this list.

  • Apply parameters - You have some options that can be set to handle what is returned from a parameter or a SQL Expression.

    • Don’t do anything - If you select none of the options
    • Replace URL - This will replace the current URL with the returned value
    • Add as query parameter - Select this to add the value such as page number to the URL

    • Replace post body - Select this to configure what the post body will look like on all subsequent requests. This is done on each endpoint: the option is enabled generally, but you need to configure the replace post body for each endpoint you have.
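
As a minimal sketch of the built-in value options (assuming a hypothetical API that accepts an offset query parameter and returns 1000 rows per request), a SQL Expression could calculate the next offset like this:

    SELECT {TX_CurrentPage} * 1000 AS offsetValue

On the first repeated request {TX_CurrentPage} is 1, so the offset becomes 1000. Combine this with the Add as query parameter action to apply the returned value to each request.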

Connection variables

Connection variables are values that are available and can be used in each endpoint, similar to how dynamic values work within one endpoint (see the Dynamic values section for more information). Note that if a connection variable is set to Is Sensitive, its values cannot be used in queries and table flattenings.
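
For example, a connection variable named BaseCurrency (a hypothetical name) could be referenced as {BaseCurrency} in the query parameters or headers of every endpoint.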

Caching

The caching options decide how the downloaded data is handled. If In Memory is selected, all the downloaded data will be kept in memory, and processing such as XSLT will also happen in memory. If the downloaded data is large, the machine can run out of memory. If this happens, switch to File type caching.

Besides saving memory, storing the data in a file helps identify the output being returned and assists in troubleshooting during the initial setup.

Adding a cache path will store all the files in a folder named after the data source.

Request limits

Some APIs have limits on the number of calls allowed, and this setting helps to manage this. You can set limits for requests per second, minute, hour, or day.

Misc

If necessary, you can set up retries and timeouts for the API calls.

Debug Logging

When debug logging is enabled, the data source will write a detailed log file to disk in the path specified. The log file can be inspected to determine how complex endpoints behave. It can also be a useful tool to provide support with detailed information that can otherwise be difficult to obtain. This should only be enabled when needed, as the files can get very big.

Once checked, you can add a path to the folder you want to store the file in. The file is named after the data source and the time the task was started.

Endpoints

An endpoint is a specific part of the API that is being accessed. As explained in the Connection Settings section, it is what comes after the base URL. Most APIs have a list of endpoints documented somewhere so you know what options are available.

The data source will not work if you do not have an endpoint added.

When you set the path of an endpoint, for example, API/users, the name will automatically be applied as users. You can then edit the name afterward to make it Users or All Users.

Click Add endpoint to further configure the endpoint.

Data Format

Data Format defines what data format the REST endpoint returns. There are several choices:

  • Auto: This will automatically determine if the data format is JSON or XML. If it is unable to classify it as JSON or XML, it will default to Text.
  • JSON: Force the data source to read the data as JSON.
  • XML: Force the data source to read the data as XML.
  • CSV: Force the data source to read the data as CSV. When selecting this format, a few settings will appear below the dropdown box.
  • Text: Force the data source to read the data as text. When reading as text, it will take the whole data content from the data source and save it without modification as a single row and a single column.

Dynamic values

There are three options: Query, Endpoint query, and From endpoint table.

The Query option can be used to loop over a set of values. Each row in the dynamic values will be one REST call (or many, if there are multiple pages). They can be used by referencing a column name in curly brackets, for example: {MyColumn}. The dynamic values can be used in Query Parameters (value), Post Body, Headers (name and value), Pagination (SQL Expression, Replaced URL and Replaced Post Body), Endpoint Path, and Table Flattening (Xslt).

Below is a simple example of how to use a SQL query to set dynamic values, where yesterday's date is used as an endpoint value.
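
A minimal sketch of such a query (written in T-SQL, assuming the SQL Expression runs against SQL Server and the API expects dates formatted as yyyy-MM-dd):

    SELECT CONVERT(varchar(10), DATEADD(day, -1, GETDATE()), 23) AS yesterday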

Notice the endpoint path uses the column name in curly brackets, {yesterday}; as long as the SQL query returns this name, it will be applied.

If a date range is needed (i.e. a start and end date with individual values), it can be done similarly by ensuring that two dates are returned by the SQL query.
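
For example (same assumptions as above, with a hypothetical /report path and parameter names), a single query can return both values for an endpoint path such as /report?from={startDate}&to={endDate}:

    SELECT CONVERT(varchar(10), DATEADD(day, -7, GETDATE()), 23) AS startDate,
           CONVERT(varchar(10), GETDATE(), 23) AS endDate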

Changing the Dynamic values source to Endpoint query lets you select one or more Source Endpoints (and optionally a Table flattening).

The data from these endpoints can then be queried using a SQLite query. The result of this query is then used as dynamic values. Note that all columns will be nvarchar and the table names will match those seen in TDI.
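
A minimal sketch of such a SQLite query, borrowing the All Jobs_TX_Root table with its Id and Name fields from the example further down (the LIKE filter is only an illustration):

    SELECT Id FROM [All Jobs_TX_Root] WHERE Name LIKE '%Daily%'

Each Id returned by the query then drives one call to any endpoint that references {Id}.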

You can also change the Dynamic values source to From endpoint table instead. Here you choose an already added data source endpoint. As with a query, the part in curly brackets {} is what it attempts to change, so it is important to reference a field that exists in the endpoint you point at.

Below, I point at an All jobs endpoint that has a table called All Jobs_TX_Root with two fields, Id and Name. So in my setup, I have added {Id} as the value to change. The reference is case-sensitive as well; adding {id} or {ID} would not work.

Query Parameters

This is an option to add additional values that are applied to the URL such as an access_key value or similar.

It will be applied like ?access_key=<value>, and if you have more than one parameter, they are joined with & signs.
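
For example, with an access_key parameter and a page parameter, the full request URL could look like https://gorest.co.in/public/v2/users?access_key=<value>&page=2 (the parameter names are illustrative; use whatever your API documents).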

Table flattening

A table flattening is an XSLT Transformation that can be used to transform (most notably to flatten) the data returned from the API.

You can use the Table builder to generate an XSLT document in the field, and the name it is given will become a table.

The flattening helps get fields from other areas into a table that would normally not have them.

How this works is explained in the Table Builder guide.

HTTP request

  • HTTP method - Set this to Get, Post, or Put to decide how the call is made. When this is set to Post, you will have to add what the post body will look like.

Search and replace

Search and replace can be used to perform one or more regex replace operations on the raw data before it is loaded or used to interpret metadata. This can for example be useful if the data contains some special characters that are not handled correctly.
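
For example (hypothetical field name, echoing the issue discussed in the replies below): if the API returns a JSON field named "Consumer dispatched to:", the colon is invalid once the data is converted to XML. A search pattern of Consumer dispatched to: with a replacement of Consumer dispatched to strips the offending character before the data is interpreted.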

Override headers

This can be used to override existing headers to values that are set for this endpoint.

Override pagination

This is similar to the data source pagination section, only here it is done to override the existing setup.

Misc

  • Perform exhaustive metadata scan - If this is checked, all pages and all dynamic values (if any) are executed during metadata operations. If not, only the first page and first dynamic values are executed.
  • Use endpoint name as schema name - This feature will add the name of the endpoint as the schema name. This is beneficial when the data source returns data with the same format despite using different endpoint paths. Be aware that if this is checked after the initial synchronize task has been done, it will generate a new table in the data source.
  • Empty fields as null - Checking this will convert empty fields to null.

Override caching

This is similar to the data source caching section, only here it is done to override the existing setup.

Delay

Here you can configure various delays for an endpoint. The Delay between requests in milliseconds will add a delay between each request to an endpoint. The Delay before first request in milliseconds will add a delay before executing the first request to an endpoint. Empty or zero means no delay for both types.

Real examples to use for TimeXtender REST data source

Dynamic value from SQL query

  • Here is an article showing how you can connect to a specific API called Fixer IO. The guide has many specific examples which use some of the options explained. Specifically, it uses SQL queries and XSL in the setup to gain the most from the API.

    Connect to Fixer IO

Dynamic value from endpoint table

  • Here is an article showing how you can connect to our own TimeXtender API. The guide has many specific examples which use some of the options explained. Specifically, it shows how to use dynamic values from another endpoint and it shows how to apply a header for authorization.
    Connect to TX API

Applying pagination, request limits, and using a bearer token for authentication

  • Here is an article showing how you can connect to a free API called GoREST. The guide has many specific examples which use some of the options explained. Specifically, it shows how to set up Pagination based on a header, how to use request limits, and how to apply a bearer token for authentication.
    Connect to GoREST

Using OAuth 2.0 client authentication and doing token-based pagination

  • Here is an article showing how you can connect to the Microsoft Graph API. The guide has many specific examples which use some of the options explained. Specifically, it shows how to set up Pagination based on a URL located in the result and how to use OAuth 2.0 Client authentication.
    Connect to Graph API

Using {TX_NextPage} for pagination when no next or last page option is known

  • Here is an article showing how you can connect to the openbrewerydb API. The guide has many specific examples which use some of the options explained. Specifically, it shows how to set up Pagination based on page and page size when no last or next page option is known.
    Connect to openbrewerydb

Using {TX_CurrentPage} for pagination when no last offset or next starting point is known

  • Here is an article showing how you can connect to the soda.demo.socrata.com API. The guide has many specific examples which use some of the options explained. Specifically, it shows how to set up pagination based on an offset and limit when there is no info about what the last offset value will be.
    Connect to soda Socrata

12 replies


This is huge! GREAT JOB!


  • Contributor
  • 52 replies
  • March 14, 2024

@Thomas Lind or @Christian Hauggaard, the new TimeXtender REST data source looks very promising and I am currently trying to set up an API with dynamic values. The dynamic values need to be applied in the second endpoint, using the output of the first endpoint.

The first endpoint needs to be flattened first, but I do not really get the hang of the XSLT format. Your example regarding the rates is clear, but what do I need to use in the XSLT in order to flatten the table? Would that be this complete string?

<xsl:stylesheet version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes" />
  <xsl:template match="/">
    <xbi_root>
      <xsl:apply-templates select="/TX_Autogenerated_Root/TX_Autogenerated_Element/rates" mode="rows" />
    </xbi_root>
  </xsl:template>
  <xsl:template match="/TX_Autogenerated_Root/TX_Autogenerated_Element/rates" mode="rows">
    <Rates>
      <xsl:element name="ALL"><xsl:value-of select="ALL" /></xsl:element>
      <xsl:element name="DKK"><xsl:value-of select="DKK" /></xsl:element>
      <xsl:element name="EUR"><xsl:value-of select="EUR" /></xsl:element>
      <xsl:element name="GBP"><xsl:value-of select="GBP" /></xsl:element>
      <xsl:element name="date"><xsl:value-of select="../date" /></xsl:element>
      <xsl:element name="timestamp"><xsl:value-of select="../timestamp" /></xsl:element>
    </Rates>
  </xsl:template>
</xsl:stylesheet>


Hello, I'm getting an error: Failed to execute endpoint: The ':' character, hexadecimal value 0x3A, cannot be included in a name.

The REST API is returning JSON:

[
  {
    "TicketTypeId": "e75de0ff-0e16-495b-89bc-81e964fdb068",
    "HandlerId": "5080e803-5d47-4a31-8876-0573c053ad12",
    "HandlerType": "Employee",

 

How do I select JSON instead of XML? Or is this not possible?


Thomas Lind
Community Manager
  • Author
  • Community Manager
  • 1017 replies
  • May 17, 2024

Hi @Jeroenski74 

The provider can read JSON, but not if any of the field names contain an invalid character like :, as the data gets converted to XML.
What field is it that contains a : character in its name? It does not seem to be one of the ones you shared.

In any case, you can use the Table builder tool (link here) to change the name or contents of fields if they can't be read.


Hi Thomas,

Thanks for the reply. I also found out that JSON was not the problem, though ":" shouldn't appear in a name in an API response, but that is beside the point and will be resolved. I have tried to use the table builder by only selecting the fields without the : in the name, or renaming them, but that didn't do the trick (same error). Also, the Builder already gives an error:

[
  {
    "TicketTypeId": "1",
    "HandlerId": "",
    "Consumer dispatched to:": "",
    "Number": "1009",
    "CreatedDate": "2024-05-16 10:42:55",
    "ReceivedDate": "2024-02-28 08:55:47",
    "DueDate": "2024-05-24 10:42:55",
    "ClosedDate": "2024-05-16 10:42:55",
    "Description": "",
    "Result": "",
    "Priority": "",
    "State": "",
    "ThirdParty1Name": "",
    "ThirdParty1Address": "",
    "ThirdParty2Name": "",
    "ThirdParty2Address": "",
    "ThirdParty3Name": "",
    "ThirdParty3Address": "",
    "ThirdParty4Name": "",
    "ThirdParty4Address": ""
  }
]

 


Thomas Lind
Community Manager
  • Author
  • Community Manager
  • 1017 replies
  • January 29, 2025

Hi

We have just released version 8.0.0.0

Remember to update your other programs as well to be able to handle the new features.

 

The new features that have been added in this version are the following.

  • Import Swagger
  • The new Authentication type called Authentication endpoint
  • Global Values
  • Debug Logging
  • Two new Data Format options, CSV and Text, and the ability to force the type
  • A new Dynamic Value option called Endpoint Query
  • Search and replace values
  • The option to add a Delay on requests

Together with the export/import feature, this makes it easier to set up new REST data sources.


  • Explorer
  • 6 replies
  • January 31, 2025

Hi Thomas, thanks for the update! We will give it another try!


  • Contributor
  • 5 replies
  • February 26, 2025

Hi Thomas,

 

I want to use a GraphQL data source. It also uses REST:

Is there a sample how to do this?

 

I can connect to the GraphQL API using Postman, but in the TimeXtender data source I am missing fields for the query to be executed and the variables, and also there is no endpoint I think, which is mandatory.

 

Is this documented in some way?

 

Kind regards,

Pieter


Thomas Lind
Community Manager
  • Author
  • Community Manager
  • 1017 replies
  • February 26, 2025

Hi ​@PieterQ 

Others have used GraphQL. You can do Post calls in the endpoint and add the query in the field called Post Query. You need to be sure it is converted to a JSON query, or it will not work.

Regarding the endpoint, you need to add something. Your base URL can, for example, be just https://www.API.com, and everything after that can be used as an endpoint. You will likely use the same endpoint, but with different names and post queries.

A proper GraphQL guide will be made eventually; I just haven't had time to write it yet.


  • Contributor
  • 5 replies
  • February 27, 2025

Hi Thomas,

I understand you did not have time to fully document it, but if you have some sample (maybe from your own test environment) it would be nice.

My query is in JSON format; the only thing I am looking for is where to put the query and the variables.

 

In Postman it looks like this:

Kind regards,

Pieter


Thomas Lind
Community Manager
  • Author
  • Community Manager
  • 1017 replies
  • March 5, 2025

Hi

We have just released version 9.0.0.0

Remember to update your TDI and TIS programs to versions able to handle this version.

The new features that have been added in this version are the following.

  • A change of the UI, so it is now easier to survey your setup
  • A new version of the Table Builder

  • Contributor
  • 5 replies
  • March 6, 2025

Thanks Thomas!

 

I managed to get the REST data source working with GraphQL; some headers were mandatory (Content-Type):

 

 

Also, the POST body must be in JSON format (on HTTP POST), so it looks like this:

 

{ "query": "query items($type: ItemType!) { items(type: $type) { totalCount nodes { id type key name completion lastCatalogMetadataUpdate descriptionV2 { summary content { content } } } } }", "variables": { "type": "visualizations" } }

 

 

