Solved

Incremental load in TimeXtender REST API 6.1

  • April 15, 2024
  • 5 replies
  • 213 views


Hello,

We are looking to implement incremental data fetching from REST API endpoints, specifically by dynamically filtering with an updatedAt parameter. The goal is to use the maximum updatedAt value previously fetched from each API call, which is stored in our ODX tables.

While I'm aware of the capability to use dynamic parameters based on endpoints and queries within TimeXtender, I'm struggling to find a method to reference the maximum updatedAt value already loaded in the ODX tables for this purpose.

Has anyone implemented a similar dynamic data fetching mechanism in TimeXtender? If so, could you share how you managed to reference data from the ODX tables to set the parameters for the next API call?
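
For illustration, the kind of call we want to end up with looks something like this (the endpoint and parameter names here are made up):

    GET https://api.example.com/v1/orders?updatedAt_gte={max UpdatedAt from ODX}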

Thanks!

Best answer by Thomas Lind

(See the answer in the replies below.)

5 replies

Thomas Lind
Community Manager
  • 1161 replies
  • Answer
  • April 15, 2024

Hi @sigsol 

There is currently no option to set up a REST data source to run with incremental load, so you will need to do it another way.

You need to make sure that this UpdatedAt date, or another date field, ends up as a column in the resulting table, so that the table contains the max date of the data pulled.

Once this is in place, run the REST data source once to get the data into the ODX.

Then add it to your DW instance as a table, turn on History (SCD) on the table, and deploy and execute it once.

Now you can set up a dynamic SQL query in your REST data source that looks at this table and pulls the max UpdatedAt date as an option, and that value will be used on future executions.
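
As a rough sketch, the dynamic value query could look something like the following; the table and column names are placeholders, so use the actual names from your DW instance:

    -- Placeholder names: point this at the table you deployed above.
    SELECT MAX([UpdatedAt]) AS [MaxUpdatedAt]
    FROM [dbo].[MyRestEndpoint];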

If the result doesn't contain the UpdatedAt date, you will need to either use a dynamic date as the rule and accept that there is some overlap, or use the DW_TimeStamp field that is added to all tables.
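
If you fall back on DW_TimeStamp, the query has the same shape (again, the table name is a placeholder):

    -- DW_TimeStamp is the load timestamp field added to every table.
    SELECT MAX([DW_TimeStamp]) AS [MaxLoaded]
    FROM [dbo].[MyRestEndpoint];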


sigsol
  • Author
  • Contributor
  • 26 replies
  • April 17, 2024

It works, thank you @Thomas Lind!

I’m not sure how the IsTombStone flag can know that rows have been deleted when we’re doing incremental load, however. The only solution would be for the source table itself to have soft deletes (which update the UpdatedAt value), and then to just use the IsDeleted flag found in the source.

Any idea if support for incremental load in the REST Data Source is on the roadmap?


Thomas Lind
Community Manager
  • 1161 replies
  • April 17, 2024

Hi @sigsol 

You are welcome. The reason for the soft delete option is that with a History table you can’t delete any rows. So if the source REST API has lost some rows since a transfer, those deletions will not be picked up unless you turn on this option.


andrew.gebhard
TimeXtender Xpert
  • 72 replies
  • August 11, 2025

Hi @Thomas Lind,

I do not think that this correctly describes the soft-delete function when using the incremental workaround. Normally, the soft delete would compare all of the primary keys from the data source (full load primary key fields) with the existing primary keys. 

However, with this setup, as far as the history table knows, the hard-coded ‘increment’ in the Ingest data source contains the entire set of data, so the IsTombStone flag will simply flag all of the records which are not contained in the increment.
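
To make the concern concrete, the effective comparison behaves something like this sketch (table and key names are hypothetical):

    -- Illustration only: any history row whose key is missing from the
    -- latest (incremental) transfer gets flagged, even though the row
    -- may still exist in the source system.
    UPDATE h
    SET h.[IsTombStone] = 1
    FROM [dbo].[MyRestEndpoint] AS h
    WHERE NOT EXISTS (
        SELECT 1
        FROM [dbo].[LatestIncrement] AS t
        WHERE t.[Id] = h.[Id]
    );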

Am I missing something here?

This is one of the reasons why the incremental workaround is not a suitable replacement for actual incremental loading; you cannot mark deletes or full-load the data on command without manually adjusting the API parameters.

Kind regards,

Andrew


Thomas Lind
Community Manager
  • 1161 replies
  • August 11, 2025

Hi @andrew.gebhard

It will show all rows as soft deleted once you run an incremental load, as all the previously existing rows will disappear from the source result.

So yes, it is not correct. I am not sure why I added that at the time.

I do think I heard someone mention that they were working on incremental load for these types of providers, so it may only be a current issue and not one that will continue.

I removed it from my original answer.