Solved

Incremental load in TimeXtender REST API 6.1

  • 15 April 2024
  • 3 replies
  • 44 views

Hello,

We are looking to implement incremental data fetching from REST API endpoints, specifically by dynamically filtering with an updatedAt parameter. The goal is to use the maximum updatedAt value previously fetched from each API call, which is stored in our ODX tables.

While I'm aware of the capability to use dynamic parameters based on endpoints and queries within TimeXtender, I'm struggling to find a method to reference the maximum updatedAt value already loaded in the ODX tables for this purpose.

Has anyone implemented a similar dynamic data fetching mechanism in TimeXtender? If so, could you share how you managed to reference data from the ODX tables to set the parameters for the next API call?

Thanks!


Best answer by Thomas Lind 15 April 2024, 11:57


3 replies


Hi @sigsol 

There is currently no option to set up a REST Data Source to run with incremental load, so you will need to do it another way.

You need to make sure that this UpdatedAt date, or another date field, ends up as a field in the resulting table, so that the table contains the max date of the data pulled.

Once this is in place, run the REST data source once to get the data into the ODX.

Then add it to your DW instance as a table, turn on History (SCD) on the table, and deploy and execute it once.

Now you can create a dynamic SQL query in your REST data source that looks at this table and pulls the max UpdatedAt date as an option, and that value will be used on future executions.
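
A minimal sketch of such a dynamic value query, assuming the history table sits in the DW under a name like the one below (the schema, table, and column names are placeholders; adjust them to whatever your instance actually uses):

SELECT MAX([UpdatedAt]) AS [MaxUpdatedAt]
FROM [dbo].[RestSource_History]

The single value this returns is then used as the dynamic parameter value on the endpoint, so the next call only requests rows with an updatedAt greater than it.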

If the result doesn't contain the UpdatedAt date, you will need to either use a dynamic date as the rule and accept some overlap, or use the DW_TimeStamp field that is added to all tables.
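
If you go the rolling-window route, the dynamic query can simply compute a date some distance back in time; a T-SQL sketch (the 7-day overlap is only an illustration):

SELECT CONVERT(varchar(19), DATEADD(day, -7, GETDATE()), 126) AS [FromDate]

The same pattern works with MAX([DW_TimeStamp]) from the table if you would rather key off the load timestamp than a source field.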

UPDATE

With the history table set up, you should also apply soft deletes so you can see when rows are removed from the source.

If you want to avoid these in the following Data Areas, you can apply a rule to only transfer data where IsTombStone is equal to 0 or False.
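
For example, a data selection rule along these lines (IsTombStone being the flag mentioned above) keeps the tombstoned rows in the history table but stops them from being carried into the next Data Area:

[IsTombStone] = 0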

It works, thank you @Thomas Lind!

I'm not sure how the IsTombStone flag can know that rows have been deleted when we're doing incremental load, though. The only solution would be for the source table itself to have soft deletes (that update the UpdatedAt value), and then we'd just use the IsDeleted flag found in the source.

Any idea if support for incremental load in the REST Data Source is on the roadmap?


Hi @sigsol 

You are welcome. The reason for the soft delete option is that with a History table you can't delete any rows, so if the source REST API has lost some rows after a transfer, it will not be able to detect that unless you turn on this option.
