Solved

Using very big tables

  • 20 January 2024
  • 3 replies
  • 74 views


I have a very big table with millions of records.

I’ve divided it into three mappings: aggregate, historical, and current.

If I need to do a full load only for the current mapping, how can I do it?


Best answer by Thomas Lind 22 January 2024, 12:00


3 replies


I guess you could create three intermediate tables, each with a different selection rule. You can then perform a full load on the current table only. Make a new destination table mapped to the three intermediate tables and do an incremental load on that one, or combine the three tables in one view.
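A minimal sketch of that last suggestion, using SQLite purely for illustration (the table and column names here are hypothetical, not from TimeXtender): three intermediate tables hold the aggregate, historical, and current slices, a view unions them back into one object, and a "full load" of the current slice only touches its own table.

```python
import sqlite3

# Illustrative only: three intermediate tables split by a selection rule,
# merged back together with a view.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders_aggregate  (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE orders_historical (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE orders_current    (id INTEGER PRIMARY KEY, amount REAL);

-- The view combines the three slices so downstream logic
-- can read a single object.
CREATE VIEW orders_all AS
    SELECT id, amount FROM orders_aggregate
    UNION ALL
    SELECT id, amount FROM orders_historical
    UNION ALL
    SELECT id, amount FROM orders_current;
""")

# A full load of the current slice truncates and reloads only
# orders_current; the other two intermediate tables are untouched.
con.execute("DELETE FROM orders_current")
con.executemany("INSERT INTO orders_current VALUES (?, ?)",
                [(101, 9.5), (102, 12.0)])

print(con.execute("SELECT COUNT(*) FROM orders_all").fetchone()[0])
```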


Hi @rvgfox 

It will be automatically handled by each individual mapping.

Say you have a setup like this: three data sources, though it could just as well be three unique tables with the same setup in one data source. Two of them run incrementally and one fully. As long as the destination table is not set to Full Load, each mapping will run with the setup it has in the ODX.

If one of these tables was fully loaded in the ODX but the other two haven’t been updated at all, executing the table will check the incremental value for each mapping, see which tables have new rows, and transfer only those. The exception is when the fully loaded table is one of the incremental ones: in that case two tables end up fully loaded on this transfer, that one plus the table that always runs a full load.

So ideally all three mapped tables should use incremental load in their setup, even though they do not get executed regularly.
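To make the per-mapping behavior described above concrete, here is a small hypothetical sketch (not TimeXtender’s actual implementation): each mapping tracks its own last incremental value, and only a mapping without incremental load transfers every row on every run.

```python
# Hypothetical rows in the source, with a modification stamp.
rows = [
    {"id": 1, "modified": 100},
    {"id": 2, "modified": 205},
    {"id": 3, "modified": 310},
]

# One entry per mapping: two incremental, one full load.
mappings = {
    "aggregate":  {"incremental": True,  "last_value": 310},  # up to date
    "historical": {"incremental": True,  "last_value": 200},  # behind
    "current":    {"incremental": False, "last_value": None}, # full load
}

def rows_to_transfer(mapping):
    """Return the rows this mapping would move on the next execution."""
    if not mapping["incremental"]:
        return rows  # full load: everything, every run
    # Incremental: only rows newer than the stored incremental value.
    return [r for r in rows if r["modified"] > mapping["last_value"]]

for name, m in mappings.items():
    print(name, [r["id"] for r in rows_to_transfer(m)])
```

The up-to-date incremental mapping transfers nothing, the lagging one transfers only the new rows, and the full-load mapping always transfers all rows, which is why keeping mappings incremental avoids repeated full transfers.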

There could be an increase in time spent on data cleansing, since it has to go through all PK values and decide whether updates are needed. If that increase is too big, you will need to split it out into three tables and merge them with a view, as suggested by @RLB.


Hi @rvgfox, has this issue been resolved? If so, can you please help us by marking a best answer above? If you have follow-up questions, please let us know.
