Solved

How to manage a new field in the staging area without losing data through deployment

  • 12 October 2023
  • 6 replies
  • 97 views

Hello,

I have the following issue:

I would like to keep my tables in the staging area so I can preserve their data (because some of the data in the data source will be deleted soon). Furthermore, new fields may be added to those tables in the data source in the future, and I would then be forced to redeploy my tables in the staging area.

Question: is there a way to keep the old values from those tables and only add the new field, without the deployment truncating everything in the staging area?

Thanks in advance

FEN


Best answer by rory.smith 13 October 2023, 09:43


6 replies


Hi FEN,

Can you provide a little more information about the problem?
What version are you running?
Is it a major upgrade of the data source that leads to the changes?
When you say staging area, do you mean the ODX?

BR
Anders


Hi,

 

I think you have two options here:

The first option is to use incremental load. With this, you do not lose any previously loaded rows when deploying the table (as long as you do not run a full load). However, new fields added to the table won't have values for previously loaded rows (see the sketch below).
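Purely as an illustration (not from the original reply): incremental load is configured in the Jet Analytics / TimeXtender UI, but conceptually it amounts to appending only rows newer than the last load instead of truncating and reloading. A minimal T-SQL sketch, with hypothetical table and column names (dbo.MyTable_R, dbo.MyTable_V, ModifiedDate):

```sql
-- Illustration only: incremental load appends new/changed rows instead of
-- truncating and reloading, so earlier rows survive a (non-full) load.
-- All table and column names here are hypothetical.
DECLARE @LastLoad datetime;

-- Highest incremental value already stored in the valid table
SELECT @LastLoad = MAX(ModifiedDate) FROM dbo.MyTable_V;

-- Append only rows newer than the last load
INSERT INTO dbo.MyTable_V (Id, SomeField, ModifiedDate)
SELECT Id, SomeField, ModifiedDate
FROM dbo.MyTable_R
WHERE ModifiedDate > ISNULL(@LastLoad, '1900-01-01');
```

Because the load only appends, a column added later simply stays NULL for rows loaded before it existed, which is exactly the limitation noted above.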

 

The second option is to use Guard on deployment and/or execution. With this, you can specify the tables that shouldn't be deployed and/or executed, hence not losing any previously loaded data.

 

Hope this helps!

 

Kind regards,

Devin

 


Hi,

 

From your description I am assuming the data inside your source's tables will be deleted (to make space or something similar) and will therefore not be restorable from the source. Over time the structure of those tables may change as well (with new fields only containing data for current records).

 

I would personally use Type I history in this scenario in most cases (except possibly for large transactional tables). This will protect data from deletion and works in scenarios where you add fields. Incremental load is not that different from a technical perspective, but history is closer to the meaning of what you are doing. The added fields will not have content for records that no longer exist in the source, but you can run a postscript or one-off script to fill in reasonable defaults if those apply (see the sketch below). I would avoid the default column transformation in the short term, as it does not deal well with changing data types (there is a bugfix coming for that).
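Not part of the original reply: as a concrete example of such a one-off or postscript, a minimal T-SQL sketch (the table name dbo.Customers_V, the column NewField, and the default 'N/A' are all hypothetical):

```sql
-- Hypothetical backfill: give the newly added column a reasonable default
-- for historical rows that no longer exist in the source.
UPDATE dbo.Customers_V
SET NewField = 'N/A'
WHERE NewField IS NULL;
```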

As an alternative, you can also load the old structure into one table and create a new table for the new structure, with a table insert from the old table to load in the old data, as sketched below. Here you can also set defaults for new fields that did not exist before. While this is very flexible, it is also slightly harder to understand.
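As a hedged illustration of that alternative (all table and column names are made up), the table insert essentially amounts to:

```sql
-- Hypothetical sketch of the old-structure-to-new-structure approach:
-- the new table carries the extra column, and the insert supplies a
-- default for rows coming from the old table.
INSERT INTO dbo.Orders_New (Id, Amount, OrderDate, NewField)
SELECT Id,
       Amount,
       OrderDate,
       'N/A' AS NewField   -- default for records that predate the column
FROM dbo.Orders_Old;
```

The advantage of doing it this way is that you control the default per field at load time, which is what makes the approach flexible.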

 

If your scenario is different, then a different solution will perhaps be better.


Hello Anders,

Thank you for your reply. I think I got my answer partially from rory and devin. Here you will find more details about my question: https://legacysupport.timextender.com/hc/en-us/articles/115000587343-New-field-in-an-incremental-table-doesn-t-populate . I'm using Jet Analytics, so there is no ODX area, and no version I could name that you would recognize, I guess.


Hi rory,

Thank you for your answer. First of all, your assumption is right, and the points you mention are really helpful, but I have two more questions:

  1. What is the real benefit of a Type I history table compared to an incremental one, besides the fact that it is closer to the meaning? I think I would then need a full load for the execution (unless I do both in the form of a "HI table"), right?

  2. Do I need to rebuild/recreate all fields and structures in a new table for the alternative method?


Thank you, Devin.
