Question

Execution jobs and Supernatural Keys


Userlevel 1

I have two questions:

  1. Why does it take longer for an Execution job to run when triggered manually compared to when it runs on schedule?
    1. I have an execution job which runs in 8 minutes on schedule. When I run it manually it takes 16-25 minutes.
  2. I have a couple of supernatural keys in my keystore in the DSA. There has been a change in a table that creates one of these keys. The question is: when an execution package is run, does it also update the supernatural key?
    1. Let's say my supernatural key is based on ACCOUNT_NR and CLIENT_ID. Someone in the accounts department changes the ACCOUNT_NR column. What is the recommended method to update my supernatural key?
    2. If I reset a supernatural key, is it enough to just deploy that key and then execute and deploy the tables the key is used in, or do I have to deploy the whole keystore?

11 replies

Userlevel 6
Badge +7

Hi @aftabp ,

  1. “it depends”:
    1. are the performance metrics for scheduled and manual obtained from runs around the same time? I.e. your sources may have more outgoing bandwidth / resources at night compared to during the day if you are comparing a nightly scheduled run vs. a manual run during the day
    2. do you have the TimeXtender application open for both scheduled and manual execution? The application can take quite some RAM for larger projects, it may be that that is slowing down some part of the execution.
    3. I am not sure whether turning off logging on an Execution package is honoured by the Default Execution package when you right-click Execute. So check whether logging is off on the scheduled package but turned on for the Default Execution package
    4. are there any differences in wait statistics, query plans, resource use on VMs, etc?
  2.  
    1. Do you mean that they changed the account_nr for a specific client, or do you mean changing the data type or content for all keys? If the content is changed, a new key is generated (functionally representing the old item). You then need to decide whether your data questions require repointing existing records to the new key, or whether it is fine to have new records point to the new key and old records to the old. The keystore is additive in the sense that any change will result in new hashes, which are then coupled to new int values. The cleanest way to deal with global type or key-content changes is to add a new column in the source and generate a new surrogate key alongside the old. This gives you the most flexibility.
    2. Resetting keys means resetting the keystore and fully running all attached tables (incrementals will otherwise not have their keys updated for older records): both the tables reading from and the tables writing to the store.
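The additive behaviour described above can be sketched in a few lines of Python. This is purely illustrative and not TimeXtender's actual implementation; the SHA-1 hash and pipe-delimited concatenation are assumptions. The point it shows: each distinct hash of the business-key values gets the next integer surrogate key, so a changed ACCOUNT_NR produces a brand-new key rather than updating the old one.

```python
import hashlib

class KeyStore:
    """Minimal sketch of an additive keystore: each distinct hash of the
    business-key values is assigned the next integer surrogate key, and a
    hash that has been seen before keeps the key it was first given."""
    def __init__(self):
        self._keys = {}  # hash -> surrogate int

    def get_key(self, *business_key_values):
        # Field order matters: ("A", "B") hashes differently from ("B", "A")
        h = hashlib.sha1("|".join(business_key_values).encode("utf-8")).hexdigest()
        if h not in self._keys:
            # Additive: new hashes get a new int; existing ints are never reused or reset
            self._keys[h] = len(self._keys) + 1
        return self._keys[h]

store = KeyStore()
k1 = store.get_key("4711", "100")   # ACCOUNT_NR, CLIENT_ID
k2 = store.get_key("4711", "100")   # same values -> same surrogate key
k3 = store.get_key("4712", "100")   # changed ACCOUNT_NR -> brand-new key
```

Under this model, old records keep pointing at the old surrogate key unless you explicitly repoint them, which is why a content change forces the "repoint or live with two keys" decision described above.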

 

Userlevel 1

@rory.smith 

Here is the issue at hand.

Take a look at the 2 tables below.

PKS_ACCOUNT is defined in KEY_STORE.

For each table, PKS_ACCOUNT is a combination of KONTONUMMER and KLIENT.

Both KONTONUMMER and KLIENT are the same in both tables, but PKS_ACCOUNT is different.

Why?

How can I fix it?

 

 

Userlevel 6
Badge +7

Hi, @aftabp ,

 

could you check that the supernatural key definitions match exactly for all instances where the key is used? This includes the order of the fields in the keys. Additionally, the exact data types of the fields can be important. It could also be that there is whitespace included in some of the fields and not in others.

It might be useful to have screenshots of all the supernatural key settings if none of the suggestions above lead to finding the difference.

Userlevel 1

@rory.smith 

I have checked this with a simple SQL query, and the length is the same as the number of characters:

select 
kontonummer, len(kontonummer), klient, len(klient)
from myTable


The order is also the same.

I have a suspicion, but I don't think it is the cause.

When we moved from multiple databases to a single database, some of the tables in the ODX became “undeployed”. I don't remember anything about the DSA though.

I found out about this when some of the execution jobs failed. I tried to view the data in the ODX and got a pop-up saying that the table is not deployed. After I deployed the affected table(s) I could update the tables in all stages.
Even though it was just a fresh deploy and I didn't touch the keystore, I don't think that issue has anything to do with what I am facing now.

Here is another interesting thing. 

All the tables that have issues with SNKs are mapped to two or more tables in the DSA. One of these mapped tables is from our old accounting system (which is static now). The SNKs in all tables come from the same key store and are made up of the same two columns: KONTONUMMER and KLIENT.

I was troubleshooting the missing accounting figures in the table in the MDW. The exact accounting figures that are missing are from the old accounting system, the one with a different SNK, even though KLIENT and KONTONUMMER are the same in both tables.

So my question is:

If someone does something with the account numbers, would that cause the issue I am facing now? Let's say that yesterday there was an issue with accounting and all the account numbers disappeared. TX runs an execution job before the accounting department manages to fix the issue. Would TX then generate new SNKs? After the TX job has run, accounting fixes the issue and reverts to the old account numbers, and TX runs an execution job again. Would I then get the same SNKs back as before?

Userlevel 1

Any suggestions about what I can do about the mismatched SNKs?

Looking back, there are a couple of things that happened that might have caused this issue:

  • Conversion from multiple databases to a single database before moving to Azure
    • This I can rule out, because our on-prem TX ran on a single DB for about a month before the migration to Azure. If there had been an issue with SNKs, someone would have noticed deviations in the BI reports and reported them back to me
    • However, the issue might have existed after the conversion to a single DB without anyone noticing it until now
  • Everything was fine, but something happened during the transfer of tables from on-prem to Azure

The main issue is: how do I fix it?

Should I register a case?

 

Userlevel 6
Badge +7

Hi,

there are a few things that go into determining the value of a SNK in a fact or dimension table:

  • content and order of all the key fields (a hash is made of all the fields in the order specified in the SNK; Hash(A, B, C) will not match Hash(A, C, B))
  • hashing method chosen (the default 2005 hash method will give different results from the 2016 (fastest) one). This setting can be applied at different levels and may therefore cause different results from the same input
  • collation settings influencing the content of string fields (hashes compare the byte values, and different collations will have different byte values for the same string)
  • load order (while the hash of a key may be the same, TimeXtender adds records to the keystore in load order. If you migrate things and start from an empty keystore, you may find records are loaded in a different order and therefore get a different SNK. The keystore basically adds a DW_Id to each hashcode loaded into it. The DW_Id of the keystore is what appears in your tables)

The keystores are generally additive: they are only truncated at user command. If you are merging databases together, this may result in issues if the settings around the SNK are not exactly identical. I.e. if one database uses the 2005 hashing function and another uses 2016+, you will need to do extra work to ensure referential integrity across the board. As to what happened in your case: working that out would involve investigating at least all of the above points.
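The first and last bullets above can be demonstrated with a small Python sketch. This is illustrative only; the SHA-1 hash and pipe delimiter are assumptions, not TimeXtender's actual implementation. It shows that field order changes the hash, and that the surrogate id depends on the order in which hashes arrive at an empty keystore.

```python
import hashlib

def snk_hash(*fields):
    """Hash of the concatenated key fields -- the field order changes the result."""
    return hashlib.sha1("|".join(fields).encode("utf-8")).hexdigest()

# 1. Field order: Hash(A, B) != Hash(B, A)
assert snk_hash("4711", "100") != snk_hash("100", "4711")

# 2. Load order: ids are assigned in arrival order, so the same set of
#    hashes loaded into an empty keystore in a different order gets
#    different surrogate ids (the DW_Id of the keystore).
def build_keystore(hashes):
    return {h: i + 1 for i, h in enumerate(dict.fromkeys(hashes))}

a = snk_hash("4711", "100")
b = snk_hash("4712", "100")
store1 = build_keystore([a, b])   # a -> 1, b -> 2
store2 = build_keystore([b, a])   # b -> 1, a -> 2
```

This is why migrating to a fresh keystore can silently remap every SNK even when all the hashes themselves are unchanged.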

Userlevel 1

@rory.smith 

I have checked, and the key store is using the default method.

The keystore was not empty.
During the migration I imported the repository and DWH from our on-prem environment to Azure.
I monitored the execution jobs and most of them ran successfully. The execution jobs that failed did so because some of the ODX and DSA tables were not deployed properly. These tables were then deployed again. The execution jobs have worked fine ever since.

Here is a question though. I have reset the keystore in DEV in an attempt to correct the issue I am facing. Now I need to transfer some code from DEV to PROD. Would the keystore also be transferred? I don't want that to happen.

  

Userlevel 6
Badge +5

Hi @aftabp, were you able to resolve this issue? Were any changes made to the keystore settings in TX? If so, these changes will be transferred and will take effect once you deploy the keystore object in QA/PROD.

Userlevel 1

@Christian Hauggaard

The issue is still unresolved.

There were no changes made to key-store.

I did, however, redeploy the keystore in DEV in an attempt to resolve the issue there, but that didn't help either. I have of course deployed and executed the affected tables again.

 

Userlevel 6
Badge +5

@aftabp can you please confirm which version of TX you are using?

Can you please share some screenshots of the tables in TX, as well as both the supernatural key settings and the key store settings?

Please select Show data types under View, so that the data types are visible next to the field names.

 

Badge +1

Usually when the SNKs turn up different for the same value, it has to do with either the order of the business keys in the SNK setup or the data types of the business keys.
If an nvarchar field is configured with 100 or more characters, or a varchar field is configured with 200 or more characters, it will use a different hashing method compared to < 100 (nvarchar) and < 200 (varchar), and the SNKs will be different.
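As an illustration of why a declared length threshold matters, here is a Python sketch of that behaviour. The specific hash functions and encodings chosen here are assumptions made for illustration; only the thresholds come from the post. The point: the hashing method is picked from the declared column length, so identical values stored in differently sized columns can still produce different SNK hashes.

```python
import hashlib

def pick_hash(declared_type, declared_length):
    """Sketch of the behaviour described above: the hashing method is chosen
    from the *declared* column length, not the stored value, so two columns
    holding identical values can hash differently. Thresholds are from the
    post; the concrete algorithms and encodings are illustrative guesses."""
    if declared_type == "nvarchar" and declared_length >= 100:
        return lambda s: hashlib.sha256(s.encode("utf-16-le")).hexdigest()
    if declared_type == "varchar" and declared_length >= 200:
        return lambda s: hashlib.sha256(s.encode("utf-8")).hexdigest()
    return lambda s: hashlib.sha1(s.encode("utf-8")).hexdigest()

short = pick_hash("nvarchar", 50)("4711|100")
long_ = pick_hash("nvarchar", 200)("4711|100")
# Same value, different declared lengths -> different hashes, different SNKs
```

So when comparing SNK setups across tables, compare the declared field lengths as well as the field names, order, and contents.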
