Why does it take longer for an Execution job to run when triggered manually compared to when it runs on schedule?
I have an execution job which runs in 8 minutes on schedule. When I run it manually it takes 16-25 minutes to run.
I have a couple of supernatural keys in my key store in DSA. There has been a change in a table that creates one of these keys. The question is: when an execution package is run, does it also update the supernatural key?
Let's say my supernatural key is based on ACCOUNT_NR and CLIENT_ID. Someone in the accounts department changes the ACCOUNT_NR column. What is the recommended method to update my supernatural key?
If I reset a supernatural key, is it enough to just deploy that key and then execute and deploy the tables that key is used in, or do I have to deploy the whole keystore?
Hi @aftabp ,
“it depends”:
are the performance metrics for the scheduled and manual runs obtained around the same time of day? Your sources may have more outgoing bandwidth / resources available at night than during the day if you are comparing a nightly scheduled run with a manual run during the day
do you have the TimeXtender application open during both the scheduled and the manual execution? The application can take quite some RAM for larger projects, and that may be slowing down part of the execution
I am not sure whether turning off logging on an Execution package is honoured by the Default Execution package when you right-click Execute. So check whether logging is off on the scheduled package while the Default Execution package has it turned on
are there any differences in wait statistics, query plans, resource use on VMs, etc.? (a starting point for the wait statistics is sketched below)
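If you want to compare wait statistics between the scheduled window and a manual run, a common starting point on SQL Server is the wait-stats DMV. This is generic SQL Server tooling, not anything TimeXtender-specific, so treat it as a rough first look:

-- Top waits accumulated since the last stats reset / instance restart
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;

Snapshotting this before and after each run and diffing the numbers gives a rough picture of where the time goes.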
Do you mean that they changed the account_nr for a specific client, or do you mean changing the data type or content for all keys? If the content is changed, a new key is generated (functionally representing the old item). You then need to decide whether your data questions require repointing existing records to the new key, or whether it is fine to have new records point to the new key and old records to the old one. The keystore is additive in the sense that any change results in a new hash, which is then coupled to a new int value. The cleanest way to deal with global type or key-content changes is to add a new column in the source and generate a new surrogate key alongside the old one. This gives you the most flexibility.
Resetting keys means resetting the keystore and fully running all attached tables, i.e. both the tables reading from the store and those writing to it (incremental tables will otherwise not have their keys updated for older records).
@rory.smith
Here is the issue at hand.
Take a look at the 2 tables below.
PKS_ACCOUNT is defined in KEY_STORE.
For each table PKS_ACCOUNT is a combination of KONTONUMMER and KLIENT.
Both KONTONUMMER and KLIENT are the same in both tables, but PKS_ACCOUNT is different.
Why?
How can I fix it?
Hi, @aftabp ,
could you check that the supernatural key definitions match exactly for all instances where the key is used? This includes the order of the fields in the keys. Additionally, the exact data types of the fields can be important. It could also be that there is whitespace included in some of the fields and not in others.
It might be useful to have screenshots of all the supernatural key settings if none of the suggestions above lead to finding the difference.
@rory.smith
I have checked this with a simple SQL query, and the length is the same as the number of characters:
SELECT kontonummer, LEN(kontonummer), klient, LEN(klient) FROM myTable
The order is also the same.
I have a suspicion, but I don't think it is the cause.
When we moved from multiple databases to a single database, some of the tables in ODX became “undeployed”. I don't remember anything about DSA though.
I found out about this when some of the execution jobs failed. I tried to view the data in ODX and got a pop-up saying the table was not deployed. After I deployed the affected table(s) I could update the tables in all stages. Even though it was just a fresh deploy and I didn't touch the key store, I don't think that issue has anything to do with what I am facing now.
Here is another interesting thing.
All the tables that have issues with SNKs have mappings to two or more tables in DSA. One of these mapped tables is from our old accounting system (which is static now). The SNKs in all the tables come from the same key store and are made up of the same two columns: KONTONUMMER and KLIENT.
I was troubleshooting the missing accounting figures in the table in MDW. The exact figures that are missing are from the old accounting system, the one with the different SNKs, even though KLIENT and KONTONUMMER are the same in both tables.
So my question is:
If someone does something with the account numbers, could that cause the issue I am facing now? Let's say that yesterday there was an issue with accounting and all the account numbers disappeared. TX runs an execution job before the accounting department manages to fix the issue. Would TX then generate new SNKs? After the TX job has run, accounting fixes the issue and reverts to the old account numbers, and TX runs another execution job. Would I then get back the same SNKs as before?
Any suggestions on what I can do about the mismatched SNKs?
Looking back, there are a couple of things that happened that might have caused this issue:
Conversion from multiple databases to a single database before moving to Azure
This I can rule out, because our on-prem TX ran on a single DB for about a month before the migration to Azure. If there had been an issue with the SNKs, someone would have noticed deviations in the BI reports and reported them back to me
However, the issue might have been introduced by the conversion to a single DB without anyone noticing until now
Everything was fine, but something happened during the transfer of tables from on-prem to Azure
The main issue is: how do I fix it?
Should I register a case?
Hi,
there are a few things that go into determining the value of a SNK in a fact or dimension table:
content and order of all the key fields (a hash is made of all the fields in the order specified in the SNK. Hash(A, B, C) will not match Hash(A, C, B))
hashing method chosen (the default 2005 hash method will result in different results from the 2016 (fastest) one). This setting can be applied at different levels and may therefore cause different results from the same input
collation settings influencing the content of string fields (hashes compare the bytevalues, different collations will have different byte values for the same string)
load order (while the hash of a key may be the same, TimeXtender adds records to the keystore in load order. If you migrate things and start from an empty keystore, you may find records are loaded in a different order and therefore get a different SNK. The keystore basically adds a DW_Id to each hash code loaded into it. The DW_Id from the keystore is what appears in your tables)
The keystores are generally additive: they are only truncated at user command. If you are merging databases together, this may result in issues if the settings around the SNKs are not exactly identical. I.e., if one database uses the 2005 hashing function and another the 2016+ one, you will need to do extra work to ensure referential integrity across the board. As to what happened in your case: that would involve investigating at least all the above points to try and work out where something went wrong.
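To make the first and third points concrete, here is a small illustrative T-SQL snippet. This is not TimeXtender's actual implementation, just a sketch of the principle, using made-up sample values:

-- Reversing the field order changes the hash input, so the resulting key differs
SELECT HASHBYTES('SHA1', CONCAT('00180', '|', '01')) AS hash_ab,
       HASHBYTES('SHA1', CONCAT('01', '|', '00180')) AS hash_ba;

-- varchar and nvarchar encode the same text as different bytes,
-- so type and collation differences change the hash input as well
SELECT CONVERT(varbinary(16), '00180')  AS varchar_bytes,
       CONVERT(varbinary(16), N'00180') AS nvarchar_bytes;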
@rory.smith
I have checked and the key store is using the default method.
The keystore was not empty. During the migration I imported the REPOSITORY and DWH databases from our on-prem environment to Azure. I monitored the execution jobs and most of them ran successfully. The execution jobs that failed did so because some of the ODX and DSA tables were not deployed properly. Those tables were then deployed again. The execution jobs have worked fine ever since.
Here is a question though. I have reset the key store in DEV in an attempt to correct the issue that I am facing. Now I need to transfer some code from DEV to PROD. Would the keystore also be transferred? I don't want that.
Hi @aftabp, were you able to resolve this issue? Were any changes made to the keystore settings in TX? If so, these changes will be transferred and will take effect once you deploy the keystore object in QA/PROD.
@Christian Hauggaard
The issue is still unresolved.
There were no changes made to the key store.
I did, however, re-deploy the key store in DEV in an attempt to resolve the issue there, but that didn't help either. I have of course deployed and executed the affected tables again.
@aftabp can you please confirm which version of TX you are using?
Can you please share some screenshots of the tables in TX as well as both supernatural key settings and the key store settings?
Please select show data types under View, so that the data types are visible next to the field names
Usually when the SNKs turn out different for the same value, it has to do with either the order of the business keys in the SNK setup or the data types of the business keys. If an nvarchar field is configured with 100 or more characters, or a varchar field with 200 or more characters, a different hashing method is used compared to < 100 (nvarchar) and < 200 (varchar), and the SNKs will be different.
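Since the declared lengths matter here, a quick way to compare the declared types of the key fields across every table in the DSA database is a standard metadata query (the column names are taken from this thread):

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME IN ('KONTONUMMER', 'KLIENT')
ORDER BY COLUMN_NAME, TABLE_NAME;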
Hi @aftabp can you please provide an update on the questions above? Thanks
@Christian Hauggaard Sorry for the late reply. I remember that I promised you screenshots 19 days ago; I was on vacation without my laptop. Here are the updates. The TimeXtender version is 20.10.45.64. The SNKs are defined in DSA.
One of the SNKs is PKS_Account, which is set up like this in the Key Store
This is how the key looks in the ACCOUNT table
This is how the key looks in the CUSTOMER table
As you can see, both SNKs are made up of Kontonummer (varchar(255)) and Klient (nvarchar(255)). When I do a select on kontonummer (account nr) 00180 in both tables, I get two different SNKs.
@andri Please have a look at the screen dumps in the post above. The SNK (PKS_ACCOUNT) is from the keystore and made up of Kontonummer and Klient.
The order of the keys in both tables (Account table and Customer table) is the same.
Both Kontonummer and Klient are nvarchar with a length of 255
Make sure there are no hidden characters in any of the values. If there are none, try using a different hashing algorithm for the key store, like SHA-1 SQL Server 2016+, Case insensitive. I would personally also adjust the size of the data types for fields that are used as business keys to be closer to the maximum potential length of the values they store.
Are both tables writing to the keystore? Could it be a race condition of some sort where both tables are inserting identical records at the same time? Same collation?
@RLB How do I find this out?
You could try executing one table at a time and checking whether the second SNK is added or not. This requires a reset of the keystore.
But first check whether there are hidden characters in the strings that make them distinct, like @andri said. Paste the values into a text editor like Notepad++, or try manually hashing the values in SQL Server Management Studio to see if they differ.
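As a sketch of such a check, reusing the myTable name from the query earlier in this thread: note that LEN() ignores trailing spaces, so DATALENGTH() and a raw byte dump are more reliable for spotting hidden characters:

SELECT kontonummer,
       LEN(kontonummer)        AS len_chars,   -- ignores trailing spaces
       DATALENGTH(kontonummer) AS len_bytes,   -- counts every byte
       CONVERT(varbinary(64), kontonummer) AS raw_bytes
FROM myTable
WHERE kontonummer LIKE '%00180%';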
What happens if you set one of the tables to only read from the keystore? Does it find the matching SNK then?
Hi @aftabp ,
if you right-click each Supernatural Key field for a given key store, you can check how it is configured; normally you only want one table writing to the key. The ‘master’ table is usually the dimension table, and fact tables usually only read from the store, i.e. they have the ‘Only read from the store’ checkbox checked. If you are handling early-arriving facts you may have a different configuration, but I would personally handle that differently.
If you are using TimeXtender's regular execution management, it should handle the dependencies and avoid any situation where two tables are writing to the store at the same time.
@rory.smith I have right-clicked an SNK in the key store. What am I supposed to look at?
This is what i get when i click on Edit Key Store
@aftabp I believe Rory is referring to the following setting which you can see by editing the supernatural key field
@rory.smith
How do I check whether our TX is using «TimeXtender's regular execution management»?
@Christian Hauggaard
I don't think checking read-only will help, because both tables are FACT tables
Hi @aftabp ,
if your execution packages use ExecutionTime under Managed Execution and you are not explicitly setting other packages under the ‘Concurrent packages’ tab under ‘Edit Execution Package’, you are letting TimeXtender manage dependencies for you. If you have Managed Execution set to ‘disabled’ I am not sure that that prevents two tables from mutating the KeyStore at the same time.
Your Kontonummer and Klient fields together uniquely define an entity in your data platform. Typically you would fill the supernatural key from a dimension table that (hopefully) contains unique records for that entity. All fact tables that use that key then get set to ‘Only read from store’. This means you fill the keystore from a relatively small dimension table and only read from the large tables. I usually add a fallback transformation to set the supernatural key field to -1 if empty in the fact table, and add a dummy record to the dimension table to ensure referential integrity.
This setup also makes it easy to check for errors: there should never be many more records in the keystore than in the dimension table (there may be more if you do not guard your dimension against deletions in the source). You can scan your fact table supernatural key fields for -1 and determine why no proper key was found.
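A minimal sketch of those two checks; the table names here (FactLedger, DimAccount, KeyStore_Account) are hypothetical stand-ins for whatever your fact, dimension and keystore tables are called:

-- Fact records that fell back to the dummy key
SELECT COUNT(*) AS missing_keys
FROM dbo.FactLedger
WHERE PKS_Account = -1;

-- The keystore should not hold many more records than the dimension
SELECT (SELECT COUNT(*) FROM dbo.KeyStore_Account) AS keystore_rows,
       (SELECT COUNT(*) FROM dbo.DimAccount)       AS dimension_rows;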
Hi @aftabp , the dimension should always be the ‘master table’. Also, from an MDM perspective you make sure that your dimension tables are correct, so your ERP uses the right products / customers etc. while working in the system.
You might not want to use the SNK directly in the fact table, as this can be very slow in the data cleansing, especially on big fact tables. What I do is make an SNK in the dimension and then do a look-up from the fact table on the natural keys. And then, like Rory suggested, always have a SQL snippet (ISNULL([Field], -1)) to check that there is always a key to the dimension table. Don't forget to put a dummy record with key -1 in the dimension table in the MDW (so the data warehouse layer, or a layer after the transformation layer where the SNK is not made, because otherwise the -1 goes into the key store as well and then the -1's from the fact tables will not link up correctly).
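As a sketch of that look-up pattern, again with hypothetical table names but the column names from this thread:

SELECT f.Kontonummer,
       f.Klient,
       ISNULL(d.PKS_Account, -1) AS PKS_Account   -- fall back to the dummy key
FROM dbo.FactLedger f                             -- hypothetical fact table
LEFT JOIN dbo.DimAccount d                        -- hypothetical dimension table
  ON  d.Kontonummer = f.Kontonummer
  AND d.Klient      = f.Klient;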
Hope this helps - Daniel
@daniel @rory.smith and @Christian Hauggaard Thank you for all your help and suggestions. The issue seems to have been resolved now. It took some time and some trial and error. What worked in the end was:
Truncating the affected tables
Checking “Only Read from store” on Fact tables
Deploying the associated key store
Deploying and Executing the concerned tables
Now the next question is: should I also check “Only Read from store” for the other keys in the Fact tables?
So far, I have not encountered any issues with mismatched keys in other tables.