
This problem appeared after the issue with SNK was resolved. I am not sure whether the two cases are related in any way, but that was the last change made in TX.


I am trying to figure out why data cleansing is taking so much time. It ultimately ends in a timeout:

Error during Data Processing for Table: DSA.FACT_TABLE_A at: DSA
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Operation cancelled by user.
Warning: Null value is eliminated by an aggregate or other SET operation.
The wait operation timed out

Stored Procedure: [DSA].[usp_FACTABLE_Clean]
Parameters:
@version = 2177

Line number: 0

Details:

SQL Server: 'TEST-SQL-SERVER'
SQL Error Number: -2

The total number of rows in ODX is 9,664,965. It shouldn't take more than a few minutes to transfer and cleanse the data!

 

@aftabp It can have different reasons:

1. If you are using a serverless Azure database, it sometimes just drops the script and doesn't throw an error.

2. Another one might be that your ODX execution process is still running and the table is waiting for the transfer. You can check this with the query at the end of this post.

3. If you are using lookups on the table, make sure you are not using the aggregation method None, since your table will not be indexed, and if you have multiple lookups they can take a long time without the indexing (the preferred setup is Maximum).

Also, you did write over the table names, but in the mapping the original name is still visible ;)
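To verify point 2, one way to see whether anything is still running against the table is a query along these lines (standard SQL Server DMVs, nothing TX-specific; run it on the server hosting the ODX/DSA database):

-- List active requests other than your own session, with the SQL text they are running
SELECT r.session_id, r.status, r.wait_type, r.total_elapsed_time / 1000 AS elapsed_seconds, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;

If a session shows up still copying into the table, the cleansing procedure may simply be waiting on it.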


 

@brett.parkin 

  1. Yes, the server is an Azure serverless DB.
  2. This was not an execution job; I manually triggered Execute on the table in DSA. Since the ODX table is guarded (on execution and deployment), I am wondering what process is running on the table in ODX?
  3. Yes, I have lookups. How do I find out if there are aggregations? I have looked at the Advanced properties for the table and there are no aggregates.
  4. I have kept the table names.

  1. The problem might be with serverless. When you look at the performance, is it doing something while running the table? If the CPU and log are idle, your script might be dropped and you should try using provisioned (preferably Hyperscale when you have large datasets).
  2. It's not doing anything with the ODX except copying the data from this table. The guarding just means that this table itself cannot be executed or deployed, but other tables can still read from it.
  3. The operator should be anything but None in here:

     

 


Hi,

 

If you expand a Conditional lookup and expand the lookup fields, you can right-click those to see what aggregation Operator is assigned. Note that I would consider adding the required indexes over using the Maximum operator, depending on the circumstances.

You might want to investigate this "Warning: Null value is eliminated by an aggregate or other SET operation." in your stack trace. That is usually a sign that something is not going well in the SQL that is being run.
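For reference, the warning itself is raised whenever an aggregate skips NULL values (with ANSI_WARNINGS on). A minimal repro with a throwaway temp table:

CREATE TABLE #Demo (Amount INT);
INSERT INTO #Demo (Amount) VALUES (10), (NULL), (30);
-- MAX ignores the NULL row and raises:
-- Warning: Null value is eliminated by an aggregate or other SET operation.
SELECT MAX(Amount) AS MaxAmount FROM #Demo;
DROP TABLE #Demo;

In a conditional lookup it can mean the lookup is aggregating over rows where the return field is NULL, which is worth checking in the source data.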


Jumping in on what Rory is saying: I have had messages like this before, for instance when you put quotes on a native transformation.
 

This is how it looks in the options menu:
 

 

But it should be:

When searching for a string, TX puts the quotes around the value automatically.

 

I don't know if this is what's going wrong, but it might be worth a look.


@aftabp Did you already solve this problem? If you did, I would like to know the solution.


Hi @aftabp, please let us know if the issue is resolved, and if so, please help us by marking a best answer above. If you have any follow-up questions, please let us know.


@aftabp What was the solution in the end? Did you try all 3 of the options provided?


Dear @Bernarddb 
I still have issues with data cleansing.

So I noticed that somehow Table_A in DSA was empty. Since Table_A in ODX is static, I have set it to Guard. No issues there, right?

There are 9.6 million rows in Table_A.

Total time is 5 min and 20 sec, but the job has been going on for almost 2 hours. It's still in DSA and MDW has yet to be run.

 


@aftabp What type of transformations are you doing? Did you change all the lookup values to anything but None? If so, you also need to deploy the related tables to get the indexing working. If you have other transformations, do you use a lot of nested case statements? What you can also do is manually index the lookup link fields in the table it's going to and the table it's coming from (a rough sketch is below).
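A minimal sketch of such manual indexes, assuming a hypothetical lookup that joins DSA.Fact_Table_A.CustomerKey to DSA.Dim_Customer.CustomerKey (your actual table and field names will differ, and TX generates equivalent indexes itself once the operator is not None):

-- Index the link field on the table the lookup reads from
CREATE NONCLUSTERED INDEX IX_Dim_Customer_CustomerKey
    ON DSA.Dim_Customer (CustomerKey);

-- And on the table the lookup writes to
CREATE NONCLUSTERED INDEX IX_Fact_Table_A_CustomerKey
    ON DSA.Fact_Table_A (CustomerKey);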

 

But these are just things I can think of without seeing exactly what you are doing with your transformations. Or maybe by accident you disabled "Truncate raw table before transfer" in the table settings?

If it isn't any of this, we will need more information, since there are a lot of things that could be happening.


@Bernarddb 

  • Lookup values have always been Maximum.
  • There are no nested case statements.
  • Truncate raw table before transfer in the table settings is not disabled.

 

 


@aftabp Are you running on Azure? If so, can you check if the DB is high on log IO or whether your CPU is maxing out? Log would be the purple line. If this is capping out, you are probably not running enough cores, or you should switch to Hyperscale. If you don't see a lot of activity, you are probably running on serverless and should consider using provisioned. If it's none of this, we will need more information before we can help out. A query you can use to check is below.
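If you prefer numbers over the portal chart, something like this run against the user database shows roughly the last hour of resource usage (sys.dm_db_resource_stats is an Azure SQL Database DMV; columns trimmed to the relevant ones):

-- One row per 15-second interval for roughly the last hour
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;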


@Bernarddb 

I have just rerun the job and here are the stats from Azure.

Here is how it looks in TX:

 


Looks like the database is just going to sleep if it doesn't spike back up. Is the DB serverless?


The DB is not sleeping. You can see the spikes after 2 PM.

Yes, it's serverless.

 


When serverless is spinning up more CPUs, it sometimes drops the SQL statement it was running. I don't know why. But put it on provisioned for a day and then rerun the table. It might be the fix. A sketch of the switch is below.
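If you want to try that, the switch can be done in the Azure portal, or with T-SQL roughly like this (the database name and service objective are just examples; pick the provisioned tier and vCore count that fit your workload):

-- Switch the DSA database from serverless to a provisioned General Purpose tier
ALTER DATABASE [YourDsaDatabase] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_4');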

