Solved

Table does not load from ODX to DSA


wouter.goslinga
TimeXtender Xpert

Hi,

I have a very regular table that is loaded from a SQL data source. The table has a little over 3 million records and the following definition:

(screenshot of table definition)
Nothing out of the ordinary, as you can see. The table is loaded with the ODX with an incremental selection rule on the mut_dt column. The ODX load works fine; loading the table fully or incrementally is no problem, and the data is stored in the container as expected within 1 or 2 minutes.

However, when I drag the table to the DSA data area and load it there, something strange happens. Loading the table to the DSA hangs after 900,000 records are inserted into the raw table. We have tried waiting for several hours, but it just hangs there and nothing happens.

Other tables from the same source load fine to the DSA.

Data source version is TimeXtender SQL Source v 19.1.1.0 (but the load from source to ODX works fine).
ODX and TX version is 6590.1.

Best answer by wouter.goslinga

All, thank you for all the responses. I have finally managed to solve the problem, though not in a very convenient or easy way. As it turns out, one of the varchar columns was too small. I enlarged this column in the DSA table from a varchar(50) to a varchar(100), and after that everything runs smoothly.

This is a bit strange, however, because in the ODX (and the source system table) the column is a varchar(50). So in order to fix the problem I had to unsync the datatype with the ODX, which should not be necessary IMHO. I didn't have time to figure out which specific column value actually caused the problem; I will do that later. Probably some weird/special characters in a column value.

I think if you load a table from ODX to DSA with all the same datatypes (so everything in sync), this should always work; it should not be necessary to change datatypes just to make the load happen. Or at least, it should fail with an error and not hang :).
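One hedged way to hunt for the offending value is to compare character length with byte length in the source/ODX table: under a UTF-8 collation a varchar(50) holds 50 bytes, so special characters can overflow a column even when the character count fits. This is only a sketch; the table and column names are placeholders:

```sql
-- Sketch only: [dbo].[MyTable] and [SomeColumn] are placeholder names.
-- LEN() counts characters; DATALENGTH() counts bytes for varchar.
-- Multi-byte (special) characters make byte_len > char_len.
SELECT TOP (100)
       [SomeColumn],
       LEN([SomeColumn])        AS char_len,
       DATALENGTH([SomeColumn]) AS byte_len
FROM   [dbo].[MyTable]
WHERE  DATALENGTH([SomeColumn]) > 50
ORDER BY byte_len DESC;
```

Any rows returned are candidates for the value that no longer fits the varchar(50) on the DSA side.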


26 replies

rvgfox
Problem Solver · April 15, 2024

Make sure the mut_dt column is defined as “Date and time” in TX.

Sometimes it detects the column as “Date and time (2)”, and then it doesn’t work.
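A quick way to verify the column's exact type on the SQL side is a metadata query (table and column names are assumptions based on this thread):

```sql
-- Check whether mut_dt is datetime or datetime2 in the source.
SELECT COLUMN_NAME, DATA_TYPE, DATETIME_PRECISION
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME  = 'MyTable'
  AND  COLUMN_NAME = 'mut_dt';
-- DATA_TYPE = 'datetime'  -> "Date and time" in TX
-- DATA_TYPE = 'datetime2' -> "Date and time (2)" in TX
```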


wouter.goslinga
TimeXtender Xpert (Author) · April 15, 2024

@rvgfox thanks, but it's a datetime, not a datetime2, in the source, the ODX and the DSA


Christian Hauggaard
Community Manager

@wouter.goslinga

are you able to transfer other tables from the same data source to the DSA? If so, are these also using incremental load on the mut_dt column?

Can you please try a full load and execute for the DSA table? Also, please activate SQL Profiler during the execution. While it is running, please check the SQL db for deadlocks or processes that are hanging.

Also, please check memory and CPU on the App Server VM during the execution.


wouter.goslinga
TimeXtender Xpert (Author) · April 15, 2024

@Christian Hauggaard yes, any other table from the same source, with or without incremental load, works fine. I have tried removing the incremental load entirely, but I still keep seeing the little “I” on the mapping in the DSA table.

I'm not entirely sure what you mean by SQL Profiler, but sp_who2 shows no locks. The ODX Server VM is completely idle.
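As an alternative to sp_who2, a sketch of a DMV query that shows what each active request is currently waiting on and whether anything is blocking it (the session-id filter is a common heuristic for skipping system sessions, not a hard rule):

```sql
-- Active requests with their current wait and any blocking session.
-- Gives more detail than sp_who2 and also works on Azure SQL Database.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,             -- what the request is waiting on, if anything
       r.wait_time,             -- ms spent in the current wait
       r.blocking_session_id    -- 0 when not blocked
FROM   sys.dm_exec_requests AS r
WHERE  r.session_id <> @@SPID   -- exclude this query itself
  AND  r.session_id > 50;       -- heuristic: skip system sessions
```

If the hanging insert shows up here with a long-running wait_type, that wait is a strong hint about where the transfer is stuck.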


andri
Contributor · April 16, 2024

We just noticed this exact behaviour this morning on an Azure Managed Instance database while loading data from ODX to DSA. The issue was that the database was hitting the maximum file size limit configured on the database files (under Files in the database properties, available through SSMS).
No error is displayed; the execution just continues until it times out.
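To check this without opening the properties dialog, a sketch of a query against the file catalog view (sizes are stored in 8 KB pages; max_size = -1 means unlimited):

```sql
-- Current size and configured cap per database file.
-- size and max_size are in 8 KB pages; max_size = -1 means unlimited growth.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN max_size = -1 THEN NULL
            ELSE max_size * 8 / 1024
       END AS max_size_mb
FROM   sys.database_files;
```

If size_mb is at or near max_size_mb for a data or log file, the load can stall exactly as described.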


Christian Hauggaard
Community Manager

@wouter.goslinga can you please confirm whether you are also using an Azure Managed Instance? If so, can you please try altering the max file size as suggested by Andri above?


wouter.goslinga
TimeXtender Xpert (Author) · April 17, 2024

@Christian Hauggaard The database is Azure SQL General Purpose serverless, not a managed instance. The database properties in SSMS don't have a Files page…


rory.smith
TimeXtender Xpert · April 17, 2024

@andri ,

You should be getting errors if you are exceeding storage limits. Are you also running into log waits?


andri
Contributor · April 17, 2024

@rory.smith There were no errors; everything stopped on the database side, and the TX execution just continued running, waiting for… nothing.
It was hitting the limit set on the database file, but not the storage of the Azure SQL MI server.


rory.smith
TimeXtender Xpert · April 17, 2024

Hi,

that's the thing: all of the documentation points to an error being thrown if you go over storage, and there are no known issues for something similar. If you have a hard limit on the data file size for a database in a regular SQL Server, it will error out, and MI typically follows regular SQL Server behaviour very closely. I don't have a Managed Instance myself to check, but I wonder whether TimeXtender is swallowing an error or whether Azure SQL MI is not throwing one. In the latter case, that should be reported to Microsoft, I guess.


Christian Hauggaard
Community Manager

@wouter.goslinga SQL Profiler shows what is being executed on the Azure db. Please try enabling a trace using the SQL Server Profiler extension for Azure Data Studio while the execution is running: https://learn.microsoft.com/en-us/azure-data-studio/extensions/sql-server-profiler-extension

Are you using data on-demand?

Also, please confirm which step the execution is getting stuck on, e.g. data transfer or data cleansing?


wouter.goslinga
TimeXtender Xpert (Author) · April 25, 2024

@Christian Hauggaard yes, data on demand is activated, but the ODX loads the table from the source system fine. The problem is in the load from ODX to DSA; it hangs while the records are being inserted into the raw table, so during the data transfer phase, not the cleansing.


daniel
TimeXtender Xpert · April 25, 2024

Dear @wouter.goslinga,
I once had some issues with the transfer between the ODX and DSA (Lake and Azure SQL) when the bulk insert was too big; maybe you can check those settings as well. When it tried to insert a certain number of rows, it would just break. There used to be a default setting of transferring 300,000 rows per bulk insert; if you set it too high, or set it to 0, you might run into problems.

Hope this helps

= Daniel


wouter.goslinga
TimeXtender Xpert (Author) · April 25, 2024

@daniel thanks for the suggestion. The number of rows per bulk insert is still an option, but you can now set it in the portal for each data warehouse instance. In this case it is at the default of 300K; the strange thing is that the first three batches are loaded into the raw table correctly (there are exactly 900K records in it) and then it hangs.


rory.smith
TimeXtender Xpert · April 25, 2024

Hi @wouter.goslinga ,

does it always hang on the fourth batch? In that case it may be related to the data contents, especially if you are using ADF for the transfer, as there are sometimes quite elaborate data type conversions going on. If you are not using ADF, I think running a SQL trace would be the best way to get more insight into what is going on.


Christian Hauggaard
Community Manager

Hi @wouter.goslinga, were you able to run a SQL trace? Can you please confirm whether it always hangs on the fourth batch?


Christian Hauggaard
Community Manager

Hi @wouter.goslinga, are you still experiencing this issue? Please let us know if you were able to run a SQL trace.


wouter.goslinga
TimeXtender Xpert (Author) · May 13, 2024

@Christian Hauggaard I'm unable to do the SQL trace; it keeps giving a strange error when I try to run the trace extension in Azure Data Studio.


Christian Hauggaard
Community Manager

@wouter.goslinga which error message do you get when trying to run the trace?


Christian Hauggaard
Community Manager

Hi @wouter.goslinga can you please confirm the error message when running the trace?


wouter.goslinga
TimeXtender Xpert

@Christian Hauggaard sorry, I haven't been able to do work for this customer for quite a few weeks. The problem is probably related to some implicit type casting between ODX and DSA, but I still have to figure that out. Will let you know when it's fixed!


daniel
TimeXtender Xpert · May 29, 2024

Dear @wouter.goslinga,

Maybe it is nothing, but could it be that the SQL database ran out of storage?

I've encountered an issue like this before in my Azure SQL; after a while the execution would fail, but with a weird error. After some time I found that the SQL database had run out of storage and did not notify me. Hope this helps

= Daniel


wouter.goslinga
TimeXtender Xpert (Author) · June 11, 2024

@Christian Hauggaard so finally I'm able to run a trace, but it does not give much more information:

(screenshot of the trace)

As you can see, it inserts a few batches and then just stops. The only thing I can see in the trace is the line named ‘Attention’, but it does not show any kind of error. At this point, the execution in TX still shows as running, but nothing is happening anymore.


rory.smith
TimeXtender Xpert · June 12, 2024

The Attention event points to a connection being cancelled, lost, or timing out in a driver (among other possibilities). For some reason TimeXtender does not seem to pick up that the transaction has been ended on the SQL Server side, or SQL Server isn't handling it nicely.

As this is Azure SQL DB, the SQL engine should be state-of-the-art version-wise, so there is nothing you can do there. It might be useful to see whether you can isolate this down to problem records, or whether an infrastructural (networking) issue is at fault.
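One sketch of how to isolate problem records is to bisect on the incremental column: find the full range, then repeatedly test halves until the hang reproduces on a small slice. All names below are placeholders for this thread's table:

```sql
-- Step 1: get the full range of the incremental column (placeholder names).
SELECT MIN([mut_dt]) AS min_dt,
       MAX([mut_dt]) AS max_dt,
       COUNT(*)      AS rows_total
FROM   [dbo].[MyTable];

-- Step 2: restrict the load (or a manual copy) to half the range, e.g.
--   WHERE [mut_dt] < '<midpoint>'
-- and keep halving the failing side until only a few rows remain.
```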


Christian Hauggaard
Community Manager

Hi @wouter.goslinga 

Do you see any errors in the event viewer?

You are using Azure SQL General Purpose serverless for the DSA DW instance storage, correct? Are you using a SQL db or ADLS for your ODX storage? And is the source SQL db for the data source on a different SQL db than the ones used for instance storage?

What min vCores and max vCores are you using for the serverless Azure db? Typically you should set the min vCores and max vCores to the same level.

Can you please try creating a new execution package which includes the DSA table, enable retries on this execution package, and see if this makes a difference?

