I have a very regular table that is loaded from a SQL Datasource. The table has a little over 3 million records and the following definition:
Nothing out of the ordinary as you can see. The table is loaded with the ODX with an incremental selection rule on the mut_dt column. The ODX load works fine; loading the table full or incrementally is no problem and the data is stored in the container as expected within 1 or 2 minutes.
However, when I drag the table to the DSA data area and load it there, something strange happens. Loading the table to the DSA hangs after 900,000 records are inserted into the raw table. We have tried waiting for several hours, but it just hangs there and nothing happens.
Other tables from the same source load fine to the DSA.
Data source version is TimeXtender SQL Source v 19.1.1.0 (but the load from source to ODX works fine). ODX and TX version is 6590.1.
Be sure that the mut_dt column is defined as “Date and time” in TX.
Sometimes it detects the column as “Date and time (2)” and then it doesn’t work.
@rvgfox thx, but it's a datetime not a datetime(2) in source, ODX and DSA
@wouter.goslinga
are you able to transfer other tables from the same data source to DSA? If so, are these also using incremental load on the mut_dt column?
Can you please try a full load and execute for the DSA table? Also please activate SQL Profiler during the execution. While it is running, please check the SQL database for deadlocks / processes that are hanging (see the query sketch below).
Also please check memory and CPU on the App Server VM during the execution
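For the deadlock / hanging process check, a query along these lines should surface any blocked or long-waiting sessions. This is just a sketch; run it in the DSA database while the load appears to hang and adjust the columns as needed:

```sql
-- Active user requests with their wait type and any blocking session.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       s.host_name,
       s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
WHERE s.is_user_process = 1
ORDER BY r.wait_time DESC;
```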
@Christian Hauggaard yes, any other table from the same source with or without incremental load works fine. I have tried removing the incremental load entirely, but I still keep seeing the little 'I' on the mapping in the DSA table.
I'm not entirely sure what you mean by SQL Profiler, but sp_who2 shows no locks. The ODX Server VM is completely idle.
We just noticed this exact behaviour this morning on an Azure Managed Instance database while loading data from ODX to DSA. The issue was that the database was hitting the maximum file size limit configured on the database files under Files in the database properties available through SSMS. No error will be displayed and the execution will just continue until it times out.
@wouter.goslinga can you please confirm if you are also using Azure Managed Instance? If so, can you please try altering the max file size as suggested by Andri above?
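For reference, a rough sketch of how to check the configured file size limits and raise them; the logical file name and the new size below are placeholders, so adjust them to your environment:

```sql
-- Current size and configured cap per database file.
-- size / max_size are in 8 KB pages; max_size = -1 means no per-file cap.
SELECT name,
       type_desc,
       CAST(size AS bigint) * 8 / 1024 AS current_size_mb,
       CASE max_size
            WHEN -1 THEN NULL
            ELSE CAST(max_size AS bigint) * 8 / 1024
       END AS max_size_mb
FROM sys.database_files;

-- Raising the cap on a capped file would look roughly like this (Managed Instance):
-- ALTER DATABASE [YourDsaDb] MODIFY FILE (NAME = 'data_0', MAXSIZE = 250GB);
```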
@Christian Hauggaard Database is Azure SQL General Purpose serverless, not a managed instance. The database properties dialog in SSMS doesn't have a Files page…
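For what it's worth, on Azure SQL Database the size cap sits at the database level rather than per file; a check along these lines (just a sketch) should show how close the DSA database is to its cap:

```sql
-- Database-level size cap vs. currently allocated space (Azure SQL Database).
SELECT CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS bigint) / 1048576.0 AS max_size_mb,
       SUM(CAST(size AS bigint)) * 8 / 1024.0 AS allocated_mb
FROM sys.database_files;
```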
@andri ,
you should be getting errors if you are exceeding storage limits. Are you also running into log waits?
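As a sketch of what I mean by log waits: on Azure SQL Database the database-scoped wait stats should show whether log-related waits are piling up (on MI the instance-scoped sys.dm_os_wait_stats would be the equivalent):

```sql
-- Accumulated log-related waits since the stats were last cleared.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_db_wait_stats
WHERE wait_type LIKE '%LOG%'
ORDER BY wait_time_ms DESC;
```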
@rory.smith There were no errors, everything stopped on the database side and the TX execution just continued running, waiting for… nothing. It was hitting the limit set on the database file but not the storage of the Azure SQL MI server.
Hi,
that's the thing - all of the documentation points to an error being thrown if you go over storage and there are no known issues for something similar. If you have a hard limit on data file size for a database in a regular SQL Server it will error out. Typically MI very closely follows regular SQL Server behaviour. I don't have any Managed Instance myself to check but I wonder if TimeXtender is swallowing an error or whether Azure SQL MI is not throwing one. In the latter case that should be reported to Microsoft I guess.
Also please confirm what step the execution is getting stuck on, e.g. data transfer or data cleansing?
@Christian Hauggaard yes, data on demand is activated, but the ODX loads the table from the source system fine. The problem is in the load from ODX to DSA; it hangs when the records are inserted into the raw table, so during the data transfer phase, not the cleansing.
Dear @wouter.goslinga, I once had some issues with the transfer between the ODX and DSA (Lake and Azure SQL) when the bulk insert was too big. Maybe you can check those settings as well. When it tried to insert a certain number of rows, it would just break. There used to be a default setting of transferring 300,000 rows per bulk insert. If you set it too high, or set it to 0, you might run into problems.
Hope this helps
= Daniel
@daniel thanks for the suggestion, the number of rows per bulk insert is still an option, but you now set it in the portal for each data warehouse instance. In this case it is at the default 300K; the strange thing is that the first three batches are loaded into the raw table correctly (there are exactly 900K records in it) and then it hangs.
Hi @wouter.goslinga ,
does it always hang on the fourth batch? In that case it may be related to the data contents if you are using ADF for the transfer, as there are sometimes quite elaborate data type conversions going on. If you are not using ADF, I think running a SQL trace would be the best way to get more insight into what is going on.
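If setting up a classic trace is awkward, a minimal database-scoped Extended Events session is an alternative on Azure SQL DB. A sketch (the session name is just an example):

```sql
-- Capture attention events (client cancel / timeout) and reported errors
-- during the DSA load; results can be read from the ring buffer target.
CREATE EVENT SESSION [tx_dsa_load_trace] ON DATABASE
ADD EVENT sqlserver.attention,
ADD EVENT sqlserver.error_reported (WHERE severity >= 16)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [tx_dsa_load_trace] ON DATABASE STATE = START;
```

While the load is running, the ring buffer contents should be visible via sys.dm_xe_database_session_targets.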
Hi @wouter.goslinga were you able to run a SQL trace? can you please confirm if it always hangs on the fourth batch?
Hi @wouter.goslinga are you still experiencing this issue? Please let us know if you were able to run a SQL trace
@Christian Hauggaard I'm unable to do the SQL trace; it keeps giving a strange error when I try to run the trace extension in Data Studio.
@wouter.goslinga which error message do you get when trying to run the trace?
Hi @wouter.goslinga can you please confirm the error message when running the trace?
@Christian Hauggaard sorry, I haven't been able to do work for this customer for quite a few weeks. The problem is probably related to some implicit typecasting between ODX and DSA, but I still have to figure that out. Will let you know when it's fixed!
Dear @wouter.goslinga ,
Maybe it is nothing, but could it be that the SQL database ran out of storage?
I've encountered an issue like this before on my Azure SQL database: after a while the execution would fail, but with a weird error. After some time I found that the SQL database had run out of storage and did not notify me. Hope this helps
= Daniel
@Christian Hauggaard so finally I'm able to do a trace. But it does not give a lot more information:
As you can see, it inserts a few batches but then it just stops. The only thing I can see in the trace is the line named ‘attention’, but it does not show any kind of error. At this point, the execution in TX still shows as running but nothing is happening anymore.
The attention event points to a connection being cancelled, lost or timing out in a driver (among other possibilities). For some reason TimeXtender does not seem to pick up that the transaction has been ended on the SQL Server side, or SQL Server isn't nicely handling it.
As this is Azure SQL DB, the SQL engine should be state-of-the-art version-wise so there is nothing you can do there. It might be useful to see if you can isolate this down to problem records or whether there is an infrastructural (networking) issue at fault.
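A rough way to narrow it down to problem records, as a sketch only: pull the rows that would fall around the fourth 300K batch from the source table and look for values that would not survive an implicit conversion. The table name, the ordering key and the batch boundaries below are assumptions, since the batch order TimeXtender uses may differ:

```sql
-- Hypothetical probe: rows that would fall in the 4th batch of 300,000
-- (ordering key Id is an assumption).
SELECT *
FROM dbo.SourceTable
ORDER BY Id
OFFSET 900000 ROWS FETCH NEXT 300000 ROWS ONLY;

-- mut_dt values that an implicit datetime2 -> datetime cast would reject
-- (datetime only supports dates from 1753-01-01 onwards).
SELECT *
FROM dbo.SourceTable
WHERE mut_dt < '1753-01-01';
```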
Hi @wouter.goslinga
Do you see any errors in the event viewer?
You are using Azure SQL General Purpose serverless for the DSA DW instance storage, correct? Are you using a SQL db or ADLS for your ODX storage? Is the source SQL db for the data source a different SQL db than the ones used for instance storage?
What number of min vCores and max vCores are you using for the serverless Azure db? Typically you should set the min vCores and max vCores to the same level.
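If it helps, the max vCores setting is encoded in the service objective, which you can check with something like this (a sketch; min vCores and auto-pause are visible in the Azure portal under Compute + storage):

```sql
-- Current edition / service objective of each database
-- (for serverless the objective looks like e.g. GP_S_Gen5_4 = max 4 vCores).
SELECT d.name,
       dso.edition,
       dso.service_objective
FROM sys.database_service_objectives AS dso
JOIN sys.databases AS d
    ON d.database_id = dso.database_id;
```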
Can you please try creating a new execution package which includes the DSA table and enable retries on this execution package and see if this makes a difference?