
I am working on a project and the execution is much too slow.

 

It has taken 35 minutes for 5 tables, each of which has 10,000 rows.

 

 

 

Hi @ignacio 

Which version of TX are you using?

Are you using ODX or business units?

Are you seeing performance issues on the transfer from ODX to DSA or data cleansing in the DSA?

What storage are you using for ODX/BU and DSA? e.g. SQL Server, Azure SQL db, data lake, etc.


Hi @Christian Hauggaard,

We work with version 20.10.36.64.

We use ODX. The transfer from ODX to DSA is slow and on some occasions returns the following error:

The timeout period elapsed prior to completion of the operation or the server is not responding.

 


Hi @ignacio 

Which ODX storage are you using? SQL Server, Azure SQL db or data lake?

Please also confirm which DSA storage you are using, i.e. on-prem SQL Server or Azure SQL db?

Can you please try increasing the connection and command timeouts for your ODX and for your DSA?

Please also try to reduce the number of threads to 2 in the execution package, and deploy (i.e. by right-clicking on the project and selecting “Save project as deployed version” under advanced)

 

Please also monitor the CPU and memory on the app server during execution to see if there are any bottlenecks
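
If you want to see what the transfer is actually waiting on while it runs, one option (not a TimeXtender feature, just the standard SQL Server DMVs run against the DSA server) is something like:

-- Active requests while the execution package runs: what each session is
-- waiting on and how long it has been running.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.cpu_time,
       r.total_elapsed_time,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;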


@Christian Hauggaard 

About your questions:

Which ODX storage are you using? SQL Server

Which DSA storage are you using? On-prem SQL Server

I changed the timeouts according to your advice.

I don't understand why it works in the Query Tool. I am frustrated.


Hi @ignacio I have created a support ticket for this


Dear @ignacio ,

I believe I've had the same issue before. The thing is that you've set the max rows to copy (or the batch size) to 0, which means all the rows.

In this case TX will try to take ALL the rows from the ODX and transfer them in 1 batch, which is usually huge, will most of the time fail, and results in timeouts (because it takes too long). Also, the machine you are working on might run out of RAM really easily and quickly.

Maybe if you change the max rows to copy to 300,000 (the default TX setting) this will fix your problem?
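
To illustrate what the batching does (this is only a sketch, not how TX implements it internally; the ODX/DSA database and table names are made up), limiting the batch size keeps each statement small instead of moving everything in one go:

-- Hypothetical example: copy rows from an ODX table to a DSA table in
-- batches of 300,000 instead of one huge INSERT ... SELECT.
DECLARE @BatchSize int = 300000,
        @RowsMoved int = 1;

WHILE @RowsMoved > 0
BEGIN
    INSERT INTO DSA.dbo.Customer (CustomerID, CustomerName)
    SELECT TOP (@BatchSize) s.CustomerID, s.CustomerName
    FROM ODX.dbo.Customer AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM DSA.dbo.Customer AS d
                      WHERE d.CustomerID = s.CustomerID);

    SET @RowsMoved = @@ROWCOUNT;  -- 0 when there is nothing left to copy
END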

 

Hope this helps

= Daniel


Hi @daniel, I made the change you mentioned and the process now takes 1 hour and 37 minutes. I stopped the scheduler service so that only my execution was running.

 

 


Dear @ignacio ,

Can you share a screenshot of the Gantt chart? I'm wondering where the execution is so slow.


Yes @daniel 

 


Hi,

I don't see any mention of the scaling of resources. Perhaps it is useful to know:

  • are you running everything on one virtual machine? ODX Server, TimeXtender, SQL Server?
  • how many cores and how much memory does your TX VM have?
  • how many cores and how much ram does your SQL Server have?
  • is your SQL Server instance only dedicated to TimeXtender or is it also hosting other applications’ databases?

It looks like there is no ODX Server queuing issue, so I would focus on whether your TX VM or your database (or both) are the bottleneck. Take a look at Resource Monitor on your VM to ensure that no CPU, memory or disk subsystem is being exhausted. On your SQL Server, check the wait statistics to see where you are losing the most time.
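
For example, a quick look at the top waits since the last restart (standard SQL Server DMV, nothing TimeXtender-specific):

-- Top waits accumulated since the last SQL Server restart.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE '%SLEEP%'   -- filter out idle/sleep waits
ORDER BY wait_time_ms DESC;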

If your ODX Server, TimeXtender and database server are on different machines, there could also be a network bandwidth limitation involved.


Dear @ignacio ,

I'm really interested in your answers to the questions that @rory.smith raised on the specs of the servers and the architecture.
The Gantt chart tells me that the issue is in the data transfer and not in the data cleansing (lookups and transformations).

I've also seen this happen where SQL Server has not been set to use the correct number of cores that the server has. Another issue could be that the SQL Server block size is not correct.
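
You can quickly check what SQL Server actually sees and uses with the standard DMVs and sp_configure, for example:

-- How many CPUs the OS reports vs. how many schedulers SQL Server created.
SELECT cpu_count, scheduler_count
FROM sys.dm_os_sys_info;

-- Schedulers that are actually online; if this is lower than cpu_count,
-- edition licensing or CPU affinity may be limiting SQL Server.
SELECT COUNT(*) AS online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- Current max degree of parallelism setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';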

 


Hi team 

Sorry for the delay in the response. I had to change the CDA connector to the latest version for my SQL Server.

 

With this change the performance improved, and the client is satisfied with the execution times.


Hello @ignacio ,

So did you upgrade the connector to its latest version, or did you change connectors completely?

What is the change in load times?

Thanks!

= Daniel


Hi @ignacio, do you have any implicit type casts (i.e. a different data type in the DSA compared to the ODX)? This can also slow things down considerably in my experience, especially when you implicitly convert float data types. It's better to load them into the DSA with the same data type as the ODX, and then add a new column in the DSA with an explicit cast (I always use try_convert()).

You can check this by syncing the DSA; if it shows data type differences, then sync the table.
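
A minimal sketch of the explicit cast (the table and column names here are hypothetical):

-- Keep the raw column with the same data type as the ODX and add an
-- explicitly converted column instead of relying on an implicit cast.
SELECT
    s.Amount_Raw,                                     -- e.g. nvarchar(50), same as in the ODX
    TRY_CONVERT(float, s.Amount_Raw) AS Amount_Float  -- explicit cast; NULL when conversion fails
FROM DSA.dbo.SalesRaw AS s;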


@Daniel Alm, yes, I upgraded the connector to its latest version and it is working better.

