
Hi, 

I'd love to get your thoughts on an issue we've been dealing with over the past few days. We've noticed that the transfer time from ODX to DSA is growing rapidly. This has a significant impact on the total running time of the DWH and on our SLA towards our client.

Execution log of DSA 

As you can see, while the insert and cleansing tasks remain pretty much the same, the transfer time is growing.

In terms of volume, the data hasn't grown significantly:

Batch Id 13428 (first) - 46347863 rows

Batch Id 13481 (last) - 46536187 rows (+0.4%)

The difference in transfer time becomes more apparent when you zoom in on the package execution (Gantt chart).

Batch Id 13428 

vs. last batch (13481)

I found no inconsistencies in the ODX queue or in the ODX service logs.

I've also checked whether there are long-running transactions (sp_who2) on both the TX and DWH servers, but couldn't find anything out of the ordinary.
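
For completeness, the check was essentially sp_who2 on both servers; the sys.dm_exec_requests query below is just an alternative sketch that filters on elapsed time:

EXEC sp_who2 'active';

-- Alternative: list requests running for more than a minute (elapsed time is in milliseconds)
SELECT session_id, start_time, status, command, total_elapsed_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE total_elapsed_time > 60000
ORDER BY total_elapsed_time DESC;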

Our setup consists of the ODX repository and the DWH storage on the same Azure Managed Instance. The TX application server is (still) on-prem.

My burning question is: what could be causing the long-running transfer tasks, and what can we do to solve this?

With kind regards, 

Dror

1. Which version of ODX are you using?

2. Are there other ODX tasks or executions running?

3. Review the backlog size

Use this guide to connect to a copy of the backlog using SQLiteStudio:

How to Query ODX local Backlog SQLite database

Run the following SELECT queries (one statement at a time):

SELECT COUNT(*) FROM [ExecutionLogs]
SELECT COUNT(*) FROM [ExecutionTasks]
SELECT COUNT(*) FROM [ServiceLogs]
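
(As a side note, a quick way to sanity-check the backlog's on-disk size from within SQLiteStudio is with two standard SQLite pragmas; the product of the two values is the approximate database size in bytes.)

PRAGMA page_count;
PRAGMA page_size;
-- approximate size in bytes = page_count * page_size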


Hi Syed, 

Thanks for looking at it. 

1. ODX version: 20.10.26

2. The DWH execution typically runs after (sequentially to) the ODX tasks, so no, there shouldn't be any overlap. There is also no standing ODX task in the ODX queue.

3.

SELECT COUNT(*) FROM [ExecutionLogs] > 1670
SELECT COUNT(*) FROM [ExecutionTasks] > 41355
SELECT COUNT(*) FROM [ServiceLogs] > 150

 

Backlog size is 1.1 MB.

 

 


Hi Dror,

Can you double-check the backlog size (1.1 MB or 1.1 GB)?

Can you test your scenario in the latest version? There are a few performance improvements.

Please review the Release Notes for TimeXtender 20.10.x.

Also review Troubleshooting concurrency issues in ODX data sources 


Hi Syed, 


I stand corrected. The database is not so lite (light) anymore: 1.1 GB.


Are there any cleanup scripts we can use?


I will plan an emergency upgrade, as this issue is getting worse by the day.


With kind regards, 


Dror 


 


Your current version includes auto-cleanup of logs, which deletes logs older than 90 days. However, depending on the number of data sources and the frequency of task executions, the backlog may still contain a lot of logs. You may try deleting logs manually. Also review Troubleshooting concurrency issues in ODX data sources.
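
For illustration only, a manual cleanup could look like the sketch below. The table names come from the COUNT queries earlier in this thread, but the timestamp column (shown here as [Time]) and its storage format (assumed to be text that sorts chronologically, e.g. ISO 8601) are assumptions; verify the actual column names in SQLiteStudio, take a copy of the backlog file before deleting anything, and note that a VACUUM is needed afterwards to actually shrink the file:

-- Assumes a timestamp column named [Time]; verify the real column name first
DELETE FROM [ExecutionLogs] WHERE [Time] < datetime('now', '-90 days');
DELETE FROM [ExecutionTasks] WHERE [Time] < datetime('now', '-90 days');
DELETE FROM [ServiceLogs] WHERE [Time] < datetime('now', '-90 days');

-- Reclaim the freed space on disk
VACUUM;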

