
I want my transfer to MDW job to run only when my ODX transfer task has completed without any errors. What sometimes happens now is that, for some reason, there is an error in the extraction and some tables end up empty; they then get pushed to the MDW and our reports break because of the empty tables. I see that you can use instance variables, but I don’t see that option on my ODX. How can I set this up?

Hi @Niels Jonker, are you using SQL Server ODX storage?


@Christian Hauggaard I’m using Data Lake storage for the ODX


@Niels Jonker Thanks for confirming. Can you please share the error message you are getting in your transfer tasks?


@Christian Hauggaard I’m not getting any error messages. I’m just wondering how to set it up so that the transfer of data from the ODX only executes when the transfer task completed successfully, without any errors. I hope it is possible; otherwise I hope someone can explain how they deal with this. If it’s not possible, I’ll add a feature request.


@Niels Jonker 

If you add it as an idea, I’ll upvote it :-)

(I’m happy to add it if you want)


Hi @Niels Jonker, which version of TimeXtender are you running?

The reason I ask about error messages is that I would like to investigate whether a particular error results in empty tables, as I have not seen this happen on ODX Data Lake storage before. I just tested with a SQL Server data source: I removed a column in my source that I had explicitly included in the table/column selection for the ODX data source, which resulted in an error when I transferred. No new parquet file was created in my ADLS, so when I executed the table in my MDW the same data was present as before (i.e. records were not deleted). In this test case, despite the error in the transfer task, the tables did not end up empty, which means there is no need for a “rule” that only transfers data from the ODX when the transfer task completed successfully without any errors.
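
If you want to verify the same behavior on your side, you can check whether a transfer actually wrote a new parquet file for a given table. Here is a minimal Python sketch, assuming the azure-storage-file-datalake package; the storage account, container, and folder names are placeholders you would replace with your own ODX storage details.

# Minimal sketch: find the newest parquet file in a table folder in ADLS
# Gen2, to confirm whether the last transfer actually wrote a new file.
# Account, container, and folder names below are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential="<account-key>",  # or any azure-identity credential
)
fs = service.get_file_system_client("<odx-container>")

latest = None
for path in fs.get_paths(path="<data-source>/<table-folder>"):
    if not path.is_directory and path.name.endswith(".parquet"):
        if latest is None or path.last_modified > latest.last_modified:
            latest = path

if latest is None:
    print("No parquet files found for this table.")
else:
    print(f"Newest file: {latest.name} (modified {latest.last_modified})")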

In regards to “What happens sometimes now is that for some reason there is an error in the extraction and some tables are empty” - which data source are you using and can you please send the error messages for those failed extractions?


@Christian Hauggaard Ah, I see what you mean. I think the complexity comes from the fact that we are using Theobald here. The problem was that the Theobald extraction created log files that filled the hard drive on the VM, and once there was no more room the extractions failed. However, I think that when the extractions fail in Theobald, TX does not know that the output from Theobald’s JSON endpoints is invalid, so it grabs that data and loads it into the data lake. That also explains why some tables were filled and others were not: those extractions had completed successfully before the hard drive was full.


@Niels Jonker Are you using Theobald Xtract Universal with the REST provider? Perhaps it is possible to turn off or reduce logging in Theobald Xtract Universal. Alternatively, can you increase the size of the hard drive on the VM, to avoid running into the same issue?
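
As a stopgap, you could also schedule a small check that warns before the disk fills up again. A minimal Python sketch, where the drive letter and threshold are assumptions you would adjust to the VM that runs Xtract Universal:

# Minimal sketch: warn when free space on the drive holding the Theobald
# log files drops below a threshold. Drive and threshold are placeholders.
import shutil

LOG_DRIVE = "C:\\"   # drive where Xtract Universal writes its log files
MIN_FREE_GB = 10     # warn below this many GB free

usage = shutil.disk_usage(LOG_DRIVE)
free_gb = usage.free / (1024 ** 3)
if free_gb < MIN_FREE_GB:
    print(f"WARNING: only {free_gb:.1f} GB free on {LOG_DRIVE}")
else:
    print(f"OK: {free_gb:.1f} GB free on {LOG_DRIVE}")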


@Christian Hauggaard Yes I am. I already decreased the log retention, so that issue should be fixed. I don’t currently have any issue with this, but I was wondering if there was a way to set it up. I do think it would be useful to be able to create conditions for MDW jobs based on the ODX status, so I will create a feature request for that. Thanks for the help.
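
In the meantime, as a workaround outside TimeXtender, a simple pre-check can catch the empty-table case before reports run. A minimal Python sketch using pyodbc; the connection string and table names are hypothetical placeholders for our environment:

# Minimal sketch: abort the downstream run if any key MDW table is empty.
# Connection string and table names are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<mdw-server>;DATABASE=<mdw-db>;Trusted_Connection=yes"
)
TABLES = ["dbo.Customers", "dbo.Sales"]  # hypothetical table names

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    empty = [t for t in TABLES
             if cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0] == 0]

if empty:
    raise SystemExit(f"Empty tables, aborting downstream run: {empty}")
print("All tables populated - safe to proceed.")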


@Søren Sørensen I will add the feature request so you can upvote it :)

