Hi @KajEmergo
What version of TX are you currently on?
What sort of data source is it?
Normally this happens when the data source sees the table as having disappeared, and a new table with exactly the same name and fields then attempts to use the same folder.
The only way to resolve it is to delete the table-level folder in the data lake. On the next execution a new table with this name is generated, but it contains a different id in the _model.json file.
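As a rough illustration of the id mismatch, here is a minimal sketch that pulls the per-entity id out of a _model.json document. The general shape (an "entities" array with "annotations" name/value pairs) follows the Common Data Model folder format, but the specific "Id" annotation name is an assumption; inspect your own file before relying on it.

```python
import json

# Hypothetical _model.json fragment (the "Id" annotation name is an
# assumption for illustration; check the file TX actually writes).
model_json = json.loads("""
{
  "name": "ODX",
  "entities": [
    {
      "$type": "LocalEntity",
      "name": "Customers",
      "annotations": [{"name": "Id", "value": "a1b2c3"}]
    }
  ]
}
""")

def entity_ids(model_doc):
    """Map entity name -> id annotation, where one is present."""
    ids = {}
    for entity in model_doc.get("entities", []):
        for ann in entity.get("annotations", []):
            if ann.get("name") == "Id":
                ids[entity["name"]] = ann["value"]
    return ids

print(entity_ids(model_json))  # {'Customers': 'a1b2c3'}
```

Comparing the ids extracted from the old and the regenerated _model.json would show the mismatch: same table name, different id, which is why the stale table-level folder has to go.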
Hi @Thomas Lind,
We are on 20.10.40 and the data source is an OData source from CData.
So basically what you are saying is that the table might have been removed and then added again. Could it also be caused in the following way:
- On data source level we have selected a number of tables
- These tables are used in synchronize task
- Deselect one of those tables at the data source level and run the synchronize task again
We have indeed used your solution multiple times before, but the problem keeps reappearing, and we want to get rid of it once and for all.
Best,
Kaj
Hi @KajEmergo
This generally happens when you synchronize, so normally my suggestion would be to remove the Sync task from any scheduled execution.
Instead, you run the synchronization task when you know there are changes in the source you want to apply. Part of the reasoning is that whenever there is a change, you can't avoid doing a manual synchronize task for the Data Warehouse anyway.
This issue is common for any file-based CData data source (CSV, JSON, XML, Excel and similar), where the synchronize task may fail to find a table on one run and then find it again on a later run.
There really is no other way to avoid it than to run the synchronize task less often.
I don’t know if CData specifically is to blame, but I don’t really see this with any other data source types.
If you want to make a case for it, I would suggest structuring some sort of test to prove the issue.
So recreate the existing data source and choose some tables. Set the sync task to run multiple times a day and schedule the transfer task to start in between those runs. See if you can catch the warning occurring and create a support ticket with it.
This way of working indeed sounds fine as a way to avoid the issue. I agree that it makes sense not to schedule the synchronize tasks; this used to be manual work when using a business unit as the ODX too.
As long as this prevents the errors, we are okay with it and will probably not make a case for it. Nevertheless, I agree about the CData point; this rarely happens on an ADF SQL source, for example.
Best,
Kaj
Hi,
We have run into exactly the same error. However, based on the above information I don't understand how we can solve this issue. Could you please provide more information/steps on how to fix it?
Thanks in advance!
Kind Regards,
Devin
Hi @devin.tiemens ,
The way to get rid of the warnings is to delete the source data in the ODX storage. The way to prevent them is not to automate synchronization, so you do not sync when there is no data.
Thanks @rory.smith this indeed solved the problem!