I have a REST interface loading via the ODX in V20 into an Azure Data Lake.
Recently, an infrastructure change caused the REST endpoint to time out (the endpoint has an IP whitelist, and the server's IP had been removed from it).
This resulted in the Synchronization task determining that there were no tables available to load, even though multiple custom RSD files have been generated and the CData REST driver is configured to never generate schema files.
The IP has been restored to the whitelist and the REST interface is working again (after several days of the ODX loading nothing). Unfortunately, the metadata for every table has been flagged as invalid. When I run the ODX load, every table returns an error saying the table has already been used by a table with a different GUID. And if I run the synchronize task on the ODX, it deletes the mappings of every table connected to that ODX source (40+ tables).
I cannot even preview the data from the Gen2 storage through TimeXtender.
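Before deleting anything, one sanity check I can do outside TimeXtender is to confirm the parquet files themselves are still readable, so the problem is clearly the ODX metadata and not corrupt files. This is a minimal sketch, assuming the files have been downloaded locally from the Gen2 container; it only checks the Parquet "PAR1" magic bytes at the head and foot of each file, which every valid parquet file must have:

```python
import os
import struct

def check_parquet_magic(path):
    """Return True if the file has the Parquet 'PAR1' header and footer magic.

    A structurally sound parquet file starts with the 4 bytes b'PAR1' and
    ends with a 4-byte little-endian footer length followed by b'PAR1'.
    This does not validate the schema, only basic file integrity.
    """
    if os.path.getsize(path) < 12:  # too small to hold both magics + footer length
        return False
    with open(path, "rb") as f:
        header = f.read(4)
        f.seek(-8, os.SEEK_END)     # last 8 bytes: footer length + trailing magic
        tail = f.read(8)
    footer_len = struct.unpack("<I", tail[:4])[0]
    return header == b"PAR1" and tail[4:] == b"PAR1" and footer_len > 0
```

If the files pass this check, the data itself is almost certainly fine and it really is only the ODX-side GUID metadata that is out of sync.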
It seems my only option is to completely delete the entire source folder in the Gen2 storage, run a full ODX load to recreate the parquet files with "valid" metadata, manually remap all of the tables to the ODX, and then promote the change up through each environment to get production running again!
Can someone verify whether I have another option to correct this?
If I don't have any other option, can someone please explain why and how this can happen? Why can't we have a "Synchronize on Name" feature for the ODX like the BU has?
This is not the first time I have had to do something similar during a development cycle. It seems the metadata the ODX manages is very fragile. Why can't it sync with the existing tables in the data lake?
Paul.