Hi

I've just upgraded our existing TX Ingest and Data Integration instances to the latest version (6848.1) on our dev server, and I now get failures when running my ODX data source steps, specifically on what used to be known as the synchronise step, now called the import metadata task. Each data source originally had three steps (synchronise, transfer and storage management), and they all used to run fine. Now I get seemingly random job failures, so far only on what was the synchronise step.

If I run a failed step manually, it processes without issue. However, when I run the entire ODX job I have set up, one or more of the data source sync steps fails.

 

The execution failed with error:
Exception Type: Microsoft.Data.SqlClient.SqlException
Message: Transaction (Process ID 76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Stack Trace:
   at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, SqlCommand command, Boolean callerHasConnectionLock, Boolean asyncClose)
   at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at Microsoft.Data.SqlClient.SqlBulkCopy.RunParser(BulkCopySimpleResultSet bulkCopyHandler)
   at Microsoft.Data.SqlClient.SqlBulkCopy.CopyBatchesAsyncContinuedOnSuccess(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
   at Microsoft.Data.SqlClient.SqlBulkCopy.CopyBatchesAsyncContinued(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
   at Microsoft.Data.SqlClient.SqlBulkCopy.CopyBatchesAsync(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
   at Microsoft.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestContinuedAsync(BulkCopySimpleResultSet internalResults, CancellationToken cts, TaskCompletionSource`1 source)
   at Microsoft.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestAsync(CancellationToken cts, TaskCompletionSource`1 source)
   at Microsoft.Data.SqlClient.SqlBulkCopy.WriteToServerInternalAsync(CancellationToken ctoken)
   at Microsoft.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServerAsync(Int32 columnCount, CancellationToken ctoken)
   at Microsoft.Data.SqlClient.SqlBulkCopy.WriteToServer(DataTable table, DataRowState rowState)
   at DataStorageEngine.SQL.SQLStorageEngine.<>c__DisplayClass69_0.<ImportDataSourceMetaData>b__0(IDbCommand command, IDbTransaction transaction)
   at DataStorageEngine.SQL.SQLStorageExtensions.ExecuteCommand(IDbConnection connection, Int32 commandTimeout, Action`2 action, IsolationLevel isolationLevel)
   at DataStorageEngine.SQL.SQLStorageEngine.ImportDataSourceMetaData(DataSourceModel dataSourceModel, List`1 tableModels)
   at ExecutionEngine.Action.ExecutionAction.<.ctor>b__11_0()
 

Any advice?

Hi Paul,

Are you scheduling the synchronise steps? You may not actually want to, as schema changes can require a human decision. The metadata process is really two parts: extracting the schema from the source, and applying it to the structure of the Ingest storage. For the error itself, I would raise a support ticket.
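
In the meantime, note that the message itself says "Rerun the transaction": deadlock victims (SQL Server error 1205) are safe to retry. You obviously can't patch the Ingest service's own bulk copy code, but if you ever handle this in .NET code of your own, the usual client-side pattern is a short retry loop with back-off. A minimal sketch, assuming Microsoft.Data.SqlClient (the class and method names below are illustrative, not part of the product):

using System;
using System.Threading;
using Microsoft.Data.SqlClient;

static class DeadlockRetry
{
    // SQL Server reports a deadlock victim as error number 1205.
    private const int DeadlockErrorNumber = 1205;

    public static void ExecuteWithRetry(string connectionString, string sql, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using var connection = new SqlConnection(connectionString);
                connection.Open();
                using var command = new SqlCommand(sql, connection);
                command.ExecuteNonQuery();
                return; // succeeded
            }
            catch (SqlException ex) when (ex.Number == DeadlockErrorNumber && attempt < maxAttempts)
            {
                // Back off briefly before rerunning the transaction, as the error text suggests.
                Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
            }
        }
    }
}

Within Ingest itself the practical equivalent may simply be staggering the schedules so the metadata tasks don't all run concurrently, or rerunning the failed step, which matches what you are seeing: the steps succeed when run one at a time.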


Hi Rory,

I am scheduling the synchronise steps as part of the overnight data ingest, as that is what I have always done. Is that not best practice, i.e. should the sync steps only ever be run manually?

I should add that I first ran all the individual data source metadata sync steps manually, without any issue.

