
When you set the primary key behaviour to 'error', you elegantly force all valid data to be unique combinations of your selected primary keys. But how do you control which individual record is regarded as valid and which is discarded as an error?


Is there a way to tell the primary key violation error handling which record you consider the valid one (e.g. based on min/max values in non-primary-key data fields)?

I do not think there is a specific way to control this. Usually the row that arrives first is allowed to stay and the following ones are sent to error, and you cannot control the order in which rows are added.


If you map more than one table, changing their order may make a difference in which one comes first, but I am not sure.

In general it would be better to figure out a way to make each row unique, or to find a data selection rule that removes the rows you do not want.
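As a rough illustration of such a selection rule, here is a minimal Python sketch that keeps, for each primary-key combination, only the row with the largest value in a non-key field. The field names (customer_id, order_no, last_updated) are made up for the example; in practice you would implement the equivalent rule in your own data selection logic.

```python
def select_valid_rows(rows, key_fields, tiebreak_field):
    """Keep one row per primary-key combination: the row with the
    largest value in `tiebreak_field`. All other duplicates are dropped."""
    best = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key not in best or row[tiebreak_field] > best[key][tiebreak_field]:
            best[key] = row
    return list(best.values())

rows = [
    {"customer_id": 1, "order_no": 10, "last_updated": "2024-01-05"},
    {"customer_id": 1, "order_no": 10, "last_updated": "2024-03-01"},  # kept
    {"customer_id": 2, "order_no": 11, "last_updated": "2024-02-15"},  # kept
]
print(select_valid_rows(rows, ["customer_id", "order_no"], "last_updated"))
```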

 

Another solution could be to deactivate the primary key violation error handling on the specific table and instead create one or more transformation fields on that table to identify the valid record. The transformation would compute to either 1 or 0, and you would then set a field validation rule that only accepts rows where that field is 1.
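To make the flag idea concrete, here is a small hedged sketch in Python of the same logic: an "is_valid" field is computed per row (1 for the record to keep within each primary-key group, 0 for the rest), and the validation rule simply requires the flag to be 1. The field names and the helper functions are illustrative only, not the tool's actual syntax.

```python
def add_is_valid_flag(rows, key_fields, tiebreak_field):
    """Flag the row with the highest tiebreak value in each group with 1,
    and every other row in that group with 0. Rows are kept, not removed."""
    best = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key not in best or row[tiebreak_field] > best[key][tiebreak_field]:
            best[key] = row
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        row["is_valid"] = 1 if best[key] is row else 0
    return rows

def passes_validation(row):
    # The field validation rule: only rows flagged 1 are considered valid.
    return row["is_valid"] == 1
```

The difference from the selection-rule sketch above is that nothing is filtered out here; every row keeps its flag, and the validation rule decides which rows count as valid and which go to error.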

