Solved

Incorrect syntax near the keyword 'ORDER'

  • 18 January 2024
  • 12 replies
  • 189 views

Hello,

We’re getting the following error when executing a table in DSA.

ODX object synchronization detects no changes, and our ODX source is synced as well.

I’m guessing the ORDER refers to an ORDER BY… Any idea what causes it?

 

Thanks


Best answer by Thomas Lind 18 January 2024, 15:03


12 replies


Dear @ofirb ,

Is this a physical table created from a view? If so, it might be that you've put an ORDER BY in the view. That does not work when creating a physical table from a view; once you take the ORDER BY out, it will work.

Hope this helps

= Daniel
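For illustration, a minimal T-SQL sketch of the pattern Daniel describes, using hypothetical table and view names (dbo.Sales, dbo.v_Sales): SQL Server rejects an ORDER BY inside a view definition unless TOP, OFFSET or FOR XML is also present, so anything built on top of such a view fails, while the same ORDER BY is fine on the outermost query.

-- Hypothetical names; dbo.Sales stands in for the source table.
CREATE TABLE dbo.Sales (SalesID int NOT NULL, Amount decimal(18,2) NOT NULL);
GO

-- Rejected by SQL Server: ORDER BY is not allowed inside a view definition
-- unless TOP, OFFSET or FOR XML is also specified.
-- CREATE VIEW dbo.v_Sales AS
-- SELECT SalesID, Amount FROM dbo.Sales ORDER BY SalesID;

-- Works: keep the view unordered and sort only on the outermost query.
CREATE VIEW dbo.v_Sales AS
SELECT SalesID, Amount FROM dbo.Sales;
GO

SELECT SalesID, Amount
FROM dbo.v_Sales
ORDER BY SalesID;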


Hi @daniel , thanks for the comment.

 

The source of this table is ODX, so I guess we’re looking at ORDER BYs somewhere in the ETL?

 

*Edit: The error occurs during ODX transfer.


Dear @ofirb ,

A table transfer between the ODX and the DSA never uses an ORDER BY.
Can you show me the mapping of the table and the complete table?

Is there a transformation with a windowed function in there? 
SUM([Field1]) OVER (PARTITION BY [Field2] ORDER BY [Field3]) ?

Is this an error in the transfer of the data (orange bar) or the data cleansing part?
Otherwise, you can see the code that TX created on the DSA side by right-clicking the table - Advanced - Customize code, then selecting the Transformation view and clicking Add.

Choose the standard editor and press OK.
Now you can see the DSA view that TX has created. Check the code for inconsistencies, or copy it to SSMS and see if you can find the issue there.

Please be careful with this option: a change to the code here makes the table completely custom, and TX will no longer maintain it (when names change, for example). So whenever you check the code this way, always press ‘Cancel’ and make changes in the table like you normally would.
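As a side note, if you would rather read the generated view from SSMS than from the Customize code dialog, plain T-SQL can fetch its definition; the schema and view name below are hypothetical, so replace them with the DSA view TX actually created.

-- Hypothetical schema/view name; replace with the view TX generated for your table.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dsa.MyTable_T')) AS view_definition;
-- or, alternatively:
EXEC sp_helptext N'dsa.MyTable_T';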

Hope this helps

= Daniel

@daniel the error indeed occurs during ODX transfer. The execution fails before it gets to cleansing.

I see no inconsistencies in the transformation view.

In fact we saw this error on a different table a few days ago, and synchronizing ODX objects solved it then, but now our ODX says it detects no changes.


@ofirb , 

Did you sync between the ODX and the DSA? Or did you sync the ODX to Source?

Does the table perhaps need to be deployed for some reason?

@daniel I synced both. The table is deployed.

The insert bulk is running, and after 2-3 minutes it fails with the ORDER error.

The _R table is populated, but strangely with a few hundred records fewer than what I’m seeing in the parquet file (1.8 million records in total).

For context, this specific table is newly added from an incremental ODX source, but it is set to full load. So currently there is only one parquet file, because the ODX transfer task is a full load.

 


@ofirb 
I think we need the expertise of @Thomas Lind or @Christian Hauggaard for this. I'm not aware of any ORDER BY or anything like it happening on the back end of TX.

Could the missing rows be duplicate PKs? Check the error table. Or maybe the final load of the bulk insert was killed and those rows were not added.
Could it be that there are quotes, double quotes or brackets (‘, “, [ ]) in the source data? Or maybe hard returns? I've had some issues with those before.

What is the data source in this case?
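A rough sketch of the checks mentioned above, with hypothetical schema, table and column names:

-- Duplicate key check on the raw table (hypothetical names).
SELECT [KeyField], COUNT(*) AS rows_per_key
FROM [dsa].[MyTable_R]
GROUP BY [KeyField]
HAVING COUNT(*) > 1;

-- Rough scan for hard returns or single quotes hiding in a text column.
SELECT COUNT(*) AS suspect_rows
FROM [dsa].[MyTable_R]
WHERE [SomeTextField] LIKE '%' + CHAR(13) + '%'   -- carriage return
   OR [SomeTextField] LIKE '%' + CHAR(10) + '%'   -- line feed
   OR [SomeTextField] LIKE '%''%';                -- single quote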


Hi @ofirb 

If it fails after applying data to the _R table, it means that it fails when it begins the cleansing.

Is it a complete transfer from ODX to DWH without any rules applied or data type changes?

Did this table work before, and are you now getting this error?

Also, if your source uses incremental load, does it also have the check for updates enabled?

If it only has one parquet file, try setting it to automatic instead of full load and see if you get the same error then.
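If it helps to see where rows go missing, a quick count comparison between the raw and the valid table (hypothetical names again) is an easy first check:

-- Hypothetical names: [dsa].[MyTable_R] is the raw table, [dsa].[MyTable] the cleansed one.
SELECT
    (SELECT COUNT(*) FROM [dsa].[MyTable_R]) AS raw_rows,
    (SELECT COUNT(*) FROM [dsa].[MyTable])   AS valid_rows;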

Hi @Thomas Lind 

It is a complete transfer from ODX to DWH with no rules or data type changes.

The source is SAP BW and it is a newly added table, so it has never run before. Check for updates is enabled, but the ODX is doing a full transfer. I do get a warning saying that it will do a full load due to the absent PK, but this is expected.

If I set the table to automatic, I have to define a PK, and I don’t have one at the moment. This is why I’m trying to do a full load to the DWH. I defined a dummy PK and set it to automatic just to see, but still got the same error.


Hi @ofirb 

Normally tables get automatic PK values set. If you use incremental load with updates, that is a requirement.

If you have no PK values in the ODX, it will not help to set them in the DW.

Try to exclude this table from being incrementally loaded in the ODX. Run the sync task of the data source, then run the transfer task, but only for this table, and then run it again in the DWH. Be sure the table in the DWH has no PK values and is set to run without any incremental load rules.

Thanks a lot @Thomas Lind & @daniel 

I can confirm that removing the table from the “incremental with updates” ODX rule has solved the issue. We had a generic rule that automatically included the newly added table, even though we didn’t have a PK for it.

 


@ofirb 

Happy to help!
