We're observing strange behavior with the new enhanced REST data source provider.
When we make the call to this endpoint with the old connector or in Postman, we get 197 records back. This is a single payload, one page, with fully populated rows, including a populated PK field.
When we make the same call with the enhanced REST data source provider, however, we receive 589 records. 392 of these have null values in every field except one, a descriptive field that is not otherwise important. Importantly, the PK field of these records is also null.
Now we have two questions:
Why is the new REST provider delivering more records than other providers/tools?
How can we effectively filter out these records in the call itself? At the moment we seem to be able to filter out these almost-empty records only at the DSA.
Best,
Luuk
Hi @luuk.bouman
If you turn on debug logging, what endpoint call does it make, and is it the same as what is shown in Postman's console log?
If you use dynamic values, try to set fixed values to match Postman as much as possible.
Is the result of the call the same as Postman if you remove all the rows with no PK values?
Hi Thomas,
We make a quite simple GET call to an endpoint. It is the same call as in Postman.
It's straightforward: no pagination, no dynamic values.
The result of the call is indeed the same as Postman if we remove all the rows with no PK values.
Hi @luuk.bouman
I was thinking of this feature. Do you have it applied?
We do, it is turned on. I just tested whether it makes any difference when we turn it off, but it does not.
Hi @luuk.bouman
If you turn on caching to a file, the result of the call will be written there. Can you try this, open the cached result as a file, and see whether it contains 589 rows or 197?
If it contains the same number of rows as Postman, then the extra rows are introduced by some part other than the setup of the endpoint.
Hi Thomas,
Thank you for pointing me in that direction. I did some research, and my observation is that a nested JSON array is causing the extra records.
I sandboxed the situation with limited data and fields and extracted the JSON payload with Postman.
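The shape looks roughly like this minimal stand-in (placeholder field names; one record whose nested CaptionList array holds three values):

{
  "records": [
    {
      "Id": 1,
      "Name": "Example record",
      "CaptionList": ["Caption A", "Caption B", "Caption C"]
    }
  ]
}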
So we get two extra records for the two CaptionList values that remain after the first value is placed in the field on the original record.
This creates two problems: the extra, almost-empty records mentioned above, and malformed content in the CaptionList field. The latter happens because all available CaptionList values are handed out to the data records in sequential order.
Say we originally have 2 records, each with 3 CaptionList values, for a total of 6. In the transformed data, record 1 gets value 1 in its CaptionList field and record 2 gets value 2. That is wrong, however, since value 2 belongs to record 1. At the end, the remaining 4 CaptionList values are added as extra rows.
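To make that concrete, here is an illustrative before/after with made-up caption values A1..A3 on record 1 and B1..B3 on record 2:

Input: record 1 -> captions A1, A2, A3; record 2 -> captions B1, B2, B3

Transformed output:
row 1: PK = 1     CaptionList = A1   (correct)
row 2: PK = 2     CaptionList = A2   (wrong: A2 belongs to record 1)
row 3: PK = null  CaptionList = A3   (extra, almost-empty row)
row 4: PK = null  CaptionList = B1   (extra, almost-empty row)
row 5: PK = null  CaptionList = B2   (extra, almost-empty row)
row 6: PK = null  CaptionList = B3   (extra, almost-empty row)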
I spoke to Rory as well, and I can see that it would be difficult to incorporate another level of XPath logic to correctly capture the values in the CaptionList array. I do think that quite a few APIs out in the field are structured like this; I have encountered two in the last couple of months.
Do you think the DEV team would be able to implement a solution for this problem in the Table builder?
Regards,
Luuk
Hi @luuk.bouman
I would need a file with the JSON setup to pass on if you want the dev team to look at it.
See if you know of a way to send it to me.
I am not sure it will help with this, but I recently made a change to the Fixer IO guide to show how you can change the order in which data is read.
Hi @luuk.bouman do you have an update on the above? thanks
Hi Thomas, Christian,
@Christian Hauggaard, I didn't get around to testing it this past week, but I managed to make some time for it just now.
@Thomas Lind, to test, I just use the JSON I posted earlier, run it through the table builder wizard to get the XSLT as mentioned, and set up a TX Enhanced connector of type JSON with these two ingredients.
I've read through your Fixer IO guide, and the use case seemed to overlap with mine. As I am not really fluent in XSLT, I flew in some AI help. After a lot of prompting I finally got an XSLT script that does the trick. Interestingly enough, the final working code is quite a bit more compact than the output of the table builder. I tested it on the customer's actual production API as well, and that works fine. The end result no longer has extra rows, and for this use case the CaptionList field contains all values from the array concatenated into one string.
Apart from handling the CaptionList array, this XSLT is roughly 160 lines shorter than the XSLT generated by the table builder wizard with the data structure option set to 'Column', but it seems to deliver the same output. I haven't tested it on other data or different structures, so I don't know if this will always fly.
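For reference, the core of the approach looks roughly like this. It is a stripped-down sketch rather than my exact script, and it assumes the provider has converted the JSON to XML with one repeating record element per data record and one repeating CaptionList element per array value (element and field names will differ per setup):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <rows>
      <!-- One output row per data record, so no extra rows are produced -->
      <xsl:for-each select="//record">
        <row>
          <Id><xsl:value-of select="Id"/></Id>
          <Name><xsl:value-of select="Name"/></Name>
          <CaptionList>
            <!-- Concatenate all values of the nested array into one string -->
            <xsl:for-each select="CaptionList">
              <xsl:value-of select="."/>
              <xsl:if test="position() != last()">; </xsl:if>
            </xsl:for-each>
          </CaptionList>
        </row>
      </xsl:for-each>
    </rows>
  </xsl:template>
</xsl:stylesheet>

The key difference from the generated XSLT is that the nested array is handled inside the per-record loop instead of being flattened into rows of its own.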
What are your thoughts on this?
Hi @luuk.bouman
I have done the same for my Fixer IO guide. I also used AI to figure out how to do it, and it made a similar setup. If it works, I am OK with it.
Hi @luuk.bouman, does the comment above answer your question? If so, please help us by marking a best answer above. If you have any follow-up questions, please let us know. Thanks
Hi Thomas, Christian,
Since the setup now works with the above XSLT, we have a working solution. I am still interested in whether handling a JSON array could be incorporated into the table builder wizard. Is this something the dev team would consider looking into?
Thanks for confirming @luuk.bouman
Can you please submit a product idea for handling a JSON array in the table builder? The product team reviews product ideas on an ongoing basis.