Dear @Benny,
What I would do is turn on the logging of the connector. You can find the logging options in the settings: enter a folder and name for the log, set the verbosity to 3 or 4 (maybe 5, but that's really verbose), and run the sync. This should help you understand the issue better and make it easier to check things.
TX shows this error because the API expects an input in the filter section; if none is given, it returns this error message. You've done nothing wrong, this is just how APIs work.
I'm not really sure what you mean by the part about the exhaustive meta data scan. Are you afraid TX does not detect the correct data types?
Hope this helps!
- Daniel
Hi @Benny
How is the source endpoint with the dynamic value set up, and are you sure the name is foreign_ID and not foreign_Id or foreign_id?
Hi @Thomas Lind, the dynamic value is correctly cased.
The endpoint is set up like so:
- Dynamic path like xxx/{foreign_ID}
- Table flattening with XSLT (a minimal sketch follows after this list)
- Only list flattened tables
- HTTP method: GET
- Perform exhaustive meta data scan
- Dynamic values: enabled (pointing to correct endpoint)
- Override headers: Accept = application/xml
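For reference, here is a minimal sketch of what my flattening XSLT looks like. The wrapper element names (Response, Article, Rows, Row) are placeholders for illustration; only the $pathTU variable and the ArticleNumber attribute are taken from the real transform:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- Flatten each article node of the XML response into one row element -->
  <xsl:template match="/">
    <Rows>
      <xsl:for-each select="/Response/Article">
        <xsl:variable name="pathTU" select="."/>
        <Row>
          <!-- TX scans the values produced here to deduce the column's data type -->
          <ArticleNumber>
            <xsl:value-of select="$pathTU/@ArticleNumber"/>
          </ArticleNumber>
        </Row>
      </xsl:for-each>
    </Rows>
  </xsl:template>
</xsl:stylesheet>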
The problem seems to be in the determination of the data types. I suspect this because I managed to create a quick fix by changing the XSLT. The column I suspected of causing problems was altered to <xsl:value-of select="concat('X--', $pathTU/@ArticleNumber)" />,
i.e. I forced an obvious string in front of every value. The sync then runs flawlessly and deduces an NVARCHAR(500) data type for the column. When I remove the concatenation, I get the error.
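In terms of the sketch above, the quick fix swaps the value line like so (again, only the concat line is my real code; the surrounding context is assumed):

<!-- Original: raw attribute value; TX may infer INT from numeric-looking samples -->
<xsl:value-of select="$pathTU/@ArticleNumber"/>

<!-- Quick fix: a constant prefix makes every value non-numeric, so TX deduces NVARCHAR -->
<xsl:value-of select="concat('X--', $pathTU/@ArticleNumber)"/>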
As I said, a lot of the ArticleNumbers are castable to INT, but not all. This seems to confirm my suspicion that the data type is deduced from a subset of the data, in spite of my “exhaustive meta data scan” setting.
Hi @Benny
When you did the exhaustive meta data scan, did the data types change for the fields in the source endpoint with the dynamic value?
@Thomas Lind, I tried once where it did change and once where it didn't. Both attempts resulted in the same error.
Hi @Benny
What did it change from and to, and was it specifically the foreign_ID field? When it changed back, did you do anything to the setup to make that happen?
Have you tried applying a data type override to this specific field to make it an nvarchar(50) or similar, and did that also make it fail?
The initial and intended setting is URL/{foreign_ID} (so foreign_ID is dynamic and comes from a different endpoint).
I looked up two specific foreign_IDs: one where ArticleNumber was castable to INT and one where it wasn't. Let's say that foreign_ID = x has ArticleNumber = 800900 and foreign_ID = y has ArticleNumber = A12345.
I then changed the endpoint: once to URL/x and once to URL/y. Both times the sync ran fine. Both times I also did a transfer and checked the data type for ArticleNumber: the first time it was INT, the second time NVARCHAR(500). Changing the endpoint back to use a dynamic foreign_ID gave the error again. This suggests TX decides that ArticleNumber is an INT, and ArticleNumbers like A12345 then cause this error. I have no way to verify this, because I don't have easy access to the result of the sync.
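To illustrate with those two records (the payload shape is assumed, the values are from my lookups):

<!-- GET URL/x : the scan sees only numeric values and infers INT -->
<Article ArticleNumber="800900"/>

<!-- GET URL/y : a value like this cannot be cast to INT -->
<Article ArticleNumber="A12345"/>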
I have not yet tried to override datatypes.
Hi @Benny
If you do get access to the data source in TimeXtender Data Integration, you should try to add a rule that makes the field an nvarchar, no matter what TX thinks it is.
That said, it should also raise an issue if it thinks the field is an int when you use the endpoint in a data area.
Hi @Benny, did you manage to resolve the issue? Please let us know if you have any follow-up questions.