Recently active
Version 7026.1 - Ingest into Lakehouse

I’ve got a SQL data source where some column names contain spaces. This causes the TX SQL connector (23.0.3.0) to throw the following error when I try to ingest data into a Lakehouse:

Executing table dbo_tx_profitcentreinfo: failed with error: Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema. Please enable Column Mapping on your Delta table with mapping mode 'name'. You can use one of the following commands.

If your table is already on the required protocol version:

ALTER TABLE table_name SET TBLPROPERTIES ('delta.columnMapping.mode' = 'name')

If your table is not on the required protocol version and requires a protocol upgrade:

ALTER TABLE table_name SET TBLPROPERTIES ('delta.columnMapping.mode' = 'name', 'delta.minReaderVersion' = '2', 'delta.minWriterVersion' = '5')

I can read what it says 😀 but I thought I’d ask the Community for advice. Has anyone encountered and resolved this issue? If so, what do you recommend?
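If running the ALTER TABLE commands against the Delta table isn’t practical, a common alternative is to rename the offending columns before they ever reach the Delta writer, for example by pointing the connector at a view that aliases them. A minimal T-SQL sketch, where the table and column names are hypothetical:

-- Hypothetical sketch: expose the source table through a view that
-- renames space-containing columns, so the Delta writer only ever
-- sees valid identifiers.
CREATE VIEW dbo.ProfitCentreInfo_Clean AS
SELECT
    [Profit Centre]      AS ProfitCentre,
    [Profit Centre Name] AS ProfitCentreName,
    [Valid From]         AS ValidFrom
FROM dbo.ProfitCentreInfo;

You would then ingest dbo.ProfitCentreInfo_Clean instead of the original table.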
Hi, I have two questions regarding Orchestration schedule groups.

(1) There is the option to base a schedule on a custom SQL script. When I test this functionality with the following statement:

SELECT CAST('2025-06-10 01:00:00' AS DATETIME)

it gives me an error pop-up. Does anyone know why I don’t get the query result '10-06-2025 01:00:00'? The SQL statement produces a valid datetime value in SSMS.

(2) I need to schedule a job that runs four times per day for a project. I wanted to build this schedule with the custom SQL option, using a calendar table in our database from the Prepare instance. Is this possible? Thanks in advance!
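For (2), assuming the custom SQL schedule expects a query that returns the next run time as a single datetime (which the test in (1) suggests), the four-times-a-day logic could be expressed against a calendar table. A sketch, where dbo.Calendar, its [Date] column, and the run hours are all assumptions:

-- Sketch: compute the next of four daily run times (01:00, 07:00,
-- 13:00, 19:00) from a hypothetical calendar table with one row per date.
SELECT TOP (1)
    DATEADD(HOUR, h.RunHour, CAST(c.[Date] AS DATETIME)) AS NextRunTime
FROM dbo.Calendar AS c
CROSS JOIN (VALUES (1), (7), (13), (19)) AS h(RunHour)
WHERE DATEADD(HOUR, h.RunHour, CAST(c.[Date] AS DATETIME)) > GETDATE()
ORDER BY NextRunTime;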
Hi, I'd like to make a new data source connection using REST to connect with our Azure Storage table. So far I have managed to get the connection set up; however, pagination remains an issue. When testing our connection and following the steps in the debug logging file, I see that when a table has a next page, the "x-ms-continuation-NextPartitionKey" and "x-ms-continuation-NextRowKey" values are returned in the response header (not the body). But if the table doesn't have a next page, the connection stops, since the "x-ms-continuation-NextPartitionKey" header was not found. As I am applying these pagination parameters as query parameters, I need to be able to solve this issue. Is there maybe a way to apply a default value to these variables (manipulating the query doesn't work, as the variable itself is not found in the first place), or some way to dynamically replace a URL?

Example table with no NextPartitionKey/NextRowKey: (screenshot)

When the variable value is replaced with a default: (screenshot)

Thanks for any input someone can provide. Kind regards, Robbert
I am loading a table incrementally into my ODX storage in an Azure Data Lake, where new Parquet files are added daily. I take this approach because the source only retains two weeks of data, and I want to maintain a full log in the ODX. The Parquet storage is very compact.

However, for downstream analysis I only need to retrieve data from the last 1 to 2 days into my Prepare instance. I am using a data selection rule on the mapping, and I have also tried applying it directly on the table. Both approaches take a very long time to complete (over 1 hour), whereas running the same query on the source SQL database, filtering for 2 days of data, completes in about 10 seconds. I suspect that the Prepare instance is scanning through all the Parquet files, including older days, which causes the slow performance.

My question: Is there a way to configure the TX Prepare instance to only process the most recent X Parquet files (e.g., the last 2 days) instead of scanning all files? This would significantly improve the performance.
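I can't confirm a TX-side setting for this, but the underlying behaviour (file pruning vs. full scans) can be illustrated outside TX. A sketch in Synapse serverless SQL, where the storage path, folder layout (one folder per day, named yyyy-MM-dd), and names are all assumptions: filtering on filepath() lets the engine skip old folders entirely, whereas an ordinary WHERE on a data column still has to open every file.

-- Sketch (hypothetical lake path and layout): filepath(1) binds to the
-- first * in the BULK path, so this predicate prunes folders older than
-- two days instead of scanning every Parquet file.
SELECT r.*
FROM OPENROWSET(
    BULK 'https://mylake.dfs.core.windows.net/odx/MyTable/*/*.parquet',
    FORMAT = 'PARQUET'
) AS r
WHERE r.filepath(1) >= CONVERT(VARCHAR(10), DATEADD(DAY, -2, GETDATE()), 23);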