Recently active
I am loading a table incrementally into my ODX storage in an Azure Data Lake, where new Parquet files are added daily. I take this approach because the source only holds two weeks of data, and I want to maintain a full log in the ODX. The Parquet storage is very compact.

However, for downstream analysis I only need to retrieve data from the last 1 to 2 days into my prepare instance. I am using a data selection rule on the mapping, and I have also tried applying it directly on the table. Both approaches take a very long time to complete (over an hour), whereas running the same query on the source SQL database, filtering for 2 days of data, completes in about 10 seconds. I suspect that the prepare instance is scanning through all the Parquet files, including older days, causing the slow performance.

My question: is there a way to configure the TX prepare instance to only process the most recent X Parquet files (e.g., the last 2 days) instead of scanning all files? This would significantly improve the performance.
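For context, the behavior I am hoping for is ordinary file pruning: skipping old files based on their path rather than reading their contents. As a minimal sketch of the idea (not a TimeXtender feature; the storage URL and the one-folder-per-day yyyy-MM-dd layout are assumptions on my part), this is how such pruning looks in Azure Synapse serverless SQL:

-- Read only the Parquet files from the last 2 days, assuming one
-- folder per day named yyyy-MM-dd (hypothetical account/container).
SELECT r.*
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/odx/mytable/*/*.parquet',
    FORMAT = 'PARQUET'
) AS r
-- filepath(1) returns the text matched by the first wildcard (the folder
-- name), so this predicate prunes whole folders before any file is opened.
WHERE r.filepath(1) >= CONVERT(varchar(10), DATEADD(DAY, -2, GETDATE()), 23);

Something equivalent on the TX side, keyed on file or folder dates, is what I am after.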
Hi, I'd like to make a new data source connection using REST to connect with our Azure storage table. So far I have managed to get the connection set up, but pagination remains an issue.

When testing our connection and following the steps in the debug logging file, I see that when a table has an "x-ms-continuation-NextPartitionKey" and an "x-ms-continuation-NextRowKey", it returns these in the header (not the body). But if the table doesn't have them, the connector stops because the "x-ms-continuation-NextPartitionKey" header was not found. Since I am applying these pagination parameters as query parameters, I need a way around this. Is there perhaps a way to apply a default value to these variables (manipulating the query doesn't work, as the variable itself is already not found), or some way to dynamically replace a URL?

Example table with no NextPartitionKey/NextRowKey: [screenshot]

When the variable value is replaced with a default: [screenshot]

Thanks for any input someone can provide.

Kind regards,
Robbert
Hello,

TimeXtender 20.10.51

I'm looking for support with an issue we're experiencing where execution packages are running but no tasks are being completed, while causing 100% usage on the repository.

After deploying to production, we ran a package manually and noticed that it did not proceed past the above step. The "current tasks completed" count stayed at 0, and no data was being loaded, even after waiting up to 30 minutes. We tried running it multiple times, but nothing changed. Running individual tables worked fine. To be sure, we checked the SQL database usage for the DSA: it showed no activity, confirming that no data was being loaded.

While troubleshooting, we found that the repository (a standard 100 DTU Azure SQL database) was hitting 100% data IO and DTU usage. One query in particular was using nearly all the available resources:

(@ObjectId uniqueidentifier)
SELECT [StepId], AVG(DATEDIFF(s, [Start], [End])) AS [AvgSeconds]
FROM [dbo].[ExecutionPackageLogDetails]
WHERE [Object
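One mitigation we are considering is a covering index, so that this average-duration lookup can seek instead of scanning the whole log-details table on every call. A sketch only: the [ObjectId] column name is my assumption, inferred from the query's @ObjectId parameter (the actual predicate is cut off above), so the execution plan should be checked before creating anything.

-- Covering index for the per-object average-duration query:
-- seek on [ObjectId], with the aggregated columns included so the
-- query never has to touch the clustered index.
CREATE NONCLUSTERED INDEX [IX_ExecutionPackageLogDetails_ObjectId]
ON [dbo].[ExecutionPackageLogDetails] ([ObjectId])
INCLUDE ([StepId], [Start], [End]);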
Hi, I have two questions regarding Orchestration schedule groups.

(1) There is the option to create a schedule based on a custom SQL script. When I try out this functionality with the following statement:

SELECT CAST('2025-06-10 01:00:00' AS DATETIME)

it gives me the following pop-up: [screenshot]

Does anyone know why I don't get query result = '10-06-2025 01:00:00'? The SQL statement produces a valid datetime value in SSMS.

(2) I need to schedule a job that runs four times per day for a project. I wanted to build this schedule with the custom SQL option, using a calendar table in the database of our Prepare instance. Is this possible?

Thanks in advance!
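Edit: for (2), this is the shape of the query I have in mind. It assumes a hypothetical [dbo].[SchedulingCalendar] table with one row per date and an [IsWorkday] flag, and it assumes the custom SQL option expects the next run time back as a single datetime value, which is my guess about the contract.

-- Next of four daily run times (06:00, 12:00, 18:00, 23:00) that
-- falls on a workday and is still in the future.
SELECT TOP (1)
    DATEADD(HOUR, t.[RunHour], CAST(c.[CalendarDate] AS datetime2)) AS [NextRun]
FROM [dbo].[SchedulingCalendar] AS c
CROSS JOIN (VALUES (6), (12), (18), (23)) AS t ([RunHour])
WHERE c.[IsWorkday] = 1
  AND DATEADD(HOUR, t.[RunHour], CAST(c.[CalendarDate] AS datetime2)) > SYSDATETIME()
ORDER BY [NextRun];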