Jobs & Executions
Ask questions about Jobs & Executions within TimeXtender
- 87 Topics
- 334 Replies
Hello,

Is there any possibility to know how much data was loaded through the data warehouse during a scheduled execution package? Either through the TX repository, the logs, or through system files in the Azure database?

Let's say my customer has a database with 500 GB worth of data, but we load only certain tables (some full, some incremental) and would like to know how much data we push on every scheduled load.

Thank you,
Victor
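One possible starting point for the question above: if you can export per-table transfer records from the execution logs, you can aggregate them per run. This is only a hedged sketch — the field names (`execution_id`, `rows`, `bytes`) are assumptions, not the actual TX repository schema, so check the real log/repository tables before relying on them.

```python
# Hypothetical sketch: aggregate per-table transfer counts from an
# execution-log export into per-run totals. The record fields below
# are assumptions -- verify against the actual TX repository schema.
from collections import defaultdict

def summarize_load(log_rows):
    """Sum rows and bytes per execution run from exported log records."""
    totals = defaultdict(lambda: {"rows": 0, "bytes": 0})
    for rec in log_rows:
        run = totals[rec["execution_id"]]
        run["rows"] += rec["rows"]
        run["bytes"] += rec["bytes"]
    return dict(totals)

# Mock export: two tables loaded in one scheduled run.
sample = [
    {"execution_id": 42, "table": "Customers", "rows": 10_000, "bytes": 5_000_000},
    {"execution_id": 42, "table": "Orders",    "rows": 2_500,  "bytes": 1_200_000},
]
print(summarize_load(sample)[42])  # {'rows': 12500, 'bytes': 6200000}
```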
Hi,

I am running a legacy version of TimeXtender (version 18.104.22.168), where we have an Excel Online connection set up using the CData ADO.NET Provider for Microsoft Excel Online 2022 (22.0.8389.0) as a data source. This data source is configured to authenticate via Azure 'client flow', see picture. The app registration in Azure has the following permissions:

The data source works properly most of the time. It is able to list the worksheets it finds on the SharePoint site and can fetch data. However, scheduled execution packages sometimes fail with the following error message:

Could not execute the specified command: Error while listing workbooks for drive: [generalException] General exception while processing. Details: Error while listing workbooks for drive: [generalException] General exception while processing. Module: System.Data.CData.ExcelOnline fx220l.yg at fx220l.IIu.m(Boolean ) at fx220l.IIu.X(tbp`1 , Boolean ) at fx220l.IIu.YI(LoL )
Hey,

Usually when an execution package fails to start, it is because another one is already running; in the Event Viewer we then get a message like this: Here, we see that the issue is due to another package. Now I have noticed that we occasionally also get messages like this one: The difference being that it does not show which 'other' package is causing the issue. Is this a common occurrence?
Hi everybody,

Probably unnecessary, but for those living/working in Europe I just wanted to point out that daylight saving time changes to winter time this weekend. This means that for those who have an ODX server or the "new" version (6XXX) of TimeXtender, the scheduled tasks/jobs will take place one hour earlier than currently set.

This also brings me to the question of how we could automate this. Any thoughts? Maybe an option in TimeXtender to use local time instead of UTC?

Kind regards,
Devin
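To make the shift concrete: a schedule pinned to a fixed UTC time fires at a different local wall-clock time once DST ends. A small sketch using the 2023 European switch (last Sunday of October) and Europe/Amsterdam as an example timezone:

```python
# Sketch: why a UTC-fixed schedule shifts in local time across the
# DST boundary. A job pinned to 06:00 UTC fires at 08:00 Amsterdam
# time in summer (CEST, UTC+2) but 07:00 in winter (CET, UTC+1).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

AMS = ZoneInfo("Europe/Amsterdam")

summer_run = datetime(2023, 10, 27, 6, 0, tzinfo=timezone.utc)  # before the switch
winter_run = datetime(2023, 10, 30, 6, 0, tzinfo=timezone.utc)  # after the switch

print(summer_run.astimezone(AMS).strftime("%H:%M"))  # 08:00
print(winter_run.astimezone(AMS).strftime("%H:%M"))  # 07:00

# Automating the fix would mean going the other way: take the desired
# local time and convert it to UTC for each run date.
desired_local = datetime(2023, 10, 30, 8, 0, tzinfo=AMS)
print(desired_local.astimezone(timezone.utc).strftime("%H:%M"))  # 07:00 UTC
```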
Hi guys,

I cannot get the Companies and Deals tables to execute in the ODX phase. The error I get:

Executing table [HubSpot].[Companies]: failed with error: System.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception (0x80004005): The wait operation timed out

Has anyone else seen this before?
Hi all, at a client we have encountered a curious situation with execution e-mail notifications; hopefully you will be able to help us figure it out.

There are 5 scheduled execution packages for BUs that run every morning. For two days now we have been getting e-mail notifications saying these have failed, but when we look in TimeXtender they have in fact executed successfully. Each BU execution throws a different connection error message:

ERROR [HY000] [MySQL][ODBC 8.0(w) Driver]Can't connect to MySQL server on 'XXXXXX' (10060) Unable to connect to any of the specified MySQL hosts.
The remote server returned an error: (400) Bad Request.
Could not execute the specified command: HTTP protocol error. 500 Internal Server Error.
ERROR could not connect to server: Connection refused (0x0000274D/10061)

I will take the top one as an example to show you how it looks. Attached is the full failure notification e-mail. This package has the following settings: This is what the executi…
Hello,

Today I got this error when I wanted to add a new job. We didn't change the version and use TimeXtender 6143.1. We updated all of the data sources, but that didn't change anything. First, I want to understand why I am receiving this error, so that if I encounter this message in the future I will know what to do. There is a similar topic, but since there was no version change on our side, I want to reopen the question. Xpilot recommends that we use the new version. Is that correct?
Hi Support,

We first raised this problem/bug back in Feb 2022. It seems that the incremental rules are not working correctly. The client Xandor had removed these incremental rules from their live system some time ago but re-introduced them this month, and they have failed. The problem seems to be related to using multiple incremental rules, e.g. MODIFIEDDATETIME > (Last Max Value) - 3600 seconds & TRANSDATE > (Last Max Value) - 15,552,000 seconds. Please see the attached document for further information. The client is currently on version 22.214.171.124 and has TX configured to use SSIS for data transfers. Do you know if there is a solution to this issue?

Thanks,
Kashif
Hello! I want to experiment a little and check whether changing my Max Threads setting can improve execution times. I have one big package which includes multiple other packages as steps. If I change the Max Threads setting in the main package, do the packages from the included steps inherit this setting, or do I have to change the setting manually for each included step?

Side question: I have plenty of data for execution times at 4 threads. My plan is to do three runs each at 3, 5 and maybe even 6 threads and compare the averages. Is that a good setup? Thanks.
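For the side question, the comparison itself is straightforward — but with only three runs per setting, the spread matters as much as the mean. A small sketch of the analysis (all timings below are made up for illustration):

```python
# Sketch of the proposed experiment: average execution time per
# Max Threads setting. Three runs is a small sample, so report the
# standard deviation alongside the mean. Timings are hypothetical.
from statistics import mean, stdev

runs = {  # threads -> execution times in minutes (made-up data)
    3: [52.1, 50.8, 53.4],
    4: [47.0, 46.2, 48.1],
    5: [44.9, 46.5, 45.2],
    6: [45.1, 47.8, 44.6],
}

for threads, times in sorted(runs.items()):
    print(f"{threads} threads: mean={mean(times):.1f} min, stdev={stdev(times):.1f}")

best = min(runs, key=lambda t: mean(runs[t]))
print("fastest on average:", best, "threads")  # fastest on average: 5 threads
```

If two settings end up within one standard deviation of each other, a few extra runs per setting would be needed before picking a winner.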
Is it absolutely necessary to perform a transfer between environments after an upgrade? Dev, QA and Prod each sit on their own server. I have upgraded all 3 to the latest TimeXtender version, and all deployed and executed perfectly. Would I still have to do the "Multiple Environment Transfer", or is this not necessary?

- Upgrade Dev - deploy, execute, and validate all is working.
- Upgrade Test - deploy, execute, and validate all is working.
- Transfer Dev -> Test - deploy, execute, and validate Test to ensure all is working.
- Upgrade Prod - deploy, execute, and validate all is working.
- Transfer Test -> Prod - deploy, execute, and validate Prod to ensure all is working.
Hi,

We are currently trying to figure out the new generation of TimeXtender, and I'm finding the setup with jobs a bit lacking. Perhaps I've just missed something, but here are a few things that bug me. As far as I can understand, the way to go when scheduling in the DW is to set up your execution packages in the execution tab, and then set up a job that schedules those execution packages. In my trials I intentionally set up a package to fail, and first of all, the monitoring view of jobs does not show any information about the error (Jobs monitoring). If I go into the execution log, I can see an error message that essentially just says the job didn't succeed (Execution log for test job). For debugging, that means I have to check the contents of the job (which could contain multiple execution packages) and then head over to the execution tab to check the log for the package, where I see all the details (Execution log in execution tab). I would think it was nice to be able to reac…
I'm facing this issue: this package is scheduled to run every half an hour. On rare occasions the package executes itself again, as shown in the picture to the left (blue rectangle), and the execution log shows both executions as successful. We would appreciate any insight to help us solve this issue. Some things to consider:

- At the customer's request, the package is run from the Task Scheduler.
- The execution log does not show any errors, as shown in the image above (green rectangle).
- This is how the retries are set up:
- The Task Scheduler showed this warning:
- We got an e-mail throwing an error. However, the "network-related error" is nowhere to be found, since according to the execution log it ran successfully:
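One cheap way to quantify how often the duplicate runs happen is to scan the execution-log start times for runs that start much closer together than the 30-minute schedule allows. A hedged sketch with made-up timestamps (the real ones would come from the execution log):

```python
# Sketch: flag executions that start within a few minutes of the
# previous one -- on a 30-minute schedule these are likely the
# unexpected duplicates. Timestamps below are hypothetical.
from datetime import datetime, timedelta

starts = [
    datetime(2023, 5, 1, 8, 0),
    datetime(2023, 5, 1, 8, 30),
    datetime(2023, 5, 1, 8, 31),   # unexpected extra run
    datetime(2023, 5, 1, 9, 0),
]

tolerance = timedelta(minutes=5)   # well under the 30-minute schedule
duplicates = [b for a, b in zip(starts, starts[1:]) if b - a < tolerance]
print(duplicates)  # flags only the 08:31 run
```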
One of my clients has a file data source that arrives at unpredictable times of the day. Has anyone found a good way to trigger a load of such a data source externally when it arrives? E.g. webhooks, Logic Apps, ... The question is for 20.10.43, but I'm equally interested in hearing whether there are solutions for v6284.1, as we'll be moving in that direction. Thanks!
Hi TimeXtender,

We're seeing the following scenario: we have an execution package 'A' that is set up with a usage condition and a next package 'B' in the post-execution steps. When we manually execute package 'A' through the user interface and the usage condition resolves to false, a message appears with this information; package 'A' won't start, as expected, and, importantly, the next package 'B' from the 'Run Package' setting won't start either. However, when the same execution package 'A' is triggered through the scheduler, we see that even though package 'A' won't start (because the usage condition returns false), the next package 'B' in this case does start executing. Is this something you are familiar with or are able to reproduce? Extra information: package 'B' doesn't have a usage condition applied, and we're running TX 126.96.36.199.

Best regards,
Luuk Bouman
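To pin down the expected behaviour being described: when A's usage condition is false, neither A nor its "Run Package" successor B should run. A tiny sketch of that contract — this models what the UI shows, not TX's actual scheduler code:

```python
# Hypothetical model of the expected chaining behaviour (not TX's
# real implementation): a false usage condition on package A should
# skip both A and its post-execution "Run Package" step B.
def run_chain(condition_ok, executed):
    """Record which packages run, given A's usage condition result."""
    if not condition_ok:
        return executed          # A skipped -> B must be skipped too
    executed.append("A")
    executed.append("B")         # post-execution "Run Package" step
    return executed

print(run_chain(condition_ok=False, executed=[]))  # []
print(run_chain(condition_ok=True,  executed=[]))  # ['A', 'B']
```

The reported scheduler behaviour would correspond to B appearing in the first case as well, which is the discrepancy the post is asking about.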
Hello,

I have a scheduled execution package which takes data from the ODX, transforms it in the DSA database and loads the final tables into the MDW. This execution package basically runs a perspective containing this process. I need to set up the following logic:

- If any source table (ODX) of this execution package is empty (i.e. at least 1 empty table): do not execute the package.
- If all source tables (ODX) of this execution package contain data (i.e. all tables are non-empty): execute the package.

Is there any way to set this up? Maybe even create an additional "control" execution package? The version of TX I am running is 188.8.131.52.

Thank you!
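The decision logic of such a "control" step can be sketched in a few lines. This is only an illustration of the check itself, with mock row counts — in practice the counts would come from querying the ODX source tables (e.g. a `SELECT COUNT(*)` per table), and the result would drive a usage condition or pre-execution step:

```python
# Sketch of the pre-check: only execute the package when every ODX
# source table contains rows. Row counts here are mock values; a real
# control step would query them from the source tables.
def should_execute(row_counts):
    """Run only if all source tables are non-empty."""
    return all(count > 0 for count in row_counts.values())

print(should_execute({"Customers": 120, "Orders": 4500}))  # True
print(should_execute({"Customers": 120, "Orders": 0}))     # False
```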
Hi all!

We have created an external executable to run a Python script, basically looking like this:

cd F:\Data\
Start-Process python GetData.py
exit

Next we have added this to an execution package. If we run the package, it will open a prompt and run the Python script, no problem. However, if we add the package to a job and schedule or manually run that job, nothing happens. How can we get this to work?

Cheers,
Maarten
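One hedged suggestion for debugging this: `Start-Process` detaches and returns immediately, so any failure in the script is invisible to the job, and a scheduled job typically runs under a service account with no interactive desktop. Running the interpreter directly and capturing the exit code and output makes failures observable. A minimal sketch (the inline `print` stands in for the `GetData.py` from the post, so the example is runnable):

```python
# Sketch: run the script synchronously and capture its exit code and
# output, instead of detaching with Start-Process. The "-c" command
# below is a stand-in for GetData.py.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('data fetched')"],
    capture_output=True, text=True,
)
print(result.returncode)       # 0 on success; non-zero propagates failures
print(result.stdout.strip())   # data fetched
```

The same idea applies if the wrapper stays a batch/PowerShell script: call `python GetData.py` directly (not via `Start-Process`) and redirect its output to a log file, so the scheduled job surfaces errors.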
Good day,

I have twice completed a differential deployment of a TimeXtender project, and a valid table that is not listed in the review tasks is still being re-created on deployment. Is there functionality in TimeXtender that would re-deploy a table on deployment if it is listed as a deployed-object dependency?

Kerry
Good day,

I have 2 TimeXtender projects. In Project A, I have an execution package that, once completed, runs an external executable to start an execution package in Project B. In Project B, that execution package is defined to not allow any other concurrent packages, including the one called from Project A, while it is running. I am finding that this setting is being ignored when the execution package in Project B is called from Project A. Has anyone come across cross-project concurrent-package settings being ignored, and how did you overcome this?

Kerry
Hi,

Each time we do a deployment from DEV to PROD, the TX scheduler stops working. We use an older version of TX (184.108.40.206), but still: the scheduler uses a wake-up step that will deploy & execute (d&e) 1 table (retries = 3, retries per step = 3, and retry delay in minutes = 5). If the step is successful, the next step is executed. But since we deployed from DEV to PROD, the scheduler is not starting anymore. We have removed the wake-up step and created a new one, and have rebooted the PROD machine (Azure), but still no result. Can you please tell me what needs to be done? It is urgent.

Thanks!
Mrt
Dear community and support,

This morning my job scheduling broke, and it does not seem to come back online. I'm running the new 6221 version and I have three issues:

I have an "invalid" job. I've reset the services and re-added the instances to the execution server, but nothing seems to fix it. At first I was not able to add data warehouse execution packages to a job, but then I read that you can't schedule ODX and data warehouse executions in one job. So I stopped the ODX Server service in Windows and was then able to add the jobs and run them. This was going fine until I showed my colleagues. How can I fix the job?

I cannot add data warehouse executions to my job. Most of the time I cannot see the packages. I find the packages when I deselect "Hide objects that can't be added", but I cannot add them. For a brief time I was able to add them, when I stopped the ODX Server service, but this must have been a fluke, as I cannot imagine that I have to stop services to get a certain resu…
I am facing a problem when creating a job in the new TX version. I have two instances, called 'Ontwikkel' (dev) and 'Productie' (production). I have copied my dev instance to production, and now I want to set up a job to execute the production instance and several SSLs in production. I have created an execution package with all the MDW tables that need to be executed, and this package was copied to the production instance. I also have several SSLs that point to the production instance as their source. Now, when I add a new job (in the production environment), all looks well: I add the package and the SSLs and schedule the job. But when I re-open the job, suddenly all selected execution objects have changed to the DEV (ONTW) versions of the same objects. Am I doing something wrong, or is this a (pretty nasty) bug?