Jobs & Executions
Ask questions about Jobs & Executions within TimeXtender
- 66 Topics
- 246 Replies
Intermittent execution issues
I’m regularly running into the “Cannot open server 'sql-instances-prod' requested by the login. Client with IP address '18.104.22.168' is not allowed to access the server.” error message, especially during overnight executions by the execution scheduler. But I can’t replicate the connection issue; it seems to be intermittent. Has anybody else encountered this issue, and do you know what could be causing it or what the workaround might be?
Job logs lack detailed information
Hi, We are currently trying to figure out the new generation of TimeXtender, and I’m finding the setup with jobs a bit lacking. Perhaps I’ve just missed something, but here are a few things that bug me. As far as I can understand, the way to go when scheduling in the DW is to set up your execution packages in the Execution tab. Then we need to set up a job that schedules the execution packages. In my trials I intentionally set up a package to fail, and first of all the monitoring view of jobs does not show any information about the error (Jobs monitoring). If I go into the execution log, I can see an error message that essentially just says that the job didn’t succeed (Execution log for test job). For debugging, that means I have to check the contents of the job (which could potentially contain multiple execution packages) and then head over to the Execution tab to check the log for the package, where I see all the details (Execution log in execution tab). I would think it was nice to be able to reac
Scheduler stopped working
The scheduler stopped working in production. Yesterday I restarted it, but it is still not running. How can I fix this issue? I have gone through the document below: Scheduled Execution issues - Did it not start, did it fail, or is my execution still running? – TimeXtender Support. On my computer the recovery options were disabled.
Run transfer job only when ODX transfer task completed without errors
I want my transfer to MDW job to run only when my ODX transfer task has completed without any errors. What happens sometimes now is that for some reason there is an error in the extraction and some tables are empty, they then get pushed to the MDW and our reports break because the tables are empty. I see that you can use instance variables but I don’t see that option on my ODX. How can I set this up?
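One pragmatic workaround, in the absence of instance variables on the ODX, is a pre-transfer check on row counts that blocks the run before empty tables can reach the MDW. A minimal sketch, assuming the counts are gathered beforehand (the table names are hypothetical; in practice the counts would come from `SELECT COUNT(*)` queries against the ODX storage):

```python
# Sketch: guard a downstream transfer by verifying that the upstream
# extraction produced non-empty tables. Table names and the source of the
# row counts are placeholders, not TimeXtender APIs.

def safe_to_transfer(row_counts, required_tables):
    """Return True only if every required table has at least one row."""
    missing = [t for t in required_tables if row_counts.get(t, 0) == 0]
    if missing:
        print(f"Blocking transfer; empty tables: {missing}")
        return False
    return True

counts = {"Customers": 1200, "Orders": 0}
print(safe_to_transfer(counts, ["Customers", "Orders"]))  # False: Orders is empty
```

A script like this could run as a pre-step and fail the package, so the MDW keeps its last good data instead of being overwritten with empty tables.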
Scheduled execution: prioritization not working
Hi Support, We are experiencing an issue with the prioritization in our execution package. Our trip table is updated with an update script using data from the number_per_trip table. However, the trip table is loaded before the number_per_trip table. This results in missing new data in the trip table, as the number_per_trip table has not yet been loaded when the script action is executed. We would like to change the loading order of these tables and have tried to do this by adding prioritization. This has no effect on the load whatsoever. I cannot find out what the problem is. Are there any settings blocking the prioritization feature, or are we using the feature in the wrong way? See settings below.
Excel Online connector sometimes fails with "Error while listing workbooks for drive"
Hi, I am running a legacy version of TimeXtender (version 22.214.171.124), where we have an Excel Online connection set up using CData ADO.NET Provider for Microsoft Excel Online 2022 (22.0.8389.0) as the data source. This data source is configured to authenticate via Azure ‘client flow’; see picture. The app registration in Azure has the following permissions: The data source works properly most of the time. It is able to list the worksheets it finds on the SharePoint site and can fetch data. However, scheduled execution packages sometimes fail with the following error message: Could not execute the specified command: Error while listing workbooks for drive: [generalException] General exception while processing. Details: Error while listing workbooks for drive: [generalException] General exception while processing. Module: System.Data.CData.ExcelOnline fx220l.yg at fx220l.IIu.m(Boolean ) at fx220l.IIu.X(tbp`1 , Boolean ) at fx220l.IIu.YI(LoL )
Environment Transfer, Deploy and Execution dependencies
We have a setup with 3 environments (Dev/Test/Prod) running in the legacy version, with a BU-based ODX, DSA, MDW and several SSLs, based on an on-prem SQL Server setup. Fairly common, I guess. We are multiple developers on a shared project, making changes daily, and therefore need a QA process. Our target is to have changes and new functionality running on Test for a week before transferring to production. In addition to our centralised BI org, we support the data needs of analysts in the different departments. For this we have established replica databases of ODX and MDW. The analysts can read these replicas without interfering with the centralised data processing (the primary reason for the replicas). Along with the data read access, we have a database the analysts have rights to create objects in (typically views and stored procedures). The analyst environment will only be exposed on our prod platform. We would like to provide a better SLA for our analysts for new ta
Custom Query not working
I’ve created a query table that suddenly started to fail. When I press Validate, everything is fine. I can even preview the table in the source. I can also run the query on the source. But when I execute the table, with the same user as above, I get this message: Here’s the query: SELECT t1.[TRANSACTIONCURRENCYAMOUNT] ,t1.[ACCOUNTINGCURRENCYAMOUNT] ,t1.[REPORTINGCURRENCYAMOUNT] ,t1.[QUANTITY] ,t1.[ALLOCATIONLEVEL] ,t1.[ISCORRECTION] ,t1.[ISCREDIT] ,t1.[TRANSACTIONCURRENCYCODE] ,t1.[PAYMENTREFERENCE] ,t1.[POSTINGTYPE] ,t1.[LEDGERDIMENSION] ,t1.[GENERALJOURNALENTRY] ,t1.[TEXT] ,t1.[REASONREF] ,t1.[PROJID_SA] ,t1.[PROJTABLEDATAAREAID] ,t1.[LEDGERACCOUNT] ,t1.[HISTORICALEXCHANGERATEDATE] ,t1.[CREATEDTRANSACTIONID] ,t1.[RECVERSION] ,t1.[PARTITION] ,t1.[RECID] ,t1.[MAINACCOUNT] ,t1.[MODIFIEDDATETIME] ,t1.[CREATEDDATETIME] ,t2.ACCOUNTINGDATE ,t2.DOCUMENTNUMBER
Job Scheduling broke / Greyed out executions
Dear community and support, This morning my job scheduling broke and it does not seem to go online anymore. I'm running the new 6221 version and I have three issues: I have an ‘invalid’ job. I've reset the services and re-added the instances to the Execution server, but nothing seems to fix it. At first I was not able to add Data warehouse execution packages to a job, but then I read that you can't schedule ODX and Data warehouse executions in one job. So I stopped the ODX Server service on Windows and was then able to add the jobs and run them. This was going fine until I showed my colleagues. How can I fix the job? I cannot add Data warehouse executions to my job. Most of the time I cannot see the packages. I find the packages when I deselect ‘Hide objects that can't be added’, but I cannot add them. During a brief time I was able to add them, when I stopped the ODX Server service, but this must have been a fluke as I cannot imagine that I have to stop services to get a certain resu
Triggering job to start after another job completes
We have 2 jobs. Job 1 runs at 11pm and normally runs for 1 hour. Job 2 runs at 1am and must run after Job 1 completes; this job usually runs for 6 hours. On occasion, Job 1 runs for 3 hours, and this causes Job 2 to miss its start window. How can I adjust the schedule so that Job 2 has 2 criteria? 1. Job 1 completes. 2. It has to start after 1am.
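The scheduler itself may not support compound triggers, but the gating logic the question asks for can be sketched as a small external watcher that only releases Job 2 once both criteria hold. All names, timestamps, and the watcher mechanism here are illustrative, not TimeXtender APIs:

```python
from datetime import datetime

def should_start_job2(job1_finished_at, earliest_start, now):
    """Release Job 2 only once BOTH criteria hold:
    1) Job 1 has completed (job1_finished_at is not None and in the past), and
    2) the clock has reached the earliest allowed start (e.g. 01:00).
    """
    if job1_finished_at is None:  # Job 1 still running or not started
        return False
    return now >= job1_finished_at and now >= earliest_start

# Job 1 finished at 00:45; Job 2 may start at 01:00 at the earliest.
finished = datetime(2024, 1, 2, 0, 45)
earliest = datetime(2024, 1, 2, 1, 0)
print(should_start_job2(finished, earliest, datetime(2024, 1, 2, 0, 50)))  # False: before 01:00
print(should_start_job2(finished, earliest, datetime(2024, 1, 2, 1, 5)))   # True: both criteria met
```

A polling loop around such a check (or an on-completion trigger in the scheduler, where available) would then kick off Job 2 whenever the function first returns True.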
Refresh Power BI Premium dataset
Hi, We are currently migrating cubes from Power BI Report Server (on-premises) to Power BI Premium. Those cubes are not maintained in TX, and refreshes are triggered through an SSIS package called by TX at the end of the ETL execution process. In the new implementation (where the tabular models are still not maintained in TX), we will be using XMLA endpoints and would like to trigger partition refreshes from within TX without using an SSIS package. What would be the best approach to tackle this, taking into account that we’re running on an Azure platform (TX VM and Azure SQL DB for the DWH)? We were thinking of using an ADF pipeline or REST API calls, but this isn’t supported by TX version 20.10.x. Any ideas on possible alternatives? Thanks, Peter
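If an external script is acceptable as the post-execution step, the Power BI REST API exposes a refresh endpoint (`POST .../groups/{groupId}/datasets/{datasetId}/refreshes`) that can queue a dataset refresh. A hedged sketch that only builds the request; the workspace/dataset GUIDs are placeholders, and a real call also needs an Azure AD access token obtained separately:

```python
import json
import urllib.request

POWERBI_API = "https://api.powerbi.com/v1.0/myorg"

def build_refresh_request(group_id, dataset_id, token):
    """Build (but do not send) the POST that queues a Power BI dataset refresh."""
    url = f"{POWERBI_API}/groups/{group_id}/datasets/{dataset_id}/refreshes"
    body = json.dumps({"notifyOption": "MailOnFailure"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_refresh_request("workspace-guid", "dataset-guid", "<access-token>")
print(req.full_url)
# Sending it with urllib.request.urlopen(req) should return 202 Accepted
# when the refresh has been queued.
```

Calling a script like this from TX at the end of the execution package would replace the SSIS step; XMLA-based partition refresh via a scripting tool is an alternative when finer partition control is needed.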
Language error in Execution Server Configuration dialog
When associating instances to an Execution Server in 6221.1 I noticed that the message displaying locked instances has a slight grammar mistake: “The following instances that currently locked by another machine” should probably be something like “The following instances are currently locked by another machine”.
Unstable extract from Synapse Serverless SQL Pool (Business Units)
We have a Dynamics 365 Finance and Operations system with the data lake export running (for over 1 year). The setup is similar to Joseph's sketch solution for the following question: Dynamics 365 F&O Data Lake as a TX source | Community (timextender.com). We have it running quite stably on one of our boxes, but are struggling with stability on a pre-production box. The goal is to use Azure AD Integrated Authentication, and the user running the scheduler has been given access to the Serverless SQL Database (tested in SSMS). First we experienced some missing prerequisites when using Azure AD auth: Execute ODX d365fo_dl.ACOJournalTable_BR ADO.NET Transfer: Error: Failed - Execute ODX d365fo_dl.ACOJournalTable_BR ADO.NET Transfer 'Failed' Unable to load adalsql.dll (Authentication=ActiveDirectoryIntegrated). Error code: 0x2. For more information, see http://go.microsoft.com/fwlink/?LinkID=513072 Details: SQL Server: 'import-d365fo-data
Chain of execution packages are not able to run concurrently
We are seeing a situation where the next scheduled execution of a package does not start, and an event is logged saying that this is because the previous execution is still running. However, when we look closely at the finish times for these executions, the actual finish time is on average 8 minutes before the start of the next execution. I've been told that there can be a delay of a couple of minutes between the task itself finishing and the moment the process that executed the task ends as well. However, 8 minutes, and sometimes even up to 13 minutes, seems too long. Does anybody have any ideas on why this occurs and what could be done about it? We're currently running TX version 20.10.25.
Incremental Load Execution Error Due To Change Of Filter
Hi Support, Mount Anvil have an incremental project to run the incremental load of the finance system data. Changing the incremental execution rule on a table called G/L Budget Entry from the 'Modified At' field to the 'Last Date Modified' field is preventing the execution from running. The 'Modified At' field is in a date/time format and the 'Last Date Modified' field is a date-only format, which appears to be causing the execution failure. The error messages are below: Finished executing project 'Incremental' Execution Package 'Update Project' Execution failed Start Time: 24/01/2023 16:31:05 End Time: 24/01/2023 16:32:33 on server: MAV01APP01 - Execute Execution Package Update Project 'Failed' - Execute Business Units 'Failed' - Execute Business Unit Business Unit 'Failed' - 'One or more errors occurred.' - Execute JetBCStage_I 'Failed' - 'One or more errors occurred.' - Execute Table JetBCStage_I TEST.BC_G/L Entry (17) 'Successful' - Execute Table JetBCStage_I TEST.RowCountGL 'Su
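As a rough analogy for why the switch can break the load: the incremental watermark captured from the old date/time field no longer matches the type of the new date-only field, and ordered comparisons across the two types can fail outright. This is illustrated in Python below; the SQL Server-side failure mode may differ in detail, and the values are made up:

```python
from datetime import date, datetime

# Old watermark came from 'Modified At' (a datetime value); the rule now
# filters on 'Last Date Modified' (a date-only value).
watermark = datetime(2023, 1, 24, 16, 31, 5)
new_value = date(2023, 1, 24)

try:
    new_value > watermark  # ordered comparison across mismatched types
except TypeError as exc:
    print(f"Type mismatch: {exc}")
```

The usual remedy after changing the incremental field is to reset the incremental load (full load once) so a fresh watermark of the correct type is captured.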
ODX Dev/Prod refresh
Hi, I have the following setup in my Dev/Prod environments: ODX shared, all on my live environment; DSA/MDW separate for each environment. A daily refresh starts with ODX and on success moves to DSA. So far so good. On the ODX step, there's a usage condition for Environment (project variable) = ‘Prod’. I added this because I don't want to start 2 refreshes of the same ODX from both Prod and Dev. However, the Dev environment goes straight into the DSA refresh because the ODX ‘starts’ and finishes instantly, meaning the DSA only gets a few thousand rows from the still-refreshing ODX (the largest table has 2000 rows where I'd expect 2 million+). Is my assumption about using the usage condition correct? How can I use the same execution packages for Dev/Prod while still using the same ODX? Thanks for any help!
Jobs: "Hide objects that can't be added"
When adding a job I see a checkbox ‘Hide objects that can't be added’. After I uncheck this checkbox, I see that my execution schedules for both the data warehouse and the semantic layer can't be added to the job. Could you please tell me what might be causing this? And in general, I'd like to know when an object can't be added to a job.