
Dear community and support,

This morning my job scheduling broke and it does not seem to come back online.

I'm running the new 6221 version and I have three issues:

  1. I have an ‘invalid’ job.
    I've restarted the services and re-added the instances to the Execution Server, but nothing seems to fix it. At first I was not able to add Data Warehouse execution packages to a job, but then I read that you can't schedule ODX and Data Warehouse executions in one job. So I stopped the ODX Server service in Windows and was then able to add the packages and run them. This was going fine until I showed my colleagues. How can I fix the job?

     

  2. I cannot add Data Warehouse executions to my job. Most of the time I cannot see the packages. I can find them when I deselect ‘Hide objects that can't be added’, but I still cannot add them. For a brief time I was able to add them after I stopped the ODX Server service, but this must have been a fluke, as I cannot imagine having to stop services to get a certain result in the tool.
  3. Data on demand: I'm trying this new feature, but it does not seem to do anything. The feature can be found in the Advanced Settings of the data source. I turned it on, and my expectation was that when a job loads a Data Warehouse execution package, it would also run the ODX tables. That would be a great new feature, but during my tests nothing happened on the ODX side. Has anyone tried this yet?

Thanks for your help, all!

Take care

Daniel

After fiddling around, I have a workaround for issues 1 and 2:

  1. Stop the Execution Server service and the ODX Server service
  2. Start the Execution Server Configuration
  3. Uncheck all the checkboxes
  4. Save
  5. Rerun the Execution Server Configuration
  6. Check all the boxes you want to check
  7. Start only the Execution Server service
  8. Check in TimeXtender => the jobs no longer show a red cross
  9. Start the ODX Server service.
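
The service stop/start order above can be sketched as a script. Note this is only a dry-run illustration: the service names here are assumptions (check services.msc on your server for the exact names), and the Execution Server Configuration steps in the middle are manual, so the script only prints the commands you would run in an elevated prompt:

```shell
#!/bin/sh
# Dry-run sketch of the workaround's service stop/start order.
# Service names are ASSUMPTIONS -- verify the exact names in services.msc.
EXEC_SVC="TimeXtender Execution Server"
ODX_SVC="TimeXtender ODX Server"

# Step 1: stop both services.
echo "net stop \"$ODX_SVC\""
echo "net stop \"$EXEC_SVC\""

# Steps 2-6 are manual: run the Execution Server Configuration, uncheck
# all boxes, save, rerun it, then re-check the boxes you want.

# Step 7: start ONLY the Execution Server service, then verify in
# TimeXtender that the jobs no longer show a red cross (step 8).
echo "net start \"$EXEC_SVC\""

# Step 9: only then start the ODX Server service.
echo "net start \"$ODX_SVC\""
```

The point of the ordering is that the ODX Server service stays down until the Execution Server has come back cleanly and the jobs are valid again.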

Hopefully support or R&D knows what is going on. I will keep track of it over the coming days.

 

Take care

Daniel


Thanks @daniel 

I had the same problem. It works now 😀

/Anders


I upgraded from 6143.1 to 6221.1 and did not encounter this. I did run the Execution Server Configuration before starting the upgraded TX, and made sure all the instances were upgraded, as the MDW instances were causing jobs to be listed in red before upgrading.

I am not sure whether there is a process to follow; I just improvised more or less. I think having some steps to follow, like for the 20.10.x releases, would be good.


Dear Rory, I did a clean install on a new server. I get the feeling that the two services (which run under different service accounts) are not completely aligned with each other.

Thanks for the tip when upgrading, I'll keep that in mind.


Hi @rory.smith 

I had this problem before 6221 also. It didn’t turn up as a result of the upgrade.

/Anders


So if you make a new install on a new machine, perhaps the old machine's Execution Server still holds a ‘lock’ or something similar? The config app may say something like that, but to be honest I was in “next, next, next” mode during this recent upgrade and didn't memorize it.


If you run a service configuration of some sort on a server, it will take the lock. You will then have to run the configuration again in a newer version, or on another server, to take the lock over.

I also did an upgrade today and got the same issue. Good that you already found a workaround.


Hi @daniel regarding the 3rd issue, I am unable to reproduce it. 

I have a TimeXtender SQL data source mapped to an ODX instance with data lake storage.

I enable data on demand on my data source, select a table, and synchronize (I do not execute a transfer task). I then add the table to my DW instance, deploy and execute, and the data is present in the DW table when I preview it.

Do you notice something different in your setup or the steps you took? If so, could you please share a video recording?


Perhaps the issue is more that you don't see that it has run anywhere other than in the Azure Storage account. There does not seem to be any execution logging on the ODX itself, i.e. if I have an on-demand-enabled source and drag a table to a DWH instance and Deploy & Execute it, I can see an extra folder for a new extraction in Azure Data Lake Storage, but no evidence elsewhere.


Dear @Christian Hauggaard ,

It is just like @rory.smith posted. I cannot verify that the ODX has triggered an execution. This makes my skin crawl, as I'm not sure whether the data is now up to date. Is there a way to verify that the ODX data has been updated as well?


@daniel your feedback concerning on-demand logging has been passed onto our Product Team. Currently, this cannot be verified within TimeXtender


Dear @Christian Hauggaard ,
Do you happen to know the status of this? I know that there are some ideas on this as well, but I'm just curious, as I've just encountered issues with this at my client.


Hi @Christian Hauggaard 
I’ve begun a migration from V20 to V21 on 6698.1. I guess these are still known issues with no fix other than the workaround described, or are they receiving some focus from the Product/Dev team?
Notifications from the ODX still firing only on “critical” failures is a concern, as my job holds 13 data source transfers, and if any of them “Complete with errors” I get zero notifications. That's not great, as who has time to manually check after each run? Granted, that's the same behaviour as V20, but by now I would have expected more comprehensive and automated reporting in TX.


Hi @jon.catt,

I cannot speak on behalf of the devs, but I do believe this is still a priority 😉

As of right now, you need to use eXmon in order to enable notifications on failures of ODX tasks. It's included in your license. I personally do not use the ‘on-demand’ setting on data sources, for a variety of reasons: less logging, less control, and time-consuming reloads of ODX tables while reloading DSA tables during development.

You can use eXmon to get around the issue that ‘on-demand’ attempts to solve: the inability to schedule ODX and DWH tasks in a single job. You can create a process in eXmon that first executes the ODX on a schedule, and then continues with the DWH when that is complete.

Kind regards,

Andrew


@daniel on-demand execution is currently logged; please see the task details (bottom pane) in the screenshot below

@jon.catt please see and upvote the following idea; in the comments, my colleague describes the workaround that Andrew also mentions above:

 

