We are currently evaluating the new generation of TimeXtender, and I'm finding the setup around jobs a bit lacking. Perhaps I've just missed something, but here are a few things that bug me.
As far as I can understand, the way to go when scheduling in the DW is to set up your execution packages in the Execution tab, and then set up a Job that schedules those execution packages.
In my trials I intentionally set up a package to fail, and first of all, the monitoring view for Jobs does not show any information about the error:
If I go into the execution log, I can see an error message that essentially just says that the job didn’t succeed.
For debugging, that means I have to check the contents of the job (which could potentially contain multiple execution packages) and then head over to the Execution tab to check the log for each package, where I see all the details.
It would be nice to be able to reach the execution message(s) directly from the Jobs log. And by extension, it would be super nice if this information were available through the TimeXtender API, as that would allow us to create a monitoring setup for the DW.
So, am I doing jobs wrong, or would it be a good idea to increase the level of detail in the Job logs? As it is now, I would much rather have the API expose the logs for execution packages, as they contain the most relevant information about the flows in the DW.
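To make the request concrete, the kind of rollup I'm after looks something like this. This is only a minimal sketch with a made-up log structure, not the actual TimeXtender API or schema: given package-level execution records, a monitoring view should surface *why* a job failed, not just that it failed.

```python
from dataclasses import dataclass

# Hypothetical, simplified log records -- not the real TimeXtender schema.
@dataclass
class PackageRun:
    job: str
    package: str
    succeeded: bool
    message: str

def failed_job_details(package_runs):
    """Roll package-level errors up to job level, so a job-centric
    monitoring view can show the underlying package error directly."""
    details = {}
    for run in package_runs:
        if not run.succeeded:
            details.setdefault(run.job, []).append(f"{run.package}: {run.message}")
    return details

runs = [
    PackageRun("Nightly DW load", "Stage ERP", True, "OK"),
    PackageRun("Nightly DW load", "Load Sales", False, "Timeout connecting to source"),
]
print(failed_job_details(runs))
# → {'Nightly DW load': ['Load Sales: Timeout connecting to source']}
```

If the API returned something shaped like `PackageRun` per execution, building this rollup ourselves would be trivial; today the job log only tells us the job didn't succeed.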
As far as I can tell you're not doing anything wrong.
Up until this point I would usually set up an email notification on failed executions; these generally contain the error message as well. I do have to say this was in the 20.10 (older) versions. In those versions, checking the execution history log is usually the way to get more info on the issue when an execution has failed.
I'm not sure if the new version has an email notification on failures. It would be great to get an email and also be able to get this info through the API. I would suggest posting this as an idea.
Hope this helps
Good idea, will post as an idea instead.
We are going to set up alerts as well, so I will be looking into how those work for Jobs too. But I would really appreciate being able to read the logs in some structured way, both to get an overview of the schedule and to run a monitoring service that shows what failed and why.
When working with the older versions, we would usually hook up a Power BI app with DirectQuery against the repository database, which let us build some nice real-time visualizations of executions.
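For anyone curious, the measures behind those visuals were along these lines. The rows below are illustrative only; the field names are examples and not the exact 20.10 repository schema:

```python
from datetime import datetime

# Illustrative execution-history rows, shaped roughly like what we pulled
# from the repository database -- field names are made up for this sketch.
rows = [
    {"package": "Load Sales", "start": "2024-01-02T01:00:00",
     "end": "2024-01-02T01:12:30", "succeeded": True},
    {"package": "Load Sales", "start": "2024-01-03T01:00:00",
     "end": "2024-01-03T01:25:00", "succeeded": False},
]

def duration_minutes(row):
    start = datetime.fromisoformat(row["start"])
    end = datetime.fromisoformat(row["end"])
    return (end - start).total_seconds() / 60

# Per-package average runtime and failure count -- the kind of measures
# we surfaced in the Power BI report.
summary = {}
for row in rows:
    s = summary.setdefault(row["package"], {"runs": 0, "minutes": 0.0, "failures": 0})
    s["runs"] += 1
    s["minutes"] += duration_minutes(row)
    s["failures"] += 0 if row["succeeded"] else 1

for package, s in summary.items():
    print(package, round(s["minutes"] / s["runs"], 1), s["failures"])
# → Load Sales 18.8 1
```

With DirectQuery this logic lived in the report instead, but the point is the same: once the execution history is queryable, runtimes and failure rates fall out almost for free.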
So hopefully we will get more access to the portal repository data moving forward 😊