TimeXtender API Endpoints

Userlevel 6
Badge +5

This article describes the endpoints in the TimeXtender API and explains how to send requests using the TimeXtender Postman collection. To use the collection, download and install Postman, download the collection, and import it into Postman.

Prerequisites

The API key must be set up before the TimeXtender API endpoints can be used.

  1. In Postman, select the imported Public - Jobs collection in the left sidebar
  2. Select the Variables tab
  3. Enter the API key in the Current value field for the apiKey variable (for more info on how to create an API key, see API Key Management)

Once entered, the apiKey variable is used for the entire collection, since it is referenced in the request headers of the various calls. Alternatively, the key can be entered manually in the request headers of individual calls; for example, the {{apiKey}} variable in the job status request can be replaced with the actual API key.
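
Outside Postman, the same header can be set from any HTTP client or script. Below is a minimal sketch in Python using the requests library; the X-Api-Key header name and the placeholder values are assumptions, so check the collection's request headers and variables for the exact names to use.

import requests

# Placeholder values - substitute your own. The header name "X-Api-Key" is an
# assumption here; the Postman collection's request headers show the exact name.
DOMAIN = "your-api-gateway-domain"   # the collection's domain variable
API_KEY = "your-api-key"             # the collection's apiKey variable
HEADERS = {"X-Api-Key": API_KEY}

# Any of the endpoints described below can then be called with these headers, e.g.:
response = requests.get(f"https://{DOMAIN}/public/jobs", headers=HEADERS)
response.raise_for_status()
print(response.json())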

Variables

The following variables are included in the Postman collection:

  • apiKey - the API key that you have created in the Portal for your organization
  • jobId - the Id of the job you want to execute, check the status of, etc. Use the Get All Jobs endpoint to retrieve the Ids of your jobs
  • triggerId - the trigger Id of a specific execution of a job. This Id is returned when running a POST call to execute a job
  • domain - the base domain of the TimeXtender API gateway serving the public API

As mentioned in the Prerequisites section above, apiKey must be set in the Variables section in Postman. For some of the endpoints, the jobId and triggerId variables must also be set by modifying their current values in the Variables section.

TimeXtender API Endpoints

Get All Jobs Endpoint

GET /public/jobs

This endpoint retrieves a list of all jobs for your organization. Each job object contains the following properties:

  • Id (guid) - the Id of the job. This will be used for most of the remaining endpoints as an identifier
  • Name (string)  - the name of the job
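
For illustration, assuming the endpoint returns a JSON array and reusing the DOMAIN and HEADERS placeholders from the sketch above, the list can be turned into a name-to-Id lookup:

jobs = requests.get(f"https://{DOMAIN}/public/jobs", headers=HEADERS).json()

# Map each job's name to its Id for use with the endpoints below.
job_ids = {job["Name"]: job["Id"] for job in jobs}
job_id = job_ids["Nightly load"]   # hypothetical job name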

Execute Job Endpoint

POST /public/jobs/{job_id}/execute

This endpoint queues a job for execution by setting its status to "pending". The job is then picked up by the application, which starts the execution and begins writing execution logs.
If successful, an object will be returned with the following properties:

  • JobId (guid) - the Id of the job which has been queued to execute.

  • Message (string) - a message explaining that the job has been started.

  • TriggerId (guid) - a trigger Id is unique to an execution and is attached to the logs created once the job has started. It can be used to filter logs.
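
A sketch of queueing a job and capturing the response, reusing the placeholders from the earlier examples:

resp = requests.post(f"https://{DOMAIN}/public/jobs/{job_id}/execute", headers=HEADERS)
resp.raise_for_status()
result = resp.json()

print(result["Message"])
trigger_id = result["TriggerId"]   # keep this to filter logs later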

Get Job Status Endpoint

GET /public/jobs/{job_id}/status

This endpoint can be used to check the current status of a job. The following status codes exist:

  • 0 - None
  • 1 - Pending
  • 2 - Running

A job is "idle" until it is executed. It then moves to status "pending". Once picked up by the application, it changes to status "running". When the execution finishes (or fails), the job returns to status "idle".
The status endpoint will return an object with the following properties:

  • JobId (guid) - the Id of the requested job.
  • Status (int) - the status code.
  • Message (string) - a message explaining what the status code means.
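
Because the endpoint only reports the current status, a caller that wants to wait for a queued job to finish has to poll. A sketch, assuming the "idle" state corresponds to status code 0 and using an arbitrary 30-second interval; run it after the job has been queued, otherwise the initial "idle" status ends the loop immediately:

import time

while True:
    status = requests.get(f"https://{DOMAIN}/public/jobs/{job_id}/status", headers=HEADERS).json()
    print(status["Message"])
    if status["Status"] == 0:   # assumed to mean the job is idle again
        break
    time.sleep(30)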

Get All Job Statuses Endpoint

GET /public/jobs/status

This endpoint returns a list of statuses for all jobs in an organization. Each status object in the array is identical in shape to the one returned by the Get Job Status endpoint and has the following properties:

  • JobId (guid) - the Id of the requested job.
  • Status (int) - the status code.
  • Message (string) - a message explaining what the status code means.
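
For example, the list can be filtered client-side to find the jobs that are currently active (a sketch, reusing the earlier placeholders):

statuses = requests.get(f"https://{DOMAIN}/public/jobs/status", headers=HEADERS).json()

# Keep only jobs that are not idle (1 = Pending, 2 = Running).
active = [s for s in statuses if s["Status"] in (1, 2)]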

Get Job Logs Endpoint

GET /public/jobs/{job_id}/logs

This endpoint returns an array of job executions, including their logs, for a specific job.
Each job execution has the following properties:

  • Id (guid) - the Id of the job execution.
  • JobId (guid) - the Id of the job that has been executed.
  • State (int) - the state of the execution.
    • The following states exist:
      • -1 - none
      • 0 - created
      • 1 - running
      • 2 - completed
      • 3 - failed
      • 4 - completed with errors
      • 5 - completed with warnings
  • CreateTime (string) - the create time of the execution.
  • EndTime (string) - the end time of the execution (NULL if the execution is still running)
  • TriggerId (guid) - the trigger id of the execution (empty guid if started from the application)
  • JobExecutionLogs (array) - an array of job execution logs.

Each job execution log has the following properties:

  • Id (guid) - the Id of the log.
  • JobExecutionId (guid) - the Id of the job execution which the log pertains to.
  • TimeStamp (string) - a timestamp of when the log was created.
  • Severity (int) - the severity of the log.
    • The following severities exist:
      • 0 - Information
      • 1 - Warning
      • 2 - Error
  • Message (string) - a message explaining what has happened in the log.
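
A sketch that flattens the nested result into one row per log entry, e.g. to feed a monitoring dataset (reusing the earlier placeholders):

executions = requests.get(f"https://{DOMAIN}/public/jobs/{job_id}/logs", headers=HEADERS).json()

# One row per log entry, joined to its parent execution.
for ex in executions:
    for log in ex["JobExecutionLogs"]:
        print(ex["Id"], ex["State"], log["TimeStamp"], log["Severity"], log["Message"])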

Get Job Logs for a Trigger Id Endpoint

GET /public/jobs/{job_id}/logs/{trigger_id}

This endpoint returns a similar result to the Get Job Logs endpoint, except that it only returns logs whose trigger Id matches the one given as a route parameter. In other words, this endpoint allows filtering of logs by trigger Id.
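
Combined with the Execute Job endpoint, this allows an end-to-end flow: queue a job, then retrieve only the logs belonging to that particular execution. A sketch, reusing the placeholders from the earlier examples:

# Queue the job and keep the TriggerId of this particular execution.
result = requests.post(f"https://{DOMAIN}/public/jobs/{job_id}/execute", headers=HEADERS).json()

# Fetch only the executions and logs created for that TriggerId.
logs = requests.get(f"https://{DOMAIN}/public/jobs/{job_id}/logs/{result['TriggerId']}", headers=HEADERS).json()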


20 replies

Userlevel 3
Badge +1

This is great and exactly what we have been waiting for! The Get Job Logs Endpoint will be especially useful for us since we have our own monitoring app in Power BI using this info.

Userlevel 3
Badge +4

Hi @Christian Hauggaard ,

I'd like to create a table in TimeXtender that contains the last date and time a data source in the ODX Server was successfully loaded.

I'm looking for an API endpoint that can give me the last successful data transfer per data source in the ODX Server. The Job Logs endpoint gives me some insights, but I cannot figure out which data source/data transfer task each job log belongs to.

Userlevel 6
Badge +5

@bas.hopstaken the Job Logs endpoint refers to the name of the transfer task in the message (although not the data source). So if you extract this data, you should be able to find out which transfer task completed successfully at which time. As a workaround, if you name the transfer task so that it contains the data source name, you can tell which data source the transfer task belongs to. Please feel free to submit an idea here.


Userlevel 3
Badge +4

@Christian Hauggaard: I will rename my transfer tasks to include the data source name. Nevertheless, it would be great if we could also extract the status/execution logs from the ODX tasks and the MDW/SSL instance execution packages using the API.

Badge

Hi @Christian Hauggaard ,

I would like to set up a new data source in TimeXtender which extracts data from the TimeXtender API endpoints. However, my knowledge of the new version of TimeXtender is quite limited, especially when it comes to configuring REST API data sources in the portal.

I have managed to extract the job logs in Postman in accordance with your guide. I would like to extract the same info directly in TimeXtender.

Could you provide me with guidance on how this could be accomplished?


Best regards,

Markus  

Badge

Great to hear that the API is live! This will be a welcome development for our V20 clients that have been hesitant about moving to V21 due to a lack of this type of feature.

Now, when are endpoints coming for the Catalog? Another demand from our clients is to be able to surface the design and lineage information outside of TimeXtender so that it can be shared with their end-user community. A set of endpoints to surface that information would be amazing!

Userlevel 5
Badge +5

I have been working on creating some RSD files to use for this.

To make them work, you need to change the locations where I have added xxxx.

Once you have added your own API key and, if necessary, an existing Job Id, they will work. You only have to add the path to the location of the files in the Location field in the REST data source.

Try them out. I have included some different methods I use; some may work better for your setup.

To apply any of the query slicers, you will need to create a managed query like this:

SELECT [_id], [CreateTime], [EndTime], [Id], [JobId], [State], [TriggerId] FROM [REST].[JobLogs]
WHERE [JobId] IN (SELECT [Id] FROM [REST].[Jobs])


Userlevel 2
Badge

Hi team, 

Quick question: which aspect of jobs and executions in TimeXtender is called a trigger in this API?

Kind regards,

Andrew

Userlevel 6
Badge +5

Hi @andrew.gebhard 

A Trigger Id is unique to an execution of a job and is attached to the logs created once the job has started. The primary purpose of a Trigger Id is to filter logs. Please note that only scheduled executions generate a Trigger Id; manual executions of a job result in a blank (empty guid) Trigger Id.

For example, if a job is scheduled to run twice a day, two new Trigger Ids will appear in the job logs for that job each day.

Please let me know if this answers your question.

Userlevel 1

My development team tracks modifications in the description field of each object, using a #-sign to reference the specific DevOps work item driving the change.

We extract these descriptions from the local project repository and augment them with information from our DevOps Work Items.

I have documented this process in an article, which you can read here: https://www.linkedin.com/pulse/powerbi-timextender-devops-enhancing-deployment-insight-marco-noordam

However, with the transition to a new cloud repository, this method is no longer viable.

Perhaps TimeXtender could adjust to our workflow, providing a comprehensive changelog for each object that is directly linked to DevOps? ;-)


Userlevel 6
Badge +5

@mnoordam innovative solution, thanks for sharing! Please feel free to submit a product idea here: https://support.timextender.com/ideas

Userlevel 1

@Christian Hauggaard will do, thx!


For those that are using TimeXtender with Qlik Cloud, we have a blog post about how to schedule TimeXtender jobs with Qlik Application Automation that you may find interesting: https://www.bitmetric.nl/blog/schedule-timextender-jobs-qlik-automation/

How long are job logs retained by the API? There doesn't seem to be any paging on the job logs endpoint. Won't this data set become quite bulky after a while? It would also be good to have date filters so we can use incremental loading.


 @Christian Hauggaard  Any input on this?

Userlevel 5
Badge +5

Hi @bitmetric_barry 

The logs are based on the execution logs, and if they work like the other log options, they will be retained for 90 days.

This is where the option to use this as a data source comes in, for example with our TX REST data source.
That is why I made a new guide that points to this; no RSD files are needed.


Hi @Thomas Lind,


Thanks for your answer. Is there any way to specify a filter on the TX API? For example, to only retrieve logs for the past 3 days.


Thanks,

Barry

Userlevel 5
Badge +5

Hi @bitmetric_barry 

It does not look like you can filter the rows with any parameters. The only other filter option is the triggerId.

It is a good idea for a new option, though.
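
A possible workaround is to filter client-side after fetching, for example on CreateTime. A rough Python sketch, assuming CreateTime is a UTC ISO 8601 string and reusing the DOMAIN, HEADERS and job_id placeholders from the article's examples:

from datetime import datetime, timedelta, timezone
import requests

cutoff = datetime.now(timezone.utc) - timedelta(days=3)
executions = requests.get(f"https://{DOMAIN}/public/jobs/{job_id}/logs", headers=HEADERS).json()

# Keep only executions created within the last three days. The "Z" suffix is
# replaced so datetime.fromisoformat can parse it on older Python versions.
recent = [
    ex for ex in executions
    if datetime.fromisoformat(ex["CreateTime"].replace("Z", "+00:00")) >= cutoff
]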

Is this available for all versions? (am using legacy version)

Userlevel 5
Badge +5

Is this available for all versions? (am using legacy version)

Hi @htrimas 

You can only connect to it with an API key, and it is part of the SaaS release only.

In the LTS (legacy) release, we have the TX Dictionary template project and the option to query the ODX local backlog SQLite database, both of which can give info about executions.

You can also see past executions on the portal.timextender.com page under Projects.
