
Timeouts in TimeXtender Classic

  • February 24, 2025

Christian Hauggaard
Community Manager

There are a variety of places where you can set up timeouts, and they work differently depending on where you change them. This article covers the different timeout types, where they are set up, and how they are used. Note that a value of 0 means it will wait an infinite amount of time.

The two types of timeout

Connection

The time, in seconds, to wait for a connection to open. The default value is 15 seconds. The slower the connection, the higher it should be set. This is mainly relevant for external data sources; they could be physically located on the other side of the globe, so an increase might be in order.

Command

The time, in seconds, to wait for a command to execute. The default varies between the different places it is set in TimeXtender. This is the amount of time we wait from one command to the next, for example as part of a data cleansing procedure. Increase it if you have large data loads that need to run through multiple transformations. It will not usually be necessary for a one-to-one copy, but it can be when using lookup fields that use a Top operator or similar.
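The same two settings exist on any database connection outside TimeXtender. As a rough illustration of the difference (this is not TimeXtender's internal code, and the server and database names are placeholders), here is how the two timeouts map to a plain ODBC connection in Python with pyodbc:

    import pyodbc

    # Placeholder connection string - server and database are examples only.
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mysqlserver;DATABASE=StageDB;Trusted_Connection=yes"
    )

    # Connection timeout: how long to wait for the connection to open (seconds).
    cnxn = pyodbc.connect(conn_str, timeout=15)

    # Command timeout: how long each statement may run before it is cancelled.
    # 0 means wait indefinitely, matching the note above.
    cnxn.timeout = 1800

    cnxn.cursor().execute("SELECT 1").fetchone()
    cnxn.close()

The connection timeout only matters while the connection is being established; the command timeout is a property of the open connection and applies to every statement run on it.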

Where to change timeout settings

  1. Data source timeouts, such as SQL. The standard settings are the same in all adapters except Excel, text, and to some degree Any Source. In Any Source connectors there might be a field you can change that contains timeout settings. The defaults are Connection 15 and Command 100. The reason the command timeout is 100 and not higher is that the data source only transfers data from one server to the other and does not do any of the data cleansing.
  2. In the DSA Staging database. Here the default command timeout is 1800 seconds, as a lot of data cleansing is done here.
  3. In the Data Warehouse databases. The standard settings are similar to those of the Staging database.
  4. Command timeout in Qlik Sense Server.

Timeout errors and where to increase the timeouts

The standard settings are fine and most users will never have a reason to change them. Sometimes, though, you will start getting timeout errors.

Start by locating the step at which the timeout occurred.

  1. Is it the first thing that happens when executing a table from a data source? Then the connection timeout on the data source needs to be increased (place 1 in the list above).
  2. Is it during the transfer step of a table in the Staging database? Then increase the data source command timeout (place 1 in the list above).
  3. Is it during the data cleansing step of a table in the Staging database? Then increase the command timeout on the Staging database (place 2 in the list above).
  4. Is it happening on the Data Warehouse during transfer? Then increase the command timeout on the Staging database (place 2 in the list above).
  5. Is it happening on the Data Warehouse during data cleansing? Then increase the command timeout on the Data Warehouse database (place 3 in the list above).
  6. Is it happening during the OLAP execution? Then increase the command timeout on the Data Warehouse database (place 3 in the list above).
  7. Is it happening during the Qlik execution? Then increase the command timeout in Qlik Sense Server (place 4 in the list above).
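Before increasing anything, it can be worth reproducing the failing statement outside TimeXtender to confirm that it really is a timeout and to see how long the step actually needs. Below is a minimal sketch using pyodbc; the connection string and query are placeholders to be replaced with the ones from the failing step, and SQLSTATE HYT00 is the ODBC code for "timeout expired":

    import time
    import pyodbc

    # Hypothetical connection string and statement - replace with the server,
    # database and query from the step that timed out.
    CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=myserver;DATABASE=DSA;Trusted_Connection=yes")
    STATEMENT = "SELECT COUNT(*) FROM dbo.SomeLargeTable"

    cnxn = pyodbc.connect(CONN_STR, timeout=15)  # connection timeout
    cnxn.timeout = 1800                          # command timeout, 0 = wait forever

    start = time.time()
    try:
        cnxn.cursor().execute(STATEMENT).fetchall()
        print(f"Finished in {time.time() - start:.0f} s - the step is slow, not broken")
    except pyodbc.OperationalError as exc:
        if "HYT00" in str(exc):
            print(f"Timed out after {time.time() - start:.0f} s - raise the timeout or tune the step")
        else:
            raise
    finally:
        cnxn.close()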

What to do about the timeouts

Increasing the amount of time you wait can work for some issues, but it does not solve everything. Mostly it is a matter of figuring out how to make the execution faster. That is a big topic, but some of the most common solutions are listed below.

First you should figure out why it happens.

  1. Does it happen during the nightly execution, with a timeout on the data source? Then it might be a loss of connection to the data source server; perhaps it runs a restart or service task at night, or something similar.
    • Choose what should happen when it fails.
    • Some of the data source adapters have a batch size option. Decrease it, so the execution is split into more parts (see the sketch after this list).
    • Some are set to 0 by default, so setting a number will reset the timeout after each batch has been transferred.
  2. Does it happen during the transfer step, and on which table? How is that table set up, and how much data does it contain?
    • Set up incremental load on this table.
    • If automatic index generation is turned off, turn it on.
  3. Does it happen during the data cleansing step of any table? What is happening during this step, e.g. how many lookup fields are there and how many tables do they relate to?
    • If automatic index generation is turned off, turn it on.
    • Set up incremental load on that table.
    • Is the lookup type Partition by or Top? Then change it to Group by if possible.
    • Do you have a join that is not an equality, e.g. larger than or smaller than? Consider whether the join can be rewritten.
    • If you can change the lookup fields so that as many as possible come from the same table, a single overall group by containing all of these fields will be generated.
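To illustrate the batch size point in item 1 above: when a transfer runs in batches, each batch is its own command, so the command timeout applies per batch instead of to the whole transfer. The sketch below shows the principle with pyodbc; it is not how TimeXtender implements its transfers, and the connection strings, table and column names are made up:

    import pyodbc

    # Hypothetical source and target - replace with real connection strings.
    SRC = ("DRIVER={ODBC Driver 17 for SQL Server};"
           "SERVER=srcserver;DATABASE=SourceDB;Trusted_Connection=yes")
    TGT = ("DRIVER={ODBC Driver 17 for SQL Server};"
           "SERVER=dwhserver;DATABASE=StageDB;Trusted_Connection=yes")
    BATCH_SIZE = 50_000

    src = pyodbc.connect(SRC, timeout=15)
    tgt = pyodbc.connect(TGT, timeout=15)
    src.timeout = 100   # command timeout per statement, like the data source default
    tgt.timeout = 1800

    read = src.cursor()
    write = tgt.cursor()
    write.fast_executemany = True

    read.execute("SELECT CustomerID, Amount FROM dbo.Sales")
    while True:
        rows = read.fetchmany(BATCH_SIZE)
        if not rows:
            break
        # Each INSERT batch is a separate command, so the command timeout
        # window starts over for every batch.
        write.executemany(
            "INSERT INTO dbo.Sales_R (CustomerID, Amount) VALUES (?, ?)", rows)
        tgt.commit()

    src.close()
    tgt.close()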