We've released a new version of TimeXtender (Desktop v. 6143.1) with a bunch of new features and even more fixes - see what's new below.
Warning: The new version of TimeXtender does not support version 11 of the following data source providers:
- Azure Data Factory - MySQL
- Azure Data Factory - Oracle
- Azure Data Factory - PostgreSQL
- Azure Data Factory - SQL Server
Please use version 12 of these data sources with the new release.
New
- Free tier replaces trials: You can now use TimeXtender for free for as long as you like without worrying about running out of credits. When you sign up for TimeXtender, you now start on the Free tier, which never expires but comes with a few limitations. Existing trial accounts will be converted to the Free tier.
Limitations of the Free tier:
- One user
- One semantic model
- One data warehouse
- One ODX
- One data source
- Azure Data Lake Storage cannot be used for ODX storage
- Dedicated SQL Pool (SQL DW) and Snowflake cannot be used for data warehouse storage
- Data warehouse on Snowflake: We've added support for Snowflake, so for the first time you can deploy a TimeXtender data warehouse to non-SQL data storage and, of course, take advantage of Snowflake features. Our initial implementation requires an ODX that uses Azure Data Lake Storage with SAS authentication and only works with the Qlik and CSV file endpoints in the semantic layer. On the data warehouse, only features supported by simple mode are available. Read more on how to Use Snowflake as data warehouse.
- Improved scheduling (Desktop): You can now schedule execution packages from DWH and SSL instances in the same job. This is useful if you, for instance, want to execute a semantic model just after the relevant tables in your data warehouse have been executed. Note that the instances must be mapped to the same TimeXtender Execution Server service.
- On-Demand data warehouse ingestion: When the data on demand option is enabled, the data source will refresh each table in the ODX storage before transferring it to the data warehouse storage. This works without configuring an explicit "transfer task" under the data source.
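To make the on-demand ingestion flow easier to picture, here is a minimal Python sketch. The class and method names are purely illustrative assumptions, not TimeXtender's API; the only point it shows is the ordering: with data on demand enabled, each table is refreshed in ODX storage immediately before it is transferred to data warehouse storage, with no separate transfer task involved.

```python
# Illustrative sketch only -- these classes are stand-ins, not TimeXtender's API.
class OdxStorage:
    def refresh(self, table: str) -> None:
        # Pull the latest data for the table from the source into ODX storage.
        print(f"ODX: refreshing {table}")


class DataWarehouse:
    def transfer(self, table: str) -> None:
        # Move the freshly refreshed table on to data warehouse storage.
        print(f"DWH: transferring {table}")


def execute_on_demand(tables: list[str], odx: OdxStorage, dwh: DataWarehouse) -> None:
    # With data on demand enabled: refresh first, then transfer, table by table.
    for table in tables:
        odx.refresh(table)
        dwh.transfer(table)


execute_on_demand(["Customers", "Orders"], OdxStorage(), DataWarehouse())
```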
Changed (Portal)
- For consistency, we've added an 'Edit' button for each item on the 'Data sources' list.
Fixed (Portal)
- 17587: It was not possible to add a data warehouse with Azure AD as the authentication method (released as a hotfix).
- 16800: 'Clone data source' had the wrong "breadcrumb".
- 17809: When using the "up" button, the input box for the 'Batch size' option on ODX and data warehouse storage would max out at 65536, which is far below the valid maximum value.
- 17485: The Permissions list is now hidden from 'Edit company details' when the list is empty.
- 17319: The Merge button is now disabled once you've clicked it to prevent accidental additional clicks.
Fixed (Desktop)
- 16902: Issue with misleading text in the Synchronize window when synchronizing a data warehouse with an ODX
- 16686: An unnecessary 'Connection Changed' message could show up when using the Query Tool on the data warehouse
- 17878: Issue where "resume execution" would skip Table Insert and Related Records
- 16865: Data lineage was not working for views in data warehouse-to-data warehouse fields
- 16599: Previewing a query table in the ODX sometimes wouldn't suggest the query table's statement, but would instead use "Select * from..."
- 16036: When reloading an instance using 'Save and Reload', previously open tabs were not reopened.
- 17482: Removing a table that was included in an Object Security Setup would cause the next deployment of that Object Security Setup to fail because references to the deleted table remained.
- 16708: Using Export Deployment Steps to a CSV file would cause a null reference error
- 17249: Compression on a table could not be combined with having history enabled: enabling page compression on such a table would result in the message "System field 'Is TombStone' cannot be removed".
- 16825: Data lineage tracing between a data warehouse view and a semantic model did not work. The semantic model did not track lineage through a mapped custom view.
- 17687: TimeXtender would crash when using the Deploy and Execute hotkey on views based on SQL snippets
- 16704: Using Select Columns to remove columns from query tables would fail on execution when transferring from an ODX on Azure Data Lake Storage.
- 17653: The Edit Data Area dialog would allow more than 15 characters in the area name.
- 16645: Pressing Enter didn't trigger the search function in the remap table when remapping an ODX.
- 15407: A primary key validation error would remove all rows for that primary key from the valid table when using incremental load with hard deletes.
- 17148: It was not possible to change letter casing in the name of a conditional lookup field by clicking on the field and pressing the F2 "rename" keyboard shortcut.
- 17267: Incremental load with hard deletes from an ODX using Azure Data Lake Storage to a data warehouse on Azure Synapse Dedicated SQL Pool would not transfer primary keys when no new data existed in the ODX, which would cause the valid table to be truncated (see the sketch after this list).
- 17591: Adding both pre- and post-deployment steps for an incremental table would not redeploy the valid and incremental tables on a "full load deploy"
- 16836: Trying to send a test mail in Notifications on Critical Errors would throw an error instead of sending an e-mail.
- 16729: Reconnecting to an Azure service in TimeXtender would fail after 12 hours without activity against that Azure service.
- 15995: Data lineage was missing information when a default relation was used instead of a join on a conditional lookup field.
- 17359: You would see an error message when testing a mail notification in Notifications on Critical Errors if the server returned "2.6.0 Queued mail for delivery", which isn't actually an error.
- 17115: SMTP authentication without a password did not work.
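To make the failure mode behind 15407 and 17267 easier to picture, here is a minimal Python sketch of the assumed hard-delete logic; the function and variable names are illustrative, not TimeXtender's implementation. With hard deletes, rows whose primary keys are missing from the transferred key set are treated as deleted, so if no keys are transferred at all, every row looks deleted and the valid table is effectively truncated.

```python
# Illustrative sketch of hard-delete handling -- not TimeXtender's actual code.
def apply_hard_deletes(valid_rows: dict[int, str], transferred_keys: set[int]) -> dict[int, str]:
    # Keep only rows whose primary key was seen in the transferred key set;
    # everything else is treated as hard-deleted in the source.
    return {pk: row for pk, row in valid_rows.items() if pk in transferred_keys}


valid = {1: "Alice", 2: "Bob"}

# Normal case: both keys are transferred, so nothing is removed.
print(apply_hard_deletes(valid, {1, 2}))  # {1: 'Alice', 2: 'Bob'}

# Failure case: no new data meant no keys were transferred,
# so every row looked deleted and the valid table was truncated.
print(apply_hard_deletes(valid, set()))   # {}
```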