TimeXtender 6590.1

Related products: TimeXtender Desktop, TimeXtender Portal

It's officially spring in the northern hemisphere and, incidentally, we have a bouquet of features and improvements ready for you in TimeXtender 6590.1.

New

  • ODX on OneLake: We are excited to introduce Microsoft Fabric OneLake as ODX storage on our platform. This enhancement enables users to seamlessly harness OneLake for all ODX operations, from initial setup and configuration in the Portal to comprehensive integration within TimeXtender workflows. This is the first of many planned integrations with Microsoft Fabric, so stay tuned! Note, however, that you currently cannot use OneLake as your ODX storage if you use Snowflake as your data warehouse storage. 
  • New data source provider for OneLake: We've added a new data source provider for seamless and efficient ingestion of delta parquet tables from Microsoft Fabric OneLake, directly into your preferred storage solution via the TimeXtender ODX. While optimized for ingesting data from OneLake, this provider is not intended for use with the Fabric OneLake ODX storage described above.
  • Publish data as a REST API endpoint: Added a new semantic endpoint, REST API, that works together with a server component installed on-premises or on a virtual machine in the cloud to publish data through REST API endpoints. As getting data through a REST API is a very common use case, the new endpoint type opens up a host of opportunities for integrating TimeXtender with other tools.
    In our previous major release, 6505, we introduced a new REST data source provider. This means that you can now both publish and ingest data from your TimeXtender solution through a REST API using first-party components. For an idea of what consuming a published endpoint could look like, see the short example after this list.
  • New and improved data source providers for Hubspot and Exact Online: The new providers dramatically improve upon the previous CData options with enhanced usability and performance, and they allow you to easily add custom endpoints and flatten complex tables. To upgrade to the new connectors today, search for "TimeXtender Hubspot" or "TimeXtender Exact" when adding a new data source connection. Then, in ODX, you can edit an existing data source configuration and change it to the new TimeXtender data source connection. Read more about editing data sources

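As a taste of what the new REST API endpoint type enables, here is a minimal Python sketch of consuming a published endpoint with the requests library. The base URL, endpoint name, and bearer-token authentication below are illustrative assumptions only; the actual host, paths, and security scheme depend on how you configure the server component and the endpoint.

    import requests

    # Hypothetical placeholders - the actual host, path, and authentication
    # scheme depend on how the server component and endpoint are configured.
    BASE_URL = "https://tx-rest.example.com/api"
    API_KEY = "<your-api-key>"

    # Request rows from a published endpoint, here assumed to be named "customers".
    response = requests.get(
        f"{BASE_URL}/customers",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()

    # Assuming the endpoint returns rows as a JSON array of objects.
    for row in response.json():
        print(row)
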
Improved

  • You can now have multiple data warehouse instances open at the same time in Desktop.
  • We've reshuffled the shortcut menu on Data Sources in ODX instances. "Add Data Source" now redirects to the "Add data source connection" page in the Portal, while the previous "Add Data Source" functionality is now called "Map Existing Connection". The intention is to make it clearer that adding a brand-new data source happens in the Portal, while "adding" a data source in Desktop means using one of the data source connections mapped to the instance in the Portal.
  • We've upgraded much of our underlying UI framework, so you might notice a few changes and improvements to the Portal UI as a result.
  • When adding firewall rules to instances, your own IP address is now automatically suggested.

Fixed (Portal)

  • Fixed an issue where data source connection categories would not be shown if the category was assigned a null value.
  • Fixed an issue where MDW/SSL transfers could lead to errors.
  • Fixed an issue where cloning TimeXtender REST data sources could lead to incorrect data in password fields.
  • Fixed an issue where the ODX connection timeout value was not set correctly.
  • Fixed an issue where changes to users were not propagated to the identity provider.
  • Fixed an issue where inputs were not correctly disabled on the SSL Qlik endpoint form.
  • Fixed an issue where disabled fields would show up as required on the ODX form.
  • Fixed an issue where the SSL form would sometimes incorrectly flag inputs as invalid, making it impossible to save.
  • Fixed an incorrect short name suggestion on data source mapping.

Fixed (Desktop)

  • Default settings for key stores were not remembered in the key generation menu for data warehouse instances using SQL Synapse or Snowflake.
  • The parameter '%Instance%' was not working for email notifications.
  • The CSV endpoint was not escaping the text qualifier.
  • On-demand execution from an ODX to a data warehouse would fail when the same ODX table was used in a data area multiple times and ADF was used to transfer from ODX ADLS2 storage.
  • ODX to data warehouse transfers would fail when using ADF with the same table mapped multiple times but with different columns in each mapping.
  • When a custom field was used as a parameter in a custom measure or calculation group item in a semantic model, the parameter would disappear after closing and reopening the semantic model instance and editing the custom measure or calculation group item.
  • Cleaning up mapped objects between an ODX and a data warehouse would never clean up the instance mapping, causing "Copy instance" in the Portal to always request a remapping of an instance even though it was no longer used.
  • Fixed an issue in the semantic model repository upgrade scripts where calculation group and custom script tables were not marked as tables to be included in instance copying.
  • Fixed an issue where jobs completed with errors or warnings were incorrectly displayed as "Completed". They are now accurately labeled as "Completed With Errors or Warnings".
  • The Custom Data feature on a data warehouse table did not rename the custom data column when the corresponding field was renamed.
  • The data warehouse field validation type 'Is Empty' would mark empty string values as invalid.
  • Fixed an issue where raw-only fields were included in the Data Movement pane.
  • Fixed an issue where the preview in Filter Rows in the ODX would always select the first table.
  • Fixed an issue where incremental transfer from an ADLS2 ODX with the 'Limit memory use' setting failed for empty tables.
  • Fixed an issue where transfers from an ADLS2 ODX to SQL Synapse would fail if the source table contained reserved system field names.