The first major release of 2025 brings new time-saving features for data source connections, an improved REST data source provider, less "simple" Fabric Prepare instance storage, and a whole lot of smaller changes. Check out the full list below!
New
Import and export data source connections
You can now import and export data source connections to use in different accounts, save for later, or share with coworkers. You can of course use import to create new data source connections, but you can also overwrite the data of an existing connection. If some fields cannot be mapped to the connection on import, the mismatched fields will be listed so you can decide what to do.
Two import/export formats are supported: the connection string format ("key=value;key2=value") used by the ODX in TimeXtender 20.10, and a JSON-based connection profile that includes more extensive information. No matter the format, passwords and other sensitive fields are always excluded from the export.
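To make the connection string format concrete, here is a rough sketch of parsing a semicolon-separated "key=value" string and excluding sensitive fields, as the export does. This is an illustration only, not TimeXtender's actual implementation; the helper names and the list of sensitive keys are invented for the example.

```python
def parse_connection_string(conn_str):
    """Parse a 'key=value;key2=value2' connection string into a dict."""
    pairs = {}
    for part in conn_str.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate trailing semicolons
        key, _, value = part.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

# Illustrative list of field names to treat as sensitive (assumption).
SENSITIVE_KEYS = {"password", "pwd", "secret", "token"}

def redact_for_export(pairs):
    """Drop sensitive fields, mirroring how exports exclude passwords."""
    return {k: v for k, v in pairs.items() if k.lower() not in SENSITIVE_KEYS}
```

For example, `redact_for_export(parse_connection_string("Server=myhost;Database=db;Password=x"))` keeps `Server` and `Database` but drops `Password`.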
Change data source provider
We added the option to change the provider of a data source connection. While you can change freely between providers, changing from one SQL provider to another is obviously less of a hassle than changing from SQL to a REST provider. If the old and new providers don't have exactly the same fields, you'll be able to review the mismatches and make adjustments before committing the change.
Customize code for Prepare instances on Fabric Lakehouse (Public Preview)
When you're using Microsoft Fabric Lakehouse as Prepare instance storage, you can now add inline scripts that are executed along with the Fabric Notebooks created by TimeXtender Data Integration. With this, the customize code feature is now also supported for Prepare instances on Fabric.
Improved
Improved REST data source provider
We've made a ton of improvements to the REST provider to make it more flexible and able to support more data sources. You can now import connection information that follows the OpenAPI/Swagger Specification, which can be a real time-saver. The new 'Authentication endpoint' authentication type allows you to get authentication data, e.g. a token, from an endpoint and use it for authentication, e.g. by adding it to the header of all requests. We've also removed the requirement to select a table in TimeXtender Data Integration for every endpoint that the endpoint you actually want data from depends on. The new 'Endpoint query' dynamic values source lets you create dynamic values that combine data from multiple endpoints with a SQLite query. On top of that, the REST provider now supports data in CSV format in addition to JSON, XML, and plain text. Other changes include the following:
- New option on endpoints to set a delay before the first request to the endpoint as well as a delay between requests.
- Global values, i.e. variables that can be set once and used across endpoints.
- Dynamic values created by pagination are now available outside of pagination.
- New built-in dynamic values: TX_ExecutionTimestamp, TX_XmlFileName, TX_TableFlatteningFileName.
- Support for OAuth response with 'accessToken' instead of 'access_token'.
- Support for a custom header prefix for OAuth token.
- New option for setting the data format (JSON/CSV/XML/Text) explicitly instead of relying on the automatic selection logic.
- After converting from JSON to XML, tables that contain only an ID column are now removed from the metadata result.
- The endpoint path can now be overridden by a complete URL.
- Pagination can replace both the URL and the post body at the same time.
- New option to set empty fields to null.
- New option to enable debug logging.
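To illustrate the general idea behind the 'Endpoint query' feature, the sketch below loads sample results from two hypothetical endpoints into an in-memory SQLite database and combines them with a join. This is plain Python with the standard sqlite3 module, not TimeXtender's implementation; the endpoint data, table names, and columns are all invented for the example.

```python
import json
import sqlite3

# Hypothetical JSON responses from two REST endpoints (invented sample data).
orders = json.loads('[{"id": 1, "customer_id": 10, "total": 99.5}]')
customers = json.loads('[{"id": 10, "name": "Acme"}]')

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id, customer_id, total)")
con.execute("CREATE TABLE customers (id, name)")
con.executemany("INSERT INTO orders VALUES (:id, :customer_id, :total)", orders)
con.executemany("INSERT INTO customers VALUES (:id, :name)", customers)

# A SQLite query combining data from both endpoints into one result,
# which could then feed a dynamic value.
rows = con.execute(
    "SELECT o.id, c.name, o.total FROM orders o "
    "JOIN customers c ON c.id = o.customer_id"
).fetchall()
```

Here `rows` contains one combined row per order with the customer name resolved from the second endpoint.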
Prepare instance on Fabric Lakehouse no longer "simple" (Public Preview)
When we added support for Microsoft Fabric Lakehouse as Prepare instance storage in our last major release, functionality was limited to what's possible with simple mode enabled. With this release, that's no longer the case. We've added support for transformations, conditional lookup fields, supernatural keys, and aggregate tables.
However, we still have some work to do since the following features are still not supported: History, custom data, custom views, junk dimensions, related records, table inserts, and hierarchy tables. While incremental load is supported, it is currently necessary to manually define an incremental selection rule on the Prepare instance. Incremental settings from the source are not automatically applied.
Remember instance migration settings
You can now save your settings when migrating an instance between environments, making it easier to reuse them later. Saved settings will automatically apply when you migrate the same instance again, but can be modified as needed before migrating.
Fixed (TDI Portal)
- Fixed more than a dozen smaller issues and inconsistencies in the look and feel of various tables in the Portal to create a more streamlined and user-friendly experience.
- Fixed an issue where right-click was disabled in tables even when there was no custom menu defined, preventing users from, e.g., opening an instance in a new tab.
- Fixed an issue with changing Prepare storage type.
- Values could visually overlap on the instance overview page.
- Fixed an issue where uncategorized instances could not be added when creating a new environment.
- Fixed an issue where timeouts were treated as shorts instead of integers on some instance pages.
- The Data Source Mappings section would shift when opening a dropdown.
- On the Instances page, the Uncategorized Instances section would be displayed even when empty.
- The Edit Environment modal couldn't handle long instance names.
- When adding a data source connection, sometimes an error message would only pop up after you had been redirected away from the page.
- Unclear wording in successful update message on data source connections.
- Removed ":" from data source connection checkboxes with no description.
- Fixed an issue where customers could not change basic info for their organization.
- In the activity log, some values related to firewall rules and restoring an instance would not be displayed correctly.
Fixed (TimeXtender Data Integration)
- Fixed reference to old instance type names in the job log.
- Fixed an issue where editing a snippet-based script action could cause a "Label not found" error.
- Fixed an issue where you could add an execution package with failure handling set to retry steps while having multiple threads enabled and managed execution disabled.
- Fixed an issue in Snowflake where the deployment of a table would fail if the raw schema was different from the error/warning schema.
- Fixed a null reference exception when deleting a table referenced in a custom data selection rule across multiple data areas.
- Fixed an issue where the Metadata Manager did not pick up changes when tables and columns match on an identity other than name.
- Fixed an issue where execution of a table in an Ingest instance would fail with "name 'PATH' is not defined" and/or "name 'FULL_LOAD' is not defined" when the storage type is Fabric Lakehouse with schema and the source of the Deliver table is the TimeXtender OneLake Finance & Operations data source.