It's winter here in the northern hemisphere, so of course our new release of TimeXtender Data Integration includes improvements for our Snowflake integration. AI, however, is hot as a summer day, and we're happy to introduce the TimeXtender MCP Server endpoint for connecting your data to AI agents in a simple and secure way. And those are just two of the cool new features in this release – dive into the full list below.
We’ve updated the initial release (and bumped the version from 7256.1 to 7257.1) to fix two issues, one of which prevented the Execution Service from starting. See the details under Fixed.
New
Connect AI and data with the TimeXtender MCP Server Deliver endpoint (Preview)
The new TimeXtender MCP Server endpoint in the Deliver layer lets you plug tools like ChatGPT and Claude directly into your governed, business-ready semantic models, so AI works in your real business language instead of guessing from raw database schemas. You get faster, more accurate answers because the AI sees clear entities, relationships, and measures such as “Customer Name” and “Revenue YTD,” not cryptic table and column names.
Compared to generic MCP servers that probe databases blindly, this endpoint exposes your semantic layer with AI-ready data, read-only, least-privilege access, and enterprise authentication, improving the reliability and security of your AI-driven insights.
This feature is in preview - visit Get Early Access if you want to try it out.
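To give a sense of what "plugging in" looks like, here's a minimal client sketch using the open-source MCP Python SDK. The endpoint URL, transport, and bearer token are placeholders we've assumed for illustration – the actual connection details depend on your setup.

```python
# Minimal sketch of an MCP client listing the tools a TimeXtender
# MCP Server endpoint could expose. URL and token are placeholders,
# not documented TimeXtender values.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    async with sse_client(
        "https://your-tenant.example.com/mcp/sse",         # hypothetical endpoint
        headers={"Authorization": "Bearer <your-token>"},  # hypothetical auth
    ) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The semantic layer surfaces business-friendly names,
            # e.g. "Customer Name" and "Revenue YTD", as tool metadata.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```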
Snowflake as a new storage option in Ingest
We’ve added support for native Snowflake storage in Ingest instances. Previously, you had to land data in Azure Data Lake before moving it to Snowflake in the Prepare layer. Now, you can cut complexity and cloud costs by running a Snowflake‑only architecture.
By loading into Snowflake using its native staging and COPY INTO patterns, you follow Snowflake best practices while still benefitting from TimeXtender’s incremental load logic, helping you keep Snowflake usage and execution times under control.
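To make the pattern concrete, here's a rough, hand-written sketch of Snowflake's stage-and-COPY INTO flow using the snowflake-connector-python package. It illustrates the general best practice, not TimeXtender's actual implementation; all connection details and object names are made up.

```python
# Sketch of Snowflake's native bulk-load pattern: stage a file,
# then load it with COPY INTO. All names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="loader",
    password="...",
    warehouse="LOAD_WH",
    database="INGEST_DB",
    schema="RAW",
)
cur = conn.cursor()

# 1. Upload the extracted file to the table's internal stage (@%ORDERS).
cur.execute("PUT file:///tmp/orders.csv @%ORDERS AUTO_COMPRESS=TRUE")

# 2. Bulk-load the staged file. With no FROM clause, COPY INTO reads
#    from the table's own stage and skips files it has already loaded.
cur.execute("COPY INTO ORDERS FILE_FORMAT = (TYPE = CSV)")

conn.close()
```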
Snowflake improvements for Prepare storage
We’re adding support for the remaining core TimeXtender features for Prepare instances on Snowflake storage, including custom data, custom hash fields, pre- and post-scripts, related records, checkpoints, hierarchy tables, and integration of existing objects. One notable exception is object-level security and related features, which will be included in an upcoming release.
Previously, you had to use a Microsoft SQL variant as storage if you needed these capabilities, since Snowflake storage only supported the simple-mode feature set. Now, Snowflake and SQL support roughly the same features, so you can move projects across storage technologies more easily and still follow TimeXtender best practices regardless of which platform you choose.
Select and transfer data between Prepare instances on SQL
You can now select and transfer data between Prepare instances, so you can use one Prepare instance as a data source for another in the same way you already do with Ingest. This is currently supported for Prepare instances using SQL storage.
Previously, you were limited to pulling data into Prepare only from Ingest instances, which made it impossible to build more than three logical layers, reuse an external Prepare-based dataset across projects, or design hybrid setups. Now you can drag and drop tables between Prepare instances and combine data from multiple Prepare and Ingest sources into the same table, giving you much more flexibility to design layered, reusable solutions without manual workarounds.
Qlik Cloud Deliver endpoint
We’ve added a new Deliver endpoint for Qlik Cloud, so you can push TimeXtender semantic models directly into Qlik’s SaaS platform instead of only targeting Qlik Sense Enterprise.
Previously, you had to rely on workarounds and custom tricks to connect TimeXtender to Qlik Cloud, which added friction, complexity, and extra maintenance for you and your partners. Now you can configure Qlik Cloud as a first‑class endpoint, reuse your existing Qlik skills and apps, and keep TimeXtender as your central, governed data and semantic layer while still following Qlik’s recommended APIs and patterns. Note that managed workspaces are currently not supported.
Support for SQL Server 2025
TDI now supports Microsoft SQL Server 2025, released in November 2025, as a storage option.
Improved
Creating new instances is now much faster
By pre-provisioning resources, we’ve brought the time it takes to create a new instance down to 3-5 seconds in most cases. We hope you’ll enjoy the time saved – we sure do!
Decide when data source providers are updated
You can now control when data source providers are updated in Ingest instances. Previously, the system automatically updated all data sources to the latest compatible version when you installed a new version of TDI. This could introduce unexpected behavior, breaking changes, or even downtime in your production environments.
Now you decide when and where to update data source providers: you can turn off all automatic updates, keep the existing “always update” behavior, or selectively enable automatic updates per data source.
More efficient primary key storage for incremental loads on SQL
We’ve reduced duplication of primary key data for incremental loads on SQL storage, improving storage efficiency: each primary key row now has a validity range instead of being repeated for every batch. Existing tables are upgraded automatically, and both the old and new structures continue to work when transferring data from Ingest to Prepare.
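As a rough illustration of the change (with made-up names – the actual internal structure isn't shown here):

```python
# Illustrative only; column names are invented for the example.
# Old layout: the primary key is repeated once per incremental batch.
old_layout = [
    # (key, batch_id)
    ("CUST-1", 101),
    ("CUST-1", 102),
    ("CUST-1", 103),
    ("CUST-2", 102),
    ("CUST-2", 103),
]

# New layout: each key is stored once with a validity range of batches.
new_layout = [
    # (key, valid_from_batch, valid_to_batch)
    ("CUST-1", 101, 103),
    ("CUST-2", 102, 103),
]
```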
User experience improvements
In addition to the bigger items, we’ve also included a few smaller improvements to the user experience. The Query Tool window has an updated layout with a word wrap option. Building on the work done in our last major version, data lineage performance has been improved. Finally, the SQL Server Cleanup tool can now resolve the names of other instances in the storage that would previously be shown as ‘Unknown TX object’.
Deprecated
Dedicated SQL Pool (formerly SQL DW) deprecated as Prepare storage
We’ve supported Dedicated SQL Pool as data warehouse/Prepare instance storage since 2019, but we’ve seen very little usage. For that reason, we’ve decided to deprecate support for it so that we can focus our efforts on more promising storage technologies.
Fixed
TDI Portal
- Users would sometimes be unexpectedly logged out of the TDI Portal.
- The SQL connection string in the additional parameters for SQL 2022 and Dedicated SQL Pool was not correctly validated in the TDI portal during save.
- The company details would occasionally show an error instead of company address data.
- Fixed a cosmetic issue where the secret field for Azure Data Lake Storage was unintentionally set to blank after saving.
TimeXtender Data Integration
- In Prepare instances using Fabric storage, incremental load with deletes would, in some cases, return duplicate records.
- It was possible to create and deploy two tables with the same name and schema - the last table to be deployed would just “win”. Now, this will result in a validation error on deployment.
- The Query Tool would show datetime values as dates.
- In some cases, the proxy setting was not applied when making API requests to the TimeXtender web services.
- It wasn’t possible to generate documentation on Ingest instances.
- We’ve fixed a bunch of issues that would pop up when using Snowflake Prepare instance storage:
- Dragging a table to the Views node would create a view where the FROM clause was empty.
- Conditional lookups would return incorrect values when the ‘Merge conditional lookups’ option was set to the default ‘Merge all if possible (fastest)’.
- Data selection rules didn’t work.
- The application would attempt to cast NULL to datetime2, which is not a valid data type in Snowflake.
- (v. 7257.1) Running a scheduled TDI execution package in Orchestration could cause the error “The process is unresponsive. Failed to read from process after retrying 3 times,” especially when many executions ran simultaneously. We have fixed an issue in the communication between the Execution Service and the main TDI application to resolve this.
- (v. 7257.1) An issue with the Execution Service configuration file prevented the service from starting because it couldn't load a required DLL.