Right in time for Xtend 2026, we're ready with a public preview of our second major release of the year.
Microsoft Fabric Warehouse joins the Prepare storage lineup as a Public Preview, our MCP server now lets you access all your data on any storage through a single connection, and the file-based connectors have been rebuilt around a streaming architecture, so multi-gigabyte ingests no longer run out of memory.
This is just some of the exciting news in this release - dive in below!
This release is a Public Preview. This means that it is available for all partners and customers to try, but it is still being stabilized for general availability (“GA”). We encourage you to use it in sandbox and development environments, not production.
The issues found in the Public Preview will be fixed in the upcoming GA release or a later release, depending on the priority. We will not issue hotfixes for a release in Public Preview.
We have not replaced the download links for the current production version - use the links below to download the preview release:
New & Improved
Fabric Warehouse as Prepare Instance Storage (Public Preview)
You can now use Fabric Warehouse as Prepare instance storage, a perfect pairing with Ingest instances using Fabric Lakehouse storage. We've stamped this initial release a Public Preview because it ships with simple-mode-level features. In addition, it currently only works with Ingest instances on Fabric Lakehouse storage in the same Fabric workspace.
Cloud-Connected MCP Server, now with Snowflake (Public Preview)
On-prem MCP servers can now register with TimeXtender Data Platform and expose secure endpoints to cloud-hosted AI tools without inbound firewall changes. The Configurator has a "TimeXtender Cloud" tab for one-click sign-in, registration, and unregistration; TDP gains a Settings page to manage registered servers and assign them to workspaces. Under the hood, an outbound SignalR relay handles the routing, so your data and credentials never leave your environment.
Snowflake also joins SQL Server and Fabric as a supported database in the MCP Configurator.
Full View functionality on Fabric Lakehouse through Persisted ("Materialized") Views
Our persisted views feature is now supported on Prepare instances using Fabric Lakehouse storage. The feature, also known by its Fabric-native "materialized views" name, allows you to reuse computed results for a performance benefit. More importantly, however, persisted views do not have the same limitations as regular views on Lakehouse storage. They can be used just like you use views in Prepare instances on SQL storage, e.g., in table inserts. This is enabled by the fact that persisted views are stored as delta parquet files in the workspace, while regular views are only available through the SQL endpoint.
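The product's internals aren't shown here, but the performance idea behind persisted views can be sketched in plain Python (all names below are illustrative, not TimeXtender APIs): a regular view is a stored query that every consumer re-runs, while a persisted view stores the computed result itself, so downstream reads hit data directly.

```python
import json
import os
import tempfile

# Illustrative sample data standing in for a source table.
orders = [{"region": "EU", "amount": 10}, {"region": "EU", "amount": 20},
          {"region": "US", "amount": 5}]

def sales_by_region_view():
    """Recomputed on every read, like querying a regular view."""
    totals = {}
    for o in orders:
        totals[o["region"]] = totals.get(o["region"], 0) + o["amount"]
    return totals

# A persisted ("materialized") view stores the computed result itself --
# in Fabric's case as delta parquet files in the workspace; here a plain
# JSON file stands in for that storage.
path = os.path.join(tempfile.gettempdir(), "sales_by_region.json")
with open(path, "w") as f:
    json.dump(sales_by_region_view(), f)  # compute once, persist

with open(path) as f:
    persisted = json.load(f)  # readers consume stored data, not a query

print(persisted)  # {'EU': 30, 'US': 5}
```

Because the persisted result lives as real stored data rather than a query over an endpoint, it can be consumed anywhere a table can, which is why these views escape the limitations of regular views on Lakehouse storage.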
Streaming for Large File Data Sources
Loading large files used to mean memory crashes, multi-hour runs, or splitting files just to get them through. We've reworked how our file connectors handle data, so file size is no longer the blocker it used to be.
- Parquet files in the tens of gigabytes now load in a fraction of the time; what previously took most of a day can finish in under an hour.
- CSV files at multi-gigabyte scale load reliably, without the out-of-memory errors that used to stop them.
- XML and JSON files load steadily regardless of size, so large exports and daily file drops no longer prolong your runs.
If you've been splitting files or scheduling around these limits, you can stop.
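The connectors' implementation isn't public, but the core idea behind streaming ingestion can be sketched in a few lines of Python: read and process the file row by row instead of materializing it in memory, so memory use stays roughly constant no matter how large the file is.

```python
import csv
import io

def stream_csv_rows(fileobj):
    """Yield one parsed row at a time instead of loading the whole file.

    This is the essence of a streaming ingest: memory use is bounded by
    a single row, not by the total file size.
    """
    reader = csv.reader(fileobj)
    header = next(reader)
    for row in reader:
        yield dict(zip(header, row))

# Illustrative only: a small in-memory "file" standing in for a
# multi-gigabyte CSV on disk.
sample = io.StringIO("id,amount\n1,10\n2,20\n3,30\n")

total = 0
for record in stream_csv_rows(sample):
    total += int(record["amount"])  # aggregate without buffering all rows

print(total)  # 60
```

A whole-file approach would instead call something like `fileobj.read()` and parse the result in one pass, which is exactly what fails once the file outgrows available memory.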
Direct Configuration in Metadata Manager
Setting up a data source just got faster. You can now override auto-detected data types directly in the Metadata Manager, no more XSLT workarounds. Marking primary keys is now a checkbox in the same view, so you can review fields and configure keys side by side. And once you've configured one source, you can export those settings and import them into another source of the same type, instead of redoing the work each time.
Keep your Workspace Tidy with the Storage Cleanup Tool for Fabric Lakehouse
The storage cleanup tool now supports Fabric Lakehouse and helps you remove outdated TimeXtender-generated files from your Lakehouse so you can steer clear of the Fabric workspace item limit. As a further improvement to keeping your workspaces tidy, TimeXtender notebooks are now stored in a dedicated 'TimeXtender' workspace folder by default.
Snowflake on AWS as Instance Storage
Snowflake on AWS is now a supported Ingest and Prepare instance storage option, alongside Snowflake on Azure. If your Snowflake account runs on AWS, you can keep your data in-region instead of crossing clouds.
Object-Level Security for Snowflake Prepare
Prepare instances on Snowflake storage now support roles and table-level access, closing the last significant feature-parity gap with SQL Server. Define security roles, assign access rights to specific tables, and let Snowflake enforce them.
Qlik Cloud Spaces Deployment
Deploy directly to Qlik Cloud Shared Spaces from TDI. Pick the target space on the connection in the Portal, and apps land in the right space without manual moves. We've also made a couple of smaller fixes: the Qlik application dropdown is now sorted alphabetically, and "Deploy to text file" settings are no longer shown when the target is Qlik Cloud, where they don't apply.
Hardened MongoDB Enhanced Provider
We've cleaned up several long-standing issues with the MongoDB Enhanced Provider. The _id field is now correctly recognized as the primary key on every load, and tables containing Decimal128 or BigInt values import cleanly instead of breaking the transfer on edge-case values.
Hardened MySQL Enhanced Provider
The MySQL Enhanced Provider has had a similar pass. SSL/TLS connections now work end-to-end, including against MariaDB with Verify CA and Verify Full modes. Query Table metadata now identifies .NET types correctly, removing a class of warnings during sync. And tables with LONGTEXT columns no longer fail with an Int32 overflow - they import like any other text column.
Larger Default Window Sizes for Tasks that Require Extra Space
A handful of windows in the application have "grown up" to open at a larger default size. This includes the default scripting editor used for custom views, stored procedures, and the like, as well as the Execution Log Overview, Preview, Query Tool, Add/Edit Custom Measure, Add/Edit Custom Field, and Add/Edit Execution Package windows. In addition, the Custom Measure and Custom Field windows now have a maximize button. We hope these small changes will save you a few resizes a day.
Fixed
Fabric Lakehouse as Prepare Instance Storage
- Conditional lookups would fail if they used object names with spaces.
- Data type casting in lookups now handles all supported types correctly.
- Lookups against tables with duplicate rows no longer raise primary key violations.
- Transformations that reference fields with their own transformations now resolve correctly.
- Concurrent Prepare transfers from an Ingest Lakehouse no longer collide.
- Default table column value transformations did not work.
- The 'Source Table' column in tables with mapping sets would show "." instead of the actual table name.
- Building the object cache required more API calls than necessary, leading to slower performance.
- Notebook views were not included in data lineage for Fabric Lakehouse.
- SQL custom scripts were not marked as unsupported when switching from SQL to Fabric Lakehouse storage.
Data Sources & Ingest
- Business Central ingests no longer produce duplicated fields in the Dimension Set Entry table.
- Business Central token-handling overhead has been removed, improving throughput.
- Business Central tables with 100+ fields no longer error during ingest.
- SQL Data Source connections to Synapse Dedicated Pool succeed reliably.
- The AX adapter no longer drops accounts in the 010/030 range due to filter logic.
- Enhanced CSV no longer fails when a row is incomplete.
- Azure Blob ingest handles empty and root paths correctly.
- Parquet handling is corrected for special decimal and double cases.
- Preview Table works on Enhanced Data Sources.
- Provider v24 test connections succeed after upgrade.
- Data source auto-update settings persist as configured.
- Ingest instances no longer time out when connecting to data sources.
- Business Central Online ingests no longer hang after metadata changes on the source.
- The JSON & XML connector no longer fails on aggregated JSON transfers.
- Parquet preview now works for Ingest instances using Fabric storage.
Snowflake as Instance Storage
- Incremental load from Ingest storage didn't work for tables with mapping sets.
- Columns with the 'Number(38,0)' data type no longer trigger format exceptions when loading data from a data source into Ingest storage.
Qlik Cloud Deliver Endpoint
- Taking ownership of an existing Qlik Cloud app would fail, which prevented visuals, bookmarks, master measures, and additional scripts from being retained when migrating from older versions.
- Qlik Cloud endpoints now respect the configured QVD folder instead of writing to a default location.
Other
- Incremental loads correctly process primary key updates and deletes.
- Incremental subtraction supports decimal types.
- Deleting large data areas from a Prepare instance would crash the desktop application or cause it to hang. This is now handled properly, with a loading window showing progress.
- In the Execution Service Configuration tool, it was possible to be accidentally signed out, and the Sign-In page would throw an unhandled exception when clicking Next without signing in.
- Vulnerabilities in XBI Server have been patched.
- Improved the Object Dependencies window with a more consistent layout and fixed some rendering issues.