
TimeXtender Orchestration & Data Quality 26.1

Today, we’ve started the rollout of the next version of TimeXtender Orchestration & Data Quality, which contains the changes listed below.

New and improved

Generate Rules with XPilot (Preview)
Data Quality now includes a preview feature where XPilot can analyze a dataset and suggest data quality rules. This helps reduce the time spent creating baseline controls and speeds up delivery of AI-ready data to downstream users and processes. To get started, open “Generate Rules with XPilot” in Data Quality.

Fabric Optimization added to the Azure Cloud Optimization package type
Azure Cloud Optimizer packages now support pausing, resuming, and scaling Fabric capacities. This makes it easier to match capacity to workload timing and manage cost more predictably. See details in “Configuring Azure Cloud Optimizer”.

Azure App Registration as an authentication method
SQL and SSAS endpoints now support Azure App Registration as an authentication method, in addition to SQL logins and (for on-premises) Windows authentication. This adds an option that can be easier to govern and automate in cloud environments. A generic connection sketch is shown at the end of this section.

Orchestration Error Insights with XPilot (Preview)
Orchestration & Data Quality now includes a preview of XPilot Error Insights to help diagnose failures faster. This is useful for common runtime issues such as authentication failures, network problems, and schema changes, where log review can take time. See more information in XPilot Error Insights.

And much more
In addition to the headliners, the release also includes lots of smaller improvements:

Executions in Turnkey/TDP: View running tasks in a workspace, stop one or more tasks, and use Flush Queue to clear stale executions.

Time zone support in Turnkey/TDP: Timestamps now display in the configured time zone across key areas, including datasets, rules, exceptions, process maps, executions, and settings.

Completely revamped User Management in Turnkey/TDP: User management for cloud services is now fully handled in the TimeXtender Data Platform (TDP), and the user administration UI in the desktop client will be disabled. Users and their privileges from Desktop and TDP will be merged and synced. All tasks such as creating users, assigning roles, and adjusting access must be performed in TDP, and changes will then synchronize to the desktop environment where applicable. User group management remains available in the desktop client and continues to work as before, but group-based privileges do not yet affect access or behavior in TDP. This only applies to cloud customers.

Systems synced to workspaces (only applies to cloud customers): From this release, all existing systems are automatically synchronized to workspaces in the TimeXtender Data Platform (TDP). Cloud customers must now manage all workspaces exclusively through TDP, since workspace creation and changes are no longer supported in the legacy desktop experience for cloud customers.

Workspace owners changed to notification users: Existing workspace owners are preserved and updated so that they continue to receive notifications and now also gain explicit administrator access to their workspaces. All users who are currently set as workspace owners will automatically become Administrators on the same workspaces in TDP. The “workspace owner” label is renamed to notification user, which is the user who receives email notifications related to that workspace (for example, execution or exception notifications, depending on configuration).
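To illustrate what authenticating to a SQL endpoint with an Azure App Registration can look like in practice, here is a minimal, generic Python sketch that acquires an Entra access token with a client secret and passes it to the ODBC driver. It is not TimeXtender's implementation; the tenant, client, server, and database values are placeholders, and it assumes the azure-identity and pyodbc packages plus ODBC Driver 18 for SQL Server.

```python
# Minimal sketch: connect to an Azure SQL endpoint with an App Registration.
# Assumes: pip install azure-identity pyodbc, and ODBC Driver 18 for SQL Server.
# All identifiers below are placeholders, not TimeXtender-specific values.
import struct

import pyodbc
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-client-id>",
    client_secret="<client-secret>",
)

# Request a token for Azure SQL Database and pack it the way the ODBC driver expects.
token = credential.get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pre-connection attribute defined by the driver

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<server>.database.windows.net;Database=<database>;Encrypt=yes;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
print(conn.execute("SELECT 1").fetchval())
```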
Improved

Updated graphics for the installer, icons, and splash screen to follow the new look.
In the Portal, a banner with a link now guides users to the TimeXtender Data Platform (Turnkey).
Process title text and position are more uniform across Desktop, Portal, and Turnkey.
Clarified the message shown when executing an empty process: it now explains that the process contains no active tasks and there is therefore nothing to do.
Improved filtering of invalid characters from column names entered by users.
Improved error messages in the execution log for various problems.

Fixed

Removed the "work in progress" panel from the home page in Turnkey.
Users were automatically directed to a disabled module in the Portal when trying to access a specific system.
Clicking Open in Portal as User in the desktop client sometimes opened TDP instead of the Portal.
A Data Transfer package could not be opened if its Execution Connection had been deleted.
Deleted email templates were visible in Turnkey/TDP.
Exception links were not updated to point to the new domain odq.timextender.com instead of exmon.com.
Improved the message that appears when an older version of the desktop client is used to open a service that has been upgraded to a newer version.
Corrected the text shown when removing a Data Provider in Turnkey.
Fixed an issue where the Azure Cloud Optimizer package type did not deallocate virtual machines when stopping them.

Related products: TimeXtender Orchestration & Data Quality

Data source providers r. 2026-01-28

Today, we’ve released updated data source providers along with the new release of TimeXtender Data Integration. See the changes below.

Business Central 365
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Business Central 365 Option Values
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

CSV
Version: 24.3.5.0 (TDI) / 1.9.0 (20.10 BU) / 16.4.23.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Added support for incremental load.

Dynamics 365 Business Central – Online
Version: 24.1.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Dynamics 365 Business Central – SQL Server
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Dynamics 365 Finance – SQL Server
Version: 24.1.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Exact Online
Version: 12.1.0.0 (TDI)
Added support for running on Ingest configured for Snowflake. Fixed a bug where parsing a data type could fail when reading.

Excel
Version: 24.3.2.0 (TDI) / 1.10.0 (20.10 BU) / 16.4.22.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Added incremental load support. Fixed a bug where the culture setting was not applied properly in some cases when reading data.

Hubspot
Version: 12.1.0.0 (TDI)
Added support for running on Ingest configured for Snowflake. Fixed a bug where parsing a data type could fail when reading.

Infor SunSystems
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake. Fixed the query table/tool to use a default underlying table. Removed an incorrect validation of the Template business unit setting so that it can be left empty.

MongoDB
Version: 24.2.0.0 (TDI) / 1.1.0 (20.10 BU) / 16.4.2.0 (20.10 ODX)
New TimeXtender Enhanced data source provider.

MySQL
Version: 24.1.1.0 (TDI) / 1.0.0 (20.10 BU) / 16.4.0.0 (20.10 ODX)
New TimeXtender Enhanced data source provider.

Navision Option Values
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

ODATA
Version: 12.1.0.0 (TDI) / 1.4.1 (20.10 BU) / 16.4.9.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Fixed a bug where parsing a data type could fail when reading.

OneLake Delta Parquet
Version: 24.0.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

OneLake Finance & Operations
Version: 24.1.0.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Oracle
Version: 24.1.4.0 (TDI)
Added support for running on Ingest configured for Snowflake.

Parquet
Version: 24.1.1.0 (TDI) / 16.4.13.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Fixed a bug where no rows in the source could lead to an error.

REST
Version: 12.1.0.0 (TDI) / 1.10.1 (20.10 BU) / 16.4.23.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Fixed a bug where parsing a data type could fail when reading.

Salesforce
Version: 24.0.0.0 (TDI) / 1.0.0 (20.10 BU) / 16.4.0.0 (20.10 ODX)
New TimeXtender Enhanced data source provider.

SQL Database
Version: 24.1.3.0 (TDI)
Added support for running on Ingest configured for Snowflake.

XML/JSON
Version: 24.1.1.0 (TDI) / 1.8.0 (20.10 BU) / 16.4.16.0 (20.10 ODX)
Added support for running on Ingest configured for Snowflake. Added support for culture when reading metadata. Added functionality to support dynamic values in table flattening configuration names.

Related products: Data source providers

TimeXtender Data Integration 7257.1

It's winter here in the northern hemisphere, so of course our new release of TimeXtender Data Integration includes improvements for our Snowflake integration. AI, however, is hot as a summer day, and we're happy to introduce the TimeXtender MCP Server endpoint for connecting your data to AI agents in a simple and secure way. And those are just two of the cool new features in the new release – dive into the full list below.

We’ve updated the initial release (and bumped the version from 7256.1 to 7257.1) to fix two issues, one of which prevented the Execution Service from starting. See the details under Fixed.

New

Connect AI and data with the TimeXtender MCP Server Deliver endpoint (Preview)
The new TimeXtender MCP Server endpoint in the Deliver layer lets you plug tools like ChatGPT and Claude directly into your governed, business-ready semantic models, so AI works in your real business language instead of guessing from raw database schemas. You get faster, more accurate answers because the AI sees clear entities, relationships, and measures such as “Customer Name” and “Revenue YTD”, not cryptic table and column names. Compared to generic MCP servers that probe databases blindly, this server exposes your semantic layer with AI-ready data, read-only least-privilege access, and enterprise authentication, improving reliability and security for AI-driven insights you can trust. This feature is in preview - visit Get Early Access if you want to try it out.

Snowflake as a new storage option in Ingest
We’ve added support for native Snowflake storage in the Ingest instance. Previously, you had to land data in Azure Data Lake before moving it to Snowflake in the Prepare layer. Now, you can cut complexity and cloud costs by running a Snowflake-only architecture. By loading into Snowflake using its native staging and COPY INTO patterns, you follow Snowflake best practices while still benefitting from TimeXtender’s incremental load logic, helping you keep Snowflake usage and execution times under control. A generic sketch of the staging and COPY INTO pattern is shown below.
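For reference, this is roughly what Snowflake's native staging and COPY INTO pattern looks like when written by hand; TimeXtender generates and runs the equivalent statements for you. It is a minimal sketch using the snowflake-connector-python package, and the stage, file, and table names are placeholders rather than objects TimeXtender creates.

```python
# Minimal sketch of Snowflake's native staging + COPY INTO ingestion pattern.
# Assumes: pip install snowflake-connector-python. All names are placeholders;
# TimeXtender manages the equivalent statements as part of the Ingest instance.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
cur = conn.cursor()

# 1) Create an internal stage and upload a local file to it.
cur.execute("CREATE STAGE IF NOT EXISTS ingest_stage")
cur.execute("PUT file:///tmp/customers.parquet @ingest_stage AUTO_COMPRESS=FALSE")

# 2) Bulk-load the staged file into the target table.
cur.execute("""
    COPY INTO customers
    FROM @ingest_stage/customers.parquet
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
print(cur.fetchall())  # COPY INTO returns one status row per loaded file
```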
Snowflake improvements for Prepare storage
We’re adding support for the remaining core TimeXtender features for Prepare instances on Snowflake storage, including custom data, custom hash fields, pre- and post-scripts, related records, checkpoints, hierarchy tables, and integrating existing objects. One notable exception is object-level security and related features, which will be included in an upcoming release. Previously, you had to use a Microsoft SQL variant as storage if you needed these capabilities, since Snowflake storage only supported features on the simple mode level. Now, Snowflake and SQL support roughly the same features, so you can move projects across storage technologies more easily and still follow TimeXtender best practices regardless of which platform you choose.

Select and transfer data between Prepare instances on SQL
You can now select and transfer data between Prepare instances, so you can use one Prepare instance as a data source for another in the same way you already do with Ingest. This is currently supported for Prepare instances using SQL storage. Previously, you were limited to pulling data into Prepare only from Ingest instances, which made it impossible to build more than three logical layers, reuse an external Prepare-based dataset across projects, or design hybrid setups. Now you can drag and drop tables between Prepare instances and combine data from multiple Prepare and Ingest sources into the same table, giving you much more flexibility to design layered, reusable solutions without manual workarounds.

Qlik Cloud Deliver endpoint
We’ve added a new Deliver endpoint for Qlik Cloud, so you can push TimeXtender semantic models directly into Qlik’s SaaS platform instead of only targeting Qlik Sense Enterprise. Previously, you had to rely on workarounds and custom tricks to connect TimeXtender to Qlik Cloud, which added friction, complexity, and extra maintenance for you and your partners. Now you can configure Qlik Cloud as a first-class endpoint, reuse your existing Qlik skills and apps, and keep TimeXtender as your central, governed data and semantic layer while still following Qlik’s recommended APIs and patterns. Note that managed workspaces are currently not supported.

Support for SQL Server 2025
TDI now supports Microsoft SQL Server 2025, which was released in November 2025, as storage.

Improved

Creating new instances is now much faster
We have brought the time it takes to create a new instance down to 3-5 seconds in most cases by pre-provisioning resources. We hope you’ll enjoy the time saved – we sure do!

Decide when data source providers are updated
You can now control when data source providers are updated in Ingest instances. Previously, the system automatically updated all data sources to the latest compatible version when you installed a new version of TDI. This could introduce unexpected behavior, breaking changes, or even downtime in your production environments. Now you decide when and where to update data source providers: you can turn off all automatic updates, keep the existing “always update” behavior, or selectively enable automatic updates per data source.

More efficient primary key storage for incremental loads on SQL
This change reduces duplicate primary key data for incremental loads on SQL, improving storage efficiency. Each primary key row now has a validity range instead of being repeated for every batch. Existing tables are upgraded automatically, and both the old and new structures continue to work when transferring data from Ingest to Prepare. A conceptual sketch of the idea follows below.
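To make the idea concrete, here is a small, purely conceptual Python sketch of collapsing per-batch primary key rows into validity ranges. It is not TimeXtender's schema or code, just an illustration of why storing a range per key is more compact than repeating the key for every batch.

```python
# Conceptual illustration only - not TimeXtender's actual storage format.
# Before: one (key, batch) row per batch the key was present in.
# After: one row per key with a validity range of batches.
from itertools import groupby

per_batch_rows = [
    ("C-1001", 1), ("C-1001", 2), ("C-1001", 3),
    ("C-1002", 2), ("C-1002", 3),
]

def to_validity_ranges(rows):
    """Collapse per-batch rows into (key, first_batch, last_batch) ranges."""
    ranges = []
    for key, group in groupby(sorted(rows), key=lambda r: r[0]):
        batches = [batch for _, batch in group]
        ranges.append((key, min(batches), max(batches)))
    return ranges

print(to_validity_ranges(per_batch_rows))
# [('C-1001', 1, 3), ('C-1002', 2, 3)] - 2 rows instead of 5
```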
User experience improvements
In addition to the bigger items, we’ve also included a few smaller improvements to the user experience. The Query Tool window has an updated layout with a word wrap option. And, building on the work done in our last major version, data lineage performance has been improved. In addition to that, the SQL Server Cleanup tool can now resolve the names of other instances in the storage that would previously be shown as ‘Unknown TX object’.

Deprecated

Dedicated SQL Pool (formerly SQL DW) deprecated as Prepare storage
We’ve supported Dedicated SQL Pool as data warehouse/Prepare instance storage since 2019, but we’ve seen very little usage. For that reason, we’ve decided to deprecate support for it so that we can focus our effort on more promising storage technologies.

Fixed

TDI Portal
Users would sometimes accidentally be logged out of the TDI portal.
The SQL connection string in the additional parameters for SQL 2022 and Dedicated SQL Pool was not correctly validated in the TDI portal during save.
The company details would occasionally show an error instead of company address data.
Fixed a cosmetic issue where the secret field for Azure Data Lake Storage was unintentionally set to blank after saving.

TimeXtender Data Integration
In Prepare instances using Fabric storage, incremental load with deletes would, in some cases, return duplicate records.
It was possible to create and deploy two tables with the same name and schema - the last table to be deployed would simply “win”. Now, this results in a validation error on deployment.
The Query Tool would show datetime values as dates.
In some cases, the proxy setting was not applied when making API requests to the TimeXtender web services.
It wasn’t possible to generate documentation on Ingest instances.
We’ve fixed a number of issues that would pop up when using Snowflake Prepare instance storage: Dragging a table to the Views node to create a view would create a view where the FROM statement was empty. Conditional lookups would return incorrect values when the ‘Merge conditional lookups’ option was set to the default ‘Merge all if possible (fastest)‘. Data selection rules didn’t work. The application would attempt to cast NULL to datetime2, which is not a valid data type in Snowflake.
(v. 7257.1) Running a scheduled TDI execution package in Orchestration could cause the error “The process is unresponsive. Failed to read from process after retrying 3 times,” especially when many executions ran simultaneously. We have fixed an issue in the communication between the Execution Service and the main TDI application to resolve this.
(v. 7257.1) An issue with the Execution Service configuration file prevented the service from starting because it couldn't load a required DLL.

Related products: TimeXtender Data Integration, TimeXtender Data Integration Portal

TimeXtender Data Integration 7150.1 - 7158.1

We’ve published three minor releases of TimeXtender Data Integration as follow-ups to the last major release. To give you a better overview of the changes included if you upgrade today, we’ve listed them all below. We recommend that you upgrade if you’re affected by any of the issues that have been fixed.

7158.1 (22 Oct 2025)
Fixed an issue where the direct read stored procedure would be wrong when the source mapping is a view.

7157.1 (21 Oct 2025)
On Fabric Lakehouse Prepare instance storage, the execution timeout per cell was fixed at 600 seconds, which was too short in some cases. You can now configure the cell execution timeout by changing the command timeout for the storage in the Portal.
Fixed a long-standing issue, present since the initial release, where records with negative DW_Id values (e.g., -999, -998) were not transferred correctly between tables. This fix ensures all valid rows, including those with negative identifiers, are now properly processed during data warehouse transfers.

7150.1 (15 Oct 2025)
Fixed an issue where data of the SqlDecimal type containing null values would fail when transferring to Ingest storage using Parquet files as the output.
Fixed an issue where selection rules didn't work as expected on a Prepare Lakehouse.
Fixed an issue where re-deploying measures and hierarchies in Qlik would time out.
Fixed an issue where deploying a Tabular model to Azure Analysis Services would fail with a message saying a DLL was missing.
Fixed an issue where a subquery was used to get primary keys for incremental load even when subqueries are disabled for query tables.
Fixed primary key upload during incremental load with "handle deletes" enabled for data lake.

Related products: TimeXtender Data Integration

TimeXtender Data Enrichment 25.2

Today, we’ve started the rollout of the next release of TimeXtender Data Enrichment, v. 25.2, which contains the changes listed below.

New

TimeXtender Master Data Management has been renamed to TimeXtender Data Enrichment.
Web URLs have been updated to use “timextender” instead of “exmon”. To access the web application, users will now go to {customer}.de.timextender.com instead of {customer}.exmon.com. Our login page login.exmon.com has been replaced with login.timextender.com, and even though login.exmon.com will continue to work, we encourage you to use login.timextender.com from now on.

Improved

The overview in User Configuration now shows the users’ email addresses.
Better error message when the app registration for SSO did not have the ‘Group.Read.All’ permission and a user tried to search for Entra groups in User Configuration.
Added support for more data types when importing a table from a database.
When clicking save after opening Database User Configuration, all users get Select permission on the ‘exmondm’ schema and Execute permission on the ‘exmondm.GetHierarchyValue’ function.
Emails use ‘TimeXtender Data Enrichment’ instead of ‘Master Data Management’.
The install file has been renamed to ‘TimeXtenderDESetup’.
The strings "[None]" and "[Fixed Value]" are now defined as constants instead of being written manually throughout the codebase.
Using new icons and graphics.

Fixed

Fixed a bug where the Portal skipped the "Is Required" check when doing validation.
Lookup columns with a custom setup could not be displayed in the web application.
Users could not import into a table with a Large Number column in the web application.
Fixed a bug where importing data into a table with a "Large Number" data type did not work.
Added more support for Availability Group connections.
Small fixes to the New User UI for on-premises users.
Saved views did not work in the web application if they were filtering on a blank item list value.
During import, primary key columns now use the Unique Key action, read-only columns use the Ignore action, and all other columns use the Update action.
During import, the Action column is no longer text-editable and works as a drop-down selector.
When the network connection is lost while Data Enrichment is open (e.g., during Windows suspension), the SQL connection error is now logged with log4net in install-folder\Log4NetDataEnrichment.txt.
Fixed a few bugs affecting percentage-formatted decimal columns: min/max values for constraints were not always saved properly, importing data into a column did not always work, and comparisons sometimes used wrong values.
On table import, the column action selection now functions the same way in web and desktop, and on web it is no longer possible to edit the combobox value.
Fixed all previously failing unit tests.
For lookup columns, switching from Popup Window ‘Custom Visible Columns’ to ‘Use Display Value’ re-adds ‘Custom Display Value’ to the visible columns so that the combobox can display the values (and not the data type).
Fixed a bug where the Project, Lookup Table, and Key Column values were not always saved correctly for Hierarchy Lookup Attributes.
Fixed a bug where the scrollbar was not visible when giving permissions/privileges to a user.
The Microsoft and Google icons were cropped on the login page.
Users were unable to authorize a Fivetran connection.
Better error message when there is an error finding Entra groups.
Adding a new user/group in User Configuration while a filter is applied no longer causes errors.
Changing a column's name and then changing its type from normal to item list in the table designer no longer causes issues.

Related products: TimeXtender Data Enrichment

TimeXtender Data Integration 7142.1

It's almost like you cannot say fall without Fabric. At least this October release of TimeXtender Data Integration (Desktop v. 7142.1) has Microsoft Fabric features as the headliner, along with a new logging system for Ingest and a bunch of big and small fixes and improvements. Dive in below!

New and improved

Fabric Lakehouse on Prepare gets views, shortcut tables and much more
With this release, we're taking a major step towards full support for Microsoft Fabric Lakehouse as Prepare instance storage. Validations, history tables, table inserts, custom data, custom hash fields, and junk dimensions are now supported on Fabric.

On the programmability side of things, you can now add Notebook views, a new type of custom view that is deployed in a Fabric notebook and can be referenced and used in custom scripts. Regular custom views are also supported through the Fabric storage's SQL endpoint.

Utilizing the OneLake shortcut feature, the Fabric flavor of the enable/disable physical valid table option is now available. Disabling the valid table creates a shortcut table where data is loaded directly from the Ingest instance. This saves execution time and storage space, but on the flip side, the table in Prepare must be 100% identical to the source table in Ingest - no transformations allowed.

As a final improvement, Fabric instances now support service principal authentication, so a non-MFA user is no longer required.

Redesigned Ingest logging
We've taken a fresh look at logging in the Ingest instance in order to make it more useful and more robust. It includes more thorough logs, more options, an improved UI and, last but not least, file-based logging.

Logs are now stored in the local file system and not in the Ingest instance, which is handy when you're investigating, e.g., issues with connecting to the instance in the cloud. It also makes them immediately available if you want to use them for analysis purposes. To aid in that, we're using the open W3C Extended Log File Format for the logs; a small parsing sketch is shown below.

When setting up the Ingest service, you can configure the log retention in days, as well as the maximum file size of the logs, to keep the logs from consuming too much storage space. You can also select the minimum severity level for logging to the log file as well as to the Windows event log.

The old logging system has been deprecated, but it is still available for the time being when Show Deprecated Features is enabled in the View menu.
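If you want to analyze the new log files programmatically, this is a minimal Python sketch of reading a W3C Extended Log File Format file. The directives (#Version, #Fields) are part of the published format, but the file path and field names below are assumptions for illustration; check the actual #Fields directive in the files the Ingest service writes.

```python
# Minimal sketch: read a W3C Extended Log File Format file into dictionaries.
# The path and field names are assumptions for illustration; the real fields
# are listed in the #Fields directive of the log files written by the service.
import csv
from pathlib import Path

def read_w3c_log(path):
    fields, rows = None, []
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            if line.startswith("#"):
                # Directives such as #Version, #Date, and #Fields describe the file.
                if line.startswith("#Fields:"):
                    fields = line[len("#Fields:"):].split()
                continue
            if fields is None:
                continue  # skip data lines that appear before a #Fields directive
            values = next(csv.reader([line], delimiter=" "))
            rows.append(dict(zip(fields, values)))
    return rows

for entry in read_w3c_log("IngestService.log"):        # assumed file name
    if entry.get("severity") == "Error":               # assumed field name
        print(entry.get("date"), entry.get("time"), entry.get("message"))
```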
Power BI endpoints now support Direct Lake
The Power BI endpoint in Deliver now supports Direct Lake on SQL and Direct Lake on OneLake, which facilitates much faster data access. This is especially useful when you have really large amounts of data.

Control access to instance blueprints
Based on partner feedback, we've updated the access controls for instance blueprints. Partners can now control which customers are granted access to specific shared blueprints. The initial implementation worked more like a "shared folder" where all blueprints were available to all customers. Note: As a partner, you must update your existing blueprints to grant access to the customers that should have access, since the new default is no access.

And much more
In addition to the headliners, the release also includes a bunch of smaller improvements:

Snowflake storage now supports key-pair authentication in preparation for Snowflake's requirement for multi-factor authentication for all users, which kicks in by November. A generic connection sketch follows this list.
In the Ingest instance, we've changed the name of the default transfer task to "Load Data" and added a second default task called "Load Data (Full Load)". Previously, the default task was called "Full Load" but would actually use incremental load if it was available.
The Ingest Service Configuration can now import the configuration of a previously installed version, making upgrading a bit easier.
In the Portal, we've made deleting data source connections easier. You'll now see which instances the data source is mapped to, and those mappings will be deleted along with the connection. Previously, you'd have to delete all mappings before you could delete the data source connection.
We've tuned the data lineage queries to give you better performance when you want to view data lineage for an object.
REST data sources now make URL encoding of query parameters optional.
ODATA discovery now includes certificates (PFX/PEM) for authentication.
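For reference, connecting to Snowflake with key-pair authentication generally looks like the following minimal Python sketch using the snowflake-connector-python and cryptography packages. The file path, account, and user are placeholders, and TimeXtender only needs the key configured on the storage; this is just a generic illustration of the mechanism.

```python
# Minimal sketch of Snowflake key-pair authentication (generic, not TimeXtender-specific).
# Assumes: pip install snowflake-connector-python cryptography, and a private key
# whose public key has been registered on the Snowflake user (ALTER USER ... SET RSA_PUBLIC_KEY).
import snowflake.connector
from cryptography.hazmat.primitives import serialization

with open("/path/to/rsa_key.p8", "rb") as key_file:
    private_key = serialization.load_pem_private_key(key_file.read(), password=None)

# The connector expects the key as DER-encoded PKCS#8 bytes.
private_key_der = private_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    private_key=private_key_der,
    warehouse="<warehouse>",
)
print(conn.cursor().execute("SELECT CURRENT_USER()").fetchone())
```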
Fixed

Desktop
Fixed freezing UI and missing feedback when generating End-to-End Tasks and Packages.
Fixed an issue in the Deploy and Execute dialog that could cause an out-of-memory exception when there were deployment and/or execution steps.
Fixed an issue causing an 'Unauthorized' error.
Fixed an issue where very large error messages could cause TDI to not communicate correctly with Orchestration.
Zoom in/out now behaves correctly in the data lineage window.
Fixed an issue where data lineage would find old relations and potentially make the search very large.
Changed some misleading labels in the Primary Keys and Synchronize windows.
Added an option to disable "windowed incremental load" for ADO and OleDB data sources to make incremental load work for specific ODBC sources.
Fixed an issue in the remapping dialog in the Metadata Manager where sorting the columns would cause an "Index out of range" error and close the dialog.
Fixed an issue with installing new data source versions in a multi-threaded environment.
Fixed an issue where ODBC/ANSI syntax for ADO/OleDB ignored the incremental subtract value.
Search & replace in REST now saves empty replace values as empty strings instead of null.
Fixed errors with data type conversions on data area transfers using Fabric Prepare.
Fixed an issue with custom transformations and transformations on lookup fields returning null on Fabric Prepare.
Fixed an issue with empty date tables when working with multiple schemas on Prepare instances with Fabric storage.
Fixed an issue with selection rules not working after renaming a field on Prepare instances with Fabric storage.
Fixed an issue where the stored procedure used for direct read between data areas could become too long.
Fixed an issue where adding and deleting a table mapping in a data area could crash the application.
Fixed an issue where the SQL endpoint in a Prepare instance could cause slow UI updates.
Fixed an issue where renaming a table whose field is used as a lookup field on a conditional lookup would not mark the table containing the conditional lookup field as modified.
Fixed an issue where deploying a history table with a dot in the table name would fail.
Custom selection rules in Deliver instances would lose variables when closing the project.
Fixed an issue where the order of the columns from a Prepare instance would change.
Made sure the Add Calculation Group and Edit Column Description dialogs have a minimum height and only vertical scrollbars.
Fixed an issue where the semantic endpoint would resolve the Endpoint Name parameter incorrectly on execution.
Fixed an issue where Power BI endpoint deployment with RLS failed with the error "AD Service Principal Authentication is not supported with this SQL Server version".
Fixed an issue where deploying a Prepare instance could crash the application.
Fixed an issue with parsing SqlDecimal data in Parquet files in Ingest.
Fixed an issue where fields on Deliver instances could not be deleted if they contained a custom data selection rule.

Portal
Fixed a duplicate/incorrect log entry on user (contact person) deletion.
Fixed an issue where activity logs required a refresh to show new entries.
Fixed an issue where users were added despite invitation failure; failures now return proper error messages.
Fixed the activity log order for data source creation with simultaneous Ingest mapping.
Standardized table design across most of the Portal.
Fixed an issue where Double value types were not correctly interpreted in data source connection forms, leading to a failure to save.
Fixed an issue where the "Use Microsoft Entra members in Ingest instance security roles" checkbox would automatically reselect itself after being deselected and saved on Ingest instances with Azure SQL storage.

Related products: TimeXtender Data Integration, TimeXtender Data Integration Portal

Data source providers r. 2025-10-06

Today, we’ve released updated data source providers. See the changes below.

Azure Data Factory - Oracle
Version: 17.5.1.0 (TDI) / 10.4.5.0 (20.10 ODX)
Switched to use the v2 connector in ADF, since the previous one is being deprecated.

Azure Data Factory - PostgreSQL
Version: 17.2.0.0 (TDI) / 10.4.5.0 (20.10 ODX)
Switched to use the v2 connector in ADF, since the previous one is being deprecated.

CSV
Version: 23.15.4.0 (TDI) / 1.7.0 (20.10 BU) / 16.4.16.0 (20.10 ODX)
Added support for handling multiple metadata URIs. Added trimming of long column names. Added culture settings when parsing numbers for metadata. Added support for FTPS locations. Changed pattern matching for files to be case insensitive. Fixed an issue where numbers with one decimal digit were parsed incorrectly in some cases. Fixed an issue with metadata URI handling for SharePoint locations.

Dynamics 365 Business Central - Online
Version: 23.1.0.0 (TDI)
Fixed an issue where the subtract value was not applied during incremental load.

Exact Online
Version: 11.2.0.0 (TDI)
Added support for incremental loading (TDI only). Added better handling of JSON edge cases. Added support for Azure App Registration with certificate. Added a fallback strategy to handle invalid characters in XML if it fails to read it. Added an option to not URL encode query parameters. Added handling for non-standard Authorization header values. Fixed the error message for timeouts; it now shows text indicating that the request timed out. Fixed an issue when a data type is overridden in TDI. Fixed some scaling issues in the REST dialog for BU/ODX. Fixed an issue with the table builder when 'Only list flattened tables' is enabled where it would not return the correct schema.

Excel
Version: 23.16.1.0 (TDI) / 1.6.1 (20.10 BU) / 16.4.17.0 (20.10 ODX)
Added support for handling multiple metadata URIs. Added trimming of long column names. Added support for FTPS locations. Changed pattern matching for files to be case insensitive. Improved file aggregation logic. Fixed the Excel engine to allow for more files in the metadata URI. Fixed an issue with 'Treat Empty as Null' for table definitions. Fixed an issue with metadata URI handling for SharePoint locations. Removed duplicate file aggregation pattern settings.

Hubspot
Version: 11.2.0.0 (TDI)
Added support for incremental loading (TDI only). Added better handling of JSON edge cases. Added support for Azure App Registration with certificate. Added a fallback strategy to handle invalid characters in XML if it fails to read it. Added an option to not URL encode query parameters. Added handling for non-standard Authorization header values. Fixed the error message for timeouts; it now shows text indicating that the request timed out. Fixed an issue when a data type is overridden in TDI. Fixed some scaling issues in the REST dialog for BU/ODX. Fixed an issue with the table builder when 'Only list flattened tables' is enabled where it would not return the correct schema.

Infor SunSystems
Version: 23.2.0.0 (TDI)
Added support for the force Unicode option.

ODATA
Version: 11.2.0.0 (TDI) / 1.3.0 (20.10 BU) / 16.4.7.0 (20.10 ODX)
Added support for certificates in metadata discovery. Added support for incremental loading (TDI only). Added better handling of JSON edge cases. Added support for Azure App Registration with certificate. Added a fallback strategy to handle invalid characters in XML if it fails to read it. Added an option to not URL encode query parameters. Added handling for non-standard Authorization header values. Fixed the error message for timeouts; it now shows text indicating that the request timed out. Fixed an issue when a data type is overridden in TDI. Fixed some scaling issues in the REST dialog for BU/ODX. Fixed an issue with the table builder when 'Only list flattened tables' is enabled where it would not return the correct schema.

OneLake Delta Parquet
Version: 23.4.0.0 (TDI)
Fixed an error ingesting metadata when the Lakehouse doesn't support schemas.

OneLake Finance & Operations
Version: 23.5.0.0 (TDI)
Fixed an error ingesting metadata when the Lakehouse doesn't support schemas.

Parquet
Version: 23.13.1.0 (TDI) / 1.5.0 (20.10 BU) / 16.4.12.0 (20.10 ODX)
Added logging of file names. Added support for handling multiple metadata URIs. Added trimming of long column names. Added support for FTPS locations. Changed pattern matching for files to be case insensitive. Fixed an issue with metadata URI handling for SharePoint locations.

REST
Version: 11.2.0.0 (TDI) / 1.9.0 (20.10 BU) / 16.4.21.0 (20.10 ODX)
Added support for incremental loading (TDI only). Added better handling of JSON edge cases. Added support for Azure App Registration with certificate. Added a fallback strategy to handle invalid characters in XML if it fails to read it. Added an option to not URL encode query parameters. Added handling for non-standard Authorization header values. Fixed the error message for timeouts; it now shows text indicating that the request timed out. Fixed an issue when a data type is overridden in TDI. Fixed some scaling issues in the REST dialog for BU/ODX. Fixed an issue with the table builder when 'Only list flattened tables' is enabled where it would not return the correct schema.

SQL Server
Version: 23.1.0.0 (TDI) / 1.1.0 (20.10 BU) / 16.4.2.0 (20.10 ODX)
Added support for the force Unicode option.

XML/JSON
Version: 23.12.0.0 (TDI) / 1.6.0 (20.10 BU) / 16.4.13.0 (20.10 ODX)
Added support for FTPS locations.

Related products: Data source providers

TimeXtender Orchestration & Data Quality 25.2

Today, we’ve started the rollout of the next release of TimeXtender Orchestration & Data Quality, which contains the changes listed below.

New

Added support for setting the number of times a failed package should retry.
New schedule type, ‘Continuous’, that starts a new execution right after the previous execution finishes.

Improved

Redesigned the start page and fixed a bug where buttons/links did not always work.
Improved the user experience and execution for Data Transfer merge.
Added clearer error messages for the execution of Azure package types.
Improved the error message shown when running an invalid Data Factory package.
Removed the Subscription field from the Power BI Refresh package type UI, as it is no longer required.
Web URLs have been updated to use ‘timextender’ instead of ‘exmon’ (e.g., https://dev.odq.timextender.dev/cmdservice/ExpectusCommandService.svc).
In the Gateway desktop client, some paths in the ‘ExmonClientConfig’ file are changed to ‘timextender’ (from ‘exmon’) during service startup.
Increased the length of the data fields in Azure Data Providers so that Tenant IDs and App IDs are fully visible.
The flow for sending emails from ODQ has been simplified.
The default timeout for the TDI package type is 6 hours and is now properly indicated in the TDI package UI in the Desktop client.

Fixed

Fixed a bug where compare query previews sometimes showed the wrong number of variance errors.
Fixed a bug in the database initialization step of the on-premises install.
Fixed a bug where upgrading the Command Service failed when upgrading on-premises installations.
Fixed an issue where Schedule Groups would show an incorrect icon for the Databricks package type.
Fixed an issue where Active Directory queries in ORC would not work if they returned 0 rows.
Fixed an issue where Azure Function packages would not check the validity of credentials before starting execution.
Fixed an issue with the duplication and renaming of packages.
Fixed an issue with the duplication of Ingest packages.
Fixed an issue with VM names disappearing and not being saved in Cloud Optimization packages.
Fixed an issue where trying to create a Fabric package without any Fabric Data Providers present resulted in an "index out of range" error.
Fixed an issue with saving Properties changes in packages.
Improved error messages when Azure packages fail without any further information about the problem being sent from Azure.
In the ODQ Portal, the ETA in the process popup now handles time zone differences between the local machine and the time zone setting in ODQ Desktop.
Fixed an issue where images were not displayed correctly in emails.
Fixed an issue where pressing sync for a TDI Data Provider in Turnkey would time out.
Better error message when there is an issue finding Entra groups.
Fixed an issue where TDI packages would not sync correctly to object groups.
Fixed an issue with creating Entra groups and new users in ODQ Desktop.
Fixed an issue where execution of Databricks packages would not work.
Fixed an issue with duplication of Azure Cloud Optimizer packages where the Capacity option would be disabled in the duplicated package.
Fixed an issue where the Gateway would try to log control characters such as ESC via XML serialization.

Related products: TimeXtender Orchestration & Data Quality

Data source providers r. 2025-06-04

On 4 June, we made a hotfix release with the changes listed below.

CSV
Version: 23.5.3.0 (TDI) / 1.1.5 (20.10 BU) / 16.4.6.0 (20.10 ODX)
Fixed an issue with SharePoint when reading more than one file (“Cannot access disposed object”).

Exact Online
Version: 10.2.0.0 (TDI)
Fixed an issue with the ‘Set empty fields as null’ feature where the null was applied to the wrong dataset. Fixed an issue where datetime was parsed into the local time format instead of UTC.

Excel
Version: 23.7.0.0 (TDI) / 1.1.5 (20.10 BU) / 16.4.7.0 (20.10 ODX)
Fixed an issue with SharePoint when reading more than one file (“Cannot access disposed object”). Fixed an issue where Excel was trying to process unrelated files and failing.

Hubspot
Version: 10.2.0.0 (TDI)
Fixed an issue with the ‘Set empty fields as null’ feature where the null was applied to the wrong dataset. Fixed an issue where datetime was parsed into the local time format instead of UTC.

ODATA
Version: 10.2.0.0 (TDI)
Fixed an issue with the ‘Set empty fields as null’ feature where the null was applied to the wrong dataset. Fixed an issue where datetime was parsed into the local time format instead of UTC.

Parquet
Version: 23.6.1.0 (TDI) / 1.0.5 (20.10 BU) / 16.4.5.0 (20.10 ODX)
Fixed an issue with SharePoint when reading more than one file (“Cannot access disposed object”).

REST
Version: 10.2.0.0 (TDI) / 1.2.4 (20.10 BU) / 16.4.8.0 (20.10 ODX)
Fixed an issue with the ‘Set empty fields as null’ feature where the null was applied to the wrong dataset. Fixed an issue where datetime was parsed into the local time format instead of UTC.

XML/JSON
Version: 23.4.0.0 (TDI) / 1.0.5 (20.10 BU) / 16.4.5.0 (20.10 ODX)
Fixed an issue with SharePoint when reading more than one file (“Cannot access disposed object”).

Related products: Data source providers