
Data source providers r. 2025-06-04

On 4 June, we made a hotfix release with the changes listed below.

CSV
Version: 23.5.3.0 (TDI) / 1.1.5 (20.10 BU) / 16.4.6.0 (20.10 ODX)
- Fixed an issue with SharePoint when reading more than one file ("Cannot access disposed object").

Exact Online
Version: 10.2.0.0 (TDI)
- Fixed an issue with the 'Set empty fields as null' feature where the null was applied to the wrong dataset.
- Fixed an issue where datetimes were parsed in the local time format instead of UTC.

Excel
Version: 23.7.0.0 (TDI) / 1.1.5 (20.10 BU) / 16.4.7.0 (20.10 ODX)
- Fixed an issue with SharePoint when reading more than one file ("Cannot access disposed object").
- Fixed an issue where Excel would try to process unrelated files and fail.

HubSpot
Version: 10.2.0.0 (TDI)
- Fixed an issue with the 'Set empty fields as null' feature where the null was applied to the wrong dataset.
- Fixed an issue where datetimes were parsed in the local time format instead of UTC.

OData
Version: 10.2.0.0 (TDI)
- Fixed an issue with the 'Set empty fields as null' feature where the null was applied to the wrong dataset.
- Fixed an issue where datetimes were parsed in the local time format instead of UTC.

Parquet
Version: 23.6.1.0 (TDI) / 1.0.5 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Fixed an issue with SharePoint when reading more than one file ("Cannot access disposed object").

REST
Version: 10.2.0.0 (TDI) / 1.2.4 (20.10 BU) / 16.4.8.0 (20.10 ODX)
- Fixed an issue with the 'Set empty fields as null' feature where the null was applied to the wrong dataset.
- Fixed an issue where datetimes were parsed in the local time format instead of UTC.

XML/JSON
Version: 23.4.0.0 (TDI) / 1.0.5 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Fixed an issue with SharePoint when reading more than one file ("Cannot access disposed object").

Related products: Data source providers

TimeXtender Data Integration 7017.1

Spring has turned to summer and we're celebrating with a new release of TimeXtender Data Integration (desktop v. 7017.1). When you open the desktop application, you'll notice a refreshed look, but we've also implemented a ton of improvements under the hood. See all the news below.

New

Refreshed desktop UI
We've refreshed the design of the desktop UI with refined theme colors and a new, but less prominent, blue accent color. As another UI improvement, we've streamlined the names and order of the show/hide options - 'show data types', 'highlight descriptions', etc. - in the View menu. We've also saved many users a few regular clicks by enabling them all by default.

Choose where in the world your metadata is stored
You can now choose a metadata storage region for your organization that specifies where in the world your new instances will be created. Current options are West Europe (default), Central US, and South East Asia. Choose the region closest to you for the best TDI experience.

Improved

One-step Fabric execution
Introduced one-step executions for Prepare instances on Microsoft Fabric, where executions are now automatically added to an execution plan and processed in a single step.
Warning: If you're using Microsoft Fabric for Ingest or Prepare storage, make sure the Spark Runtime Version is set to 1.2 in your Fabric Workspace settings. We're working on support for Runtime version 1.3, which is the new default in Fabric.

Performance improvements for Ingest instances
We've improved the indexes on the tables in the Ingest instance repository database for better query performance, and improved and optimized various Ingest repository database queries to reduce data load and increase speed.

Updated data sources
Along with this release of TDI, we've released new, updated versions of our data source providers. Among the changes for the REST-based providers - Exact Online, HubSpot, etc. - are support for certificates as authentication and global table flattening. For the providers for static files - CSV, Excel, etc. - we've fixed a few bugs, including an issue with connecting to Azure Blob Storage. For more information, see the full release notes.

Copy scripts between Tabular and PowerBI
You can now copy all custom measure scripts in a Deliver instance from Tabular to PowerBI and vice versa. For context, custom measures have a script for each endpoint type, but PowerBI and Tabular share the same DAX syntax with few exceptions. For that reason, migrating from Tabular to PowerBI endpoints could entail copy-pasting hundreds of scripts. Since that's no fun, we implemented this little shortcut.

Improved Ingest Service Configuration tool
The Ingest Service Configuration tool now automatically imports deprecated 'Managed ADO.NET' data source providers from the default component folder used in previous installations of the Ingest service (known as 'ODX SaaS' before v. 6744.1). This change eliminates manual steps in the upgrade process for users of these data source providers, as they are no longer available for download from our repository.

Fixed

TDI Portal
- The Instances page now loads significantly faster if you have a lot of instances.
- 'Send sign-in invitation' would sometimes fail due to password restrictions.
- Adding a Microsoft or Google account as a login option would fail.
- Fixed a bug on the Deliver Qlik endpoint causing the wrong settings to be shown in the Authentication section.

TDI Desktop
- When an execution task in an Ingest instance completes with no tables included, it will now set the state to 'Complete with Warnings'.
- Fixed an issue where executing an execution task in an Ingest instance while using a case-sensitive SQL Storage database would fail when listing existing SQL objects.
- Fixed an issue where executing an execution task in an Ingest instance while using a case-insensitive SQL Storage database would fail for a schema when an unused version of the schema with the same name in a different casing exists.
- Fixed a memory leak in the metadata manager.
- Improved the UI performance of the metadata manager.
- Fixed an issue in the REST data source where a renamed table would not be mapped to its old name properly.
- Fixed an issue with synchronizing a Prepare instance with an Ingest instance where the loading animation would disappear before the synchronization logic had applied all the changes, causing the UI to freeze.
- Fixed various issues with Prepare instances on Fabric storage, including errors when capacity is turned off, problems with conditional lookups, issues when transferring tables from a TimeXtender F&O data source, and notebook syntax errors with some aggregate functions.
- The data cleansing procedure included NULL checks on the underlying fields of a supernatural key. This has been fixed, and Custom Hash fields are no longer null-checked.
- In some cases, the repository would block execution of an execution package because of a deadlock issue. This could happen when multiple execution packages were scheduled to run at the same time.
- Fixed an issue with the Integrate Existing Objects feature where the simple mode option on newly created data areas would have invalid settings.
- Fixed a misaligned information icon in the Table Settings window.
- Fixed a bug that prevented column values exceeding 43,679 characters from being displayed in the table preview, and also caused the query tool to throw an exception when result values exceeded 32,000 characters.
- Fixed an issue where Synchronization with Remapping on Deliver instances would show an error.
- Resolved an issue with Qlik endpoints using certificate authentication, where specifying a non-existent certificate would result in a null reference exception.
- Upgraded the Qlik SDK to version 16.9 to ensure compatibility with the latest Qlik Sense release.
- Fixed an issue where adding fields to a table in the Deliver instance by dragging them from the Data Movement pane would position them below the Relations node.
- Fixed an issue where Generate End-to-End Tasks and Packages would fail if the flow included an Ingest instance that was not open in TDI.
- Fixed an issue that would cause the error "Execution package [name] is already running" when executing a migrated package in multiple environments at the same time.

Related products: TimeXtender Data Integration, TimeXtender Data Integration Portal

Data source providers r. 2025-06-03

Today, we've released updated data source providers. See the changes below.

CSV
Version: 23.4.3.0 (TDI) / 1.1.4 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Fixed a bug where connecting to Azure Blob Storage did not work.
- Fixed a bug where Skip Top would not apply to all aggregated files.

Exact Online
Version: 10.0.0.0 + 9.5.0.0 (TDI)
- Added support for certificates.
- Added support for setting a culture when interpreting data types.
- Added support for global table flattening.
- Changed the override headers behavior: instead of removing all headers, it now only replaces the headers defined in the list. To remove a header, add it with an empty value.
- Fixed a bug where running in parallel could produce duplicate headers for authentication.

Excel
Version: 23.5.0.0 (TDI) / 1.1.3 (20.10 BU) / 16.4.5.0 (20.10 ODX)
- Improved logging when reading files, making it easier to track down problematic files.
- Fixed a bug where connecting to Azure Blob Storage did not work.
- Fixed a bug where having a '.' in a folder name would cause the provider to try to read the folder as a file.

HubSpot
Version: 10.0.0.0 + 9.5.0.0 (TDI)
- Added support for certificates.
- Added support for setting a culture when interpreting data types.
- Added support for global table flattening.
- Changed the override headers behavior: instead of removing all headers, it now only replaces the headers defined in the list. To remove a header, add it with an empty value.
- Fixed a bug where running in parallel could produce duplicate headers for authentication.

OData
Version: 10.0.0.0 + 9.5.0.0 (TDI)
- Added support for certificates.
- Added support for setting a culture when interpreting data types.
- Added support for global table flattening.
- Changed the override headers behavior: instead of removing all headers, it now only replaces the headers defined in the list. To remove a header, add it with an empty value.
- Fixed a bug where running in parallel could produce duplicate headers for authentication.

Parquet
Version: 23.5.1.0 (TDI) / 1.0.4 (20.10 BU) / 16.4.4.0 (20.10 ODX)
- Fixed a bug where connecting to Azure Blob Storage did not work.
- Fixed a bug where loading data from a file with multiple row groups would not work.

REST
Version: 10.0.0.0 + 9.5.0.0 (TDI) / 1.2.2.0 (20.10 BU) / 16.4.6.0 (20.10 ODX)
- Added support for certificates.
- Added support for setting a culture when interpreting data types.
- Added support for global table flattening.
- Changed the override headers behavior: instead of removing all headers, it now only replaces the headers defined in the list. To remove a header, add it with an empty value.
- Fixed a bug where running in parallel could produce duplicate headers for authentication.
- Fixed a bug where the preview table did not work in a Business Unit.

XML/JSON
Version: 23.3.0.0 (TDI) / 1.0.4 (20.10 BU) / 16.4.4.0 (20.10 ODX)
- Fixed a bug where connecting to Azure Blob Storage did not work.
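The new override headers behavior described above - headers in the list replace the matching defaults, an empty value removes a header, and unlisted defaults are kept - can be sketched roughly like this. This is an illustrative sketch, not the providers' actual code; the function and field names are made up:

```python
def apply_header_overrides(defaults: dict, overrides: dict) -> dict:
    """Merge override headers into the defaults (illustrative sketch)."""
    merged = dict(defaults)            # unlisted default headers are kept
    for name, value in overrides.items():
        if value == "":
            merged.pop(name, None)     # an empty value removes the header
        else:
            merged[name] = value       # a listed header replaces the default
    return merged

defaults = {"Accept": "application/json", "User-Agent": "tx-rest"}
overrides = {"Accept": "application/xml", "User-Agent": ""}
print(apply_header_overrides(defaults, overrides))
# {'Accept': 'application/xml'}
```

Here 'Accept' is replaced with its override value, while the empty 'User-Agent' override removes that header entirely.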

Related products: Data source providers

Data source providers r. 2025-05-15

Today, we've released updated data source providers. See the changes below.

CSV
Version: 23.3.30 (TDI) / 1.1.3 (20.10 BU) / 16.4.4.0 (20.10 ODX)
- Added default file types.
- Fixed a bug where parallel execution could lock files.
- Fixed missing root path handling for test connection when using a SharePoint connection.
- Fixed a bug where the ordinals of the columns were not preserved when there are no headers.

Exact Online
Version: 9.2.0.0 (TDI)
- Fixed a bug where an empty header name could cause an issue.

Excel
Version: 23.3.0.0 (TDI) / 1.1.1 (20.10 BU) / 16.4.3.0 (20.10 ODX)
- Added default file types.
- Fixed a bug where parallel execution could lock files.
- Fixed missing root path handling for test connection when using a SharePoint connection.
- Fixed a bug where all columns had to be selected in order to load data into Azure Data Lake.

HubSpot
Version: 9.2.0.0 (TDI)
- Fixed a bug where an empty header name could cause an issue.

OData
Version: 9.2.0.0 (TDI)
- Fixed a bug where an empty header name could cause an issue.

Oracle
Version: 23.1.4.0 (TDI) / 17.1.0.0 (TDI ADF) / 1.0.1 (20.10 BU) / 16.4.1.0 (20.10 ODX) / 10.4.1.0 (20.10 ODX ADF)
- Added support for the 'RAW' and 'LONG RAW' data types. 'RAW' is translated to 'varbinary(2000)', and 'LONG RAW' is translated to 'varbinary(max)'.
- Fixed an issue where data types not recognized by the Oracle data source would throw an exception instead of being marked as 'Unknown'.

Parquet
Version: 23.4.1.0 (TDI) / 1.0.3 (20.10 BU) / 16.4.3.0 (20.10 ODX)
- Added default file types.
- Fixed a bug where parallel execution could lock files.
- Fixed missing root path handling for test connection when using a SharePoint connection.
- Updated the Parquet library to support the latest Parquet metadata standards.

REST
Version: 9.2.0.0 (TDI) / 1.0.2 (20.10 BU) / 16.4.2.0 (20.10 ODX)
- Fixed a bug where an empty header name could cause an issue.

XML/JSON
Version: 23.2.0.0 (TDI) / 1.0.3 (20.10 BU) / 16.4.3.0 (20.10 ODX)
- Added default file types.
- Fixed a bug where parallel execution could lock files.
- Fixed missing root path handling for test connection when using a SharePoint connection.
- Fixed an issue where table flattening could not execute in the BU and ODX versions.

Related products: Data source providers

TimeXtender Data Integration 6963.1

Today, we've published a minor release of TimeXtender Data Integration (v. 6963.1) that contains the changes listed below.

Improved
- Improved the Ingest logic that manages data source providers to no longer try to download deprecated providers, which caused confusing error messages.
- Renamed display names to no longer include "Semantic".
- Removed the limitation of reserved words for custom field validations and custom conditions.

Fixed
- Fixed a wrong icon for supernatural key fields in data lineage and Prepare table selection.
- Fixed various typos in the TDI application.
- Fixed an issue where the Metadata Manager in Ingest would produce a change notification for columns without the 'original data type' metadata.
- Fixed an issue with deploying hierarchy tables with 'Null check approach' set to 'Record Based'.
- Fixed an issue in Ingest on Fabric Lakehouse storage where tables were not transferred correctly if they contained ancient dates.
- Fixed an issue where the 'StepODXDataFactoryExecute' step was not cleaned up when the last table was removed from an execution package that allowed that step to exist.
- Fixed an issue where Convert to Mapping Set was incorrect for fields with custom transformations.
- Fixed an issue where Integrate Existing Objects was affecting existing views.
- Fixed an issue (23785) where the primary key check was skipped when a table had no mappings. This affected tables with no mappings but with custom data, table inserts, and/or related records.

Related products: TimeXtender Data Integration

TimeXtender Orchestration & Data Quality and TimeXtender Data Enrichment 25.1

It's our pleasure to announce release 25.1.0 of TimeXtender Data Enrichment and TimeXtender Orchestration and Data Quality, featuring exciting new updates and enhancements.

Summary
This update introduces Azure Databricks integration, enabling job execution via orchestration packages. The Data Transfer package now supports SQL MERGE for updating destination tables, with new UI options for primary keys, custom values, and column collation. Data Enrichment Web enforces desktop column restrictions with clear error indicators. Bug fixes enhance hierarchy limits, lookup columns, scheduling, data imports, and dataset publishing.

Our releases follow a phased rollout to ensure stability and performance. We begin by upgrading a select set of services for initial testing. After that, we gradually roll out the update to all customers, starting with low-risk environments and expanding systematically. Customers who prefer an earlier upgrade can request one at any time by sending a request to our support, and we will schedule their update on an agreed date.

General

Access to Previous Versions
Users can now easily download executable versions of O&DQ and Data Enrichment via the provided links to TimeXtender's SharePoint. These versions don't require installation, allowing for seamless switching between different versions as needed. No special permissions are needed to access the links, and further details are available here.

TimeXtender Orchestration & Data Quality

Azure Databricks Package
The option to connect to and run Azure Databricks jobs from TimeXtender Data Orchestration has been added. To use this option, the user must first create an Azure Connection Data Provider with the authentication information for the Databricks job to be executed by the package. Then, an Orchestration package can be created to connect to and run the Databricks job. Read more about Azure Databricks packages here.

Data Transfer Package Update
This release includes an update to the existing Data Transfer package. Enabling the 'merging' option will prompt the Data Transfer package to create a staging table, which will be used as a source for a SQL MERGE query, allowing updates to existing entries in the destination table. The O&DQ UI will also offer options to select the primary key, use a custom value to replace values in a destination column, and define collation at the column level when selecting data from the source. Read more about this new feature here.

TimeXtender Data Enrichment

Required columns in web application
Data Enrichment Web now enforces the column restrictions set up in Data Enrichment Desktop. This version ensures that all column restrictions are applied in the web interface. The Web will prevent you from saving changes until all restriction violations are resolved. A small red 'X' will indicate which cell requires attention, and hovering over it will explain the rule being violated - just like in Desktop.

Bug fixes and smaller improvements

TimeXtender Data Enrichment
- The maximum height in Hierarchies has been increased to 10.
- Disabling users prompted them to update their subscription.
- Hierarchy attributes used as lookup columns did not work.
- Import failed for decimal columns.
- Saving an existing "Import from Database" action caused the action to break.
- Pre/post execution did not work when importing from the database.
- Importing from the database into a lookup column did not work.
- The schema drop-down was not ordered alphabetically.
- If the Embedded view was in "Recently Used," the start page showed an error.
- Users could not open tables with a lookup column if the lookup table had been deleted.
- Added support for availability group connection strings.

TimeXtender Orchestration and Data Quality (O&DQ)
- Compare query column names were updated when the column type was changed.
- New schedule groups were automatically assigned holidays.
- Running the schedule manually did not work.
- The SharePoint Data Provider did not work.
- The Data Transfer package could not create a table with two or more unique fields.
- The Help page was unusable.
- The Data Transfer package was not using the configured timeout.
- The schedule overview was shown as empty for TX-only customers.
- No holidays were displayed for TX-only customers.
- The sync process in the process map did not work for packages.
- The next run for a schedule did not update when changes were made to the schedule and saved.
- Creating a cloud optimizer package did not work.
- Active Directory queries did not work when they returned zero rows.
- The schedule for a package was not displayed in package properties.
- When a package's Windows process died, the package could not be executed again.
- Users stopped being able to execute tasks after having the Desktop application open for a while.

Turnkey
- Not all drop-downs order their values alphabetically.
- Columns added to a data source were defaulted to hidden in datasets.
- Preview did not work when a dataset column had a comma in the name.
- Filter in the rule automatically changed from "is not blank" to "does not equal null."
- An existing rule stopped working if the dataset name was updated.
- Links in the Exception action did not work properly.
- A published rule did not run if it was edited unless it was published again.
- Publishing a dataset was not possible from the column settings tab.
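The staging-table plus SQL MERGE pattern used by the Data Transfer package's 'merging' option can be sketched as follows. This is an illustrative sketch of the general technique, not O&DQ's actual implementation; the table and column names are made up for the example:

```python
# Sketch: rows are first loaded into a staging table, then a MERGE keyed on
# the selected primary key updates matched destination rows and inserts the
# rest. This builds the statement text only; names here are hypothetical.

def build_merge_sql(dest: str, staging: str, key: str, columns: list) -> str:
    non_keys = [c for c in columns if c != key]
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in non_keys)
    col_list = ", ".join(columns)
    src_list = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE {dest} AS t "
        f"USING {staging} AS s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({src_list});"
    )

print(build_merge_sql("dbo.Customers", "dbo.Customers_staging",
                      "CustomerId", ["CustomerId", "Name", "City"]))
```

Running the generated statement against the destination updates rows whose primary key already exists and inserts the rest, which is what lets the package update existing entries instead of only appending.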

Related products: TimeXtender Data Enrichment, TimeXtender Orchestration & Data Quality

TimeXtender Data Integration 6926.1

Two months into 2025, we're ready with the second major release of the year. Even though it's only been a month since the last major release, we have a lot of good stuff for you, including access to instance metadata for analysis, new data source providers, and a couple of much-requested features for Deliver and Ingest instances. And especially for partners, the new blueprints feature can be a real timesaver.

When you upgrade to the new release, the Ingest service and data sources must be upgraded at the same time (i.e. you cannot upgrade the Ingest service without upgrading data sources or vice versa). The reason is that we've redesigned the data sources architecture to enable the TDI TimeXtender providers in both V20 and Classic. See Compatibility of Data Sources and TDI/TIS Versions for an overview.

New

Collect metadata and logs from instances for analysis (public preview)
If you'd like detailed statistics on execution times, or any other metadata created by TimeXtender, this release is good news for you. With the new meta-collection feature in the Portal, you can analyze TimeXtender metadata and logs - in TimeXtender! 24 hours' worth of metadata and logs from the instances you select are exported to a data lake hosted by TimeXtender once a day. Using a regular TimeXtender data source, configured for you with the click of a button, you can copy the data into TimeXtender just like any other data source. Note that you'll need to be on the latest version of TDI.

Three new data source providers
In our quest to provide high-quality first-party data source providers for basically everything, we've added three new providers:
- XML & JSON joins the CSV, Parquet, and Excel providers for common data files.
- Azure Data Factory - SAP Table enables connection to SAP through Azure Data Factory.
- Infor SunSystems makes the existing business unit data source available in TDI in an updated form that supports SunSystems version 5 and up.

TimeXtender Enhanced data source providers replace CData
From this release, the TimeXtender Enhanced data source providers replace the third-party 'Managed ADO.NET' providers from CData. As we're no longer distributing CData providers, they will not receive updates, and no new providers are available for use. If you have data sources that use CData providers, we recommend that you begin migration to the TimeXtender Enhanced providers. For more information, see the documentation on changing a data source provider.

Data selection for Deliver endpoints
We've added support for data selection, instance variables, and usage conditions in Deliver instances. These features have long been available in the Prepare instance and make data selection rules on tables much more versatile. Adding these features to the Deliver instance makes it possible to, for example, use the same Deliver instance to deploy endpoints with different data (e.g. departmental data) in each endpoint.

Add timestamp to tables in the Ingest instance
If you'd like to know when data has been copied from a specific source, you can now have the good old DW_Timestamp column added to tables in the Ingest instance. For now, this is supported when you use Azure Data Lake Storage, Fabric, or SQL as your Ingest instance storage.

Partners - share instance blueprints between customers (public preview)
As a partner working with many TimeXtender customers with roughly the same setup, you might feel a slight deja vu when you create the same data warehouse structure for the third time. Because time matters, we've created the blueprints feature to save you from that repetitive work. A blueprint is an instance where anything remotely sensitive, such as logs and usernames, is removed. In the new version, you can, with the consent of the customers, share a blueprint of Customer A's instance with Customer B. Once a blueprint has been shared, Customer B can add a new instance based on that blueprint instead of starting from scratch.

Improved

Improved UI for setting up REST data source connections
We've improved the experience when setting up TimeXtender REST data source connections so that you can show and hide the sections that matter to you, as well as adding additional validations for essential fields. In addition, based on feedback that the old name could be misleading, the "global values" setting has been renamed to "connection variables".

Edit deleted instances
You can now edit deleted instances. If this sounds like something you're not likely to do, you're right, but it can be useful in a few edge cases. For example, you can rename a deleted instance if you want to create a new instance with the same name.

Fixed

TDI Portal
- It wasn't possible to rename an environment to the same string with different capitalization (e.g. "Prod" -> "PROD").
- On the Instances page, fixed an issue with deleting environments containing only deleted instances.
- Fixed a bug that would allow a mapped data source connection to be deleted after upgrading it to the most recent version.
- Filters are now still applied after deleting a data source connection.
- Fixed a bug in the REST provider where the connection variables were not applied to the dynamic values from an endpoint query.
- We now take the value of 'Empty fields as null' into consideration when finding data types. This can help find the correct data types when the data is a mix of values and empty values/nulls.
- Updated the look of the 'Multi-factor sign-in' card on the 'Basic info' page to fix a visual inconsistency.
- When you migrate an Ingest instance from one environment to another, we've made the error message more useful should the validation of the data source mappings fail.

TimeXtender Data Integration
- In the Create Index window, it was impossible to see all fields if you had a lot of fields on a table, since the list did not have a scroll bar.
- Using the Skip option when loading tables in the Ingest data source query tool failed with a null parameter exception.
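The 'Empty fields as null' fix above matters for data type detection: if empty strings are treated as nulls, they can be ignored when inferring a column's type, so the non-empty values decide. A minimal sketch of this idea (illustrative only, not TimeXtender's actual inference logic; names are made up):

```python
def infer_type(values, empty_as_null: bool) -> str:
    """Toy type inference: 'int' if every considered value is an integer."""
    if empty_as_null:
        values = [v for v in values if v != ""]   # nulls don't vote on the type
    if values and all(v.lstrip("-").isdigit() for v in values):
        return "int"
    return "string"  # any empty string in the mix forces a text type

print(infer_type(["1", "", "42"], empty_as_null=True))   # prints 'int'
print(infer_type(["1", "", "42"], empty_as_null=False))  # prints 'string'
```

With the option enabled, the empty value is ignored and the column is recognized as integer; with it disabled, the empty string drags the column down to a text type.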

Related products: TimeXtender Data Integration, TimeXtender Data Integration Portal

TimeXtender Data Integration 6892.1

The first major release of 2025 brings new time-saving features for data source connections, an improved REST data source provider, less "simple" Fabric Prepare instance storage, and a whole lot of smaller changes. Check out the full list below! New Import and export data source connectionsYou can now import and export data source connections to use in different accounts, save for later, or share with coworkers. You can of course use import to create new data source connections, but you can also override the data of an existing connection. In case some fields cannot be mapped to the connection when you import, the mismatched fields will be listed so you can decide what to do.Two import/export formats are supported: The connection string ("key=value;key2=value") used by the ODX in TimeXtender 20.10, and a JSON-based connection profile that includes more extensive information. No matter the format, passwords, and other sensitive fields, will always be excluded from the export.  Change data source providerWe added the option to change the provider of a data source connection. While you can change freely between providers, changing from one SQL provider to another is obviously less of a hassle than changing from SQL to a REST provider. If the old and new providers don't have exactly the same fields, you'll be able to review the mismatches and make adjustments before committing the change. Customize code for Prepare instances on Fabric Lakehouse (Public Preview)When you're using Microsoft Fabric Lakehouse as Prepare instance storage, you can now add inline scripts that are executed along with the Fabric Notebooks created by TimeXtender Data Integration. With this, the customize code feature is now also supported for Prepare instances on Fabric. Improved Improved REST data source providerWe've made a ton of improvements to the REST provider to make it more flexible and able to support more data sources. 
You can now import connection information that follows the OpenAPI/Swagger Specification which can be a real time-saver. The new 'Authentication endpoint' authentication type allows you to get authentication data, e.g. a token, from an endpoint and use that for authentication, e.g. by adding it to the header of all requests. We've also eliminated the requirement to select a table in TimeXtender Data Integration from all endpoints that the endpoint you actually wanted data from depended on. The new 'Endpoint query' dynamic values source lets you create dynamic values that combine data from multiple endpoints with a SQLite query. On top of that, the REST provider now supports data in CSV format in addition to JSON, XML, and plain text. Other changes included the following:New option on endpoints to set a delay before the first request to the endpoint as well as a delay between requests. Global values, i.e. variables that can be set once and used across endpoints. Dynamic values created by pagination are now available outside of pagination. New built-in dynamic values: TX_ExecutionTimestamp, TX_XmlFileName, TX_TableFlatteningFileName. Support for OAuth response with 'accessToken' instead of 'access_token'. Support for a custom header prefix for OAuth token. New option for setting the data format (JSON/CSV/XML/Text) explicitly instead of relying on the automatic selection logic. After converting from JSON to XML, tables with an ID column only are now removed from the metadata result. The endpoint path can now be overridden by a complete URL. Pagination can replace both the URL and the post body at the same time. New option to set empty fields to null. New option to enable debug logging. Prepare instance on Fabric Lakehouse no longer "simple" (Public Preview)When we added support for Microsoft Fabric Lakehouse as Prepare instance storage in our last major release, functionally was limited to what's possible with simple mode enable. 
With this release, that's no longer the case. We've added support for transformations, conditional lookup fields, supernatural keys, and aggregate tables.However, we still have some work to do since the following features are still not supported: History, custom data, custom views, junk dimensions, related records, table inserts, and hierarchy tables. While incremental load is supported, it is currently necessary to manually define an incremental selection rule on the Prepare instance. Incremental settings from the source are not automatically applied. Remember instance migration settingsYou can now save your settings when migrating an instance between environments, making it easier to reuse them later. Saved settings will automatically apply when you migrate the same instance again, but can be modified as needed before migrating.  Fixed (TDI Portal)Fixed more than a dozen smaller issues and inconsistencies in the look and feel of various tables in the Portal to create a more streamlined and user-friendly experience. Fixed an issue where right-click was disabled in tables even when there was no custom menu defined, preventing users from, e.g., opening an instance in a new tab. Fixed an issue with changing Prepare storage type. Values could visually overlap on the instance overview page. Fixed an issue where uncategorized instances could not be added when creating a new environment. Fixed an issue where timeouts were treated as shorts instead of integers on some instance pages. The Data Source Mappings section would shift when opening a dropdown. On the Instances page, the Uncategorized Instances section would be displayed even when empty. The Edit Environment modal couldn't handle long instance names. When adding a data source connection, sometimes an error message would only pop up after you had been redirected away from the page. Unclear wording in successful update message on data source connections. 
- Removed ":" from data source connection checkboxes with no description.
- Fixed an issue where customers could not change basic info for their organization.
- In the activity log, some values related to firewall rules and restoring an instance would not be displayed correctly.

Fixed (TimeXtender Data Integration)
- Fixed references to old instance type names in the job log.
- Fixed an issue where editing a snippet-based script action could cause a "Label not found" error.
- Fixed an issue where you could add an execution package with failure handling set to retry steps while having multiple threads enabled and managed execution disabled.
- Fixed an issue in Snowflake where the deployment of a table would fail if the raw schema was different from the error/warning schema.
- Fixed a null reference exception when deleting a table referenced in a custom data selection rule across multiple data areas.
- Fixed an issue where the Metadata Manager did not pick up changes when tables and columns match on an identity other than name.
- Fixed an issue where execution of a table in Ingest would fail with "name 'PATH' is not defined" and/or "name 'FULL_LOAD' is not defined" when the storage type is Fabric Lakehouse with schema and the source of the Deliver table is the TimeXtender OneLake Finance & Operations data source.
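Among the REST provider changes listed further up, support for OAuth responses that return 'accessToken' instead of the standard 'access_token' amounts to tolerant parsing of the token response. A minimal sketch of that idea in Python — the function and key handling are illustrative assumptions, not TimeXtender's actual implementation:

```python
# Sketch: normalize an OAuth token response that may use either the
# standard 'access_token' key (RFC 6749) or the camel-case 'accessToken'
# variant some APIs return.
def extract_access_token(token_response: dict) -> str:
    for key in ("access_token", "accessToken"):
        if key in token_response:
            return token_response[key]
    raise KeyError("no access token field found in OAuth response")

# Either response shape yields the same token:
standard = {"access_token": "abc123", "token_type": "Bearer"}
camel = {"accessToken": "abc123", "tokenType": "Bearer"}
assert extract_access_token(standard) == extract_access_token(camel) == "abc123"
```

The same lookup-by-alias pattern extends naturally to other non-standard fields, such as a custom header prefix in place of "Bearer".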

Related products: TimeXtender Data Integration, TimeXtender Data Integration Portal

TimeXtender Desktop 6848.1

Today, we’ve published a hotfix release of TimeXtender Desktop (v. 200.0.6848.1) that contains the changes listed below.

Fixed
- Fixed an issue where a deadlock could occur in the Ingest Service when refreshing the authentication and the repository connection at the same time. We know this has been a big issue, and we would like to thank the partners and customers who helped incrementally test and provide feedback and knowledge so we could solve it.
- Fixed an issue where starting a Windows server in Fast Boot / Hiberboot mode with an Ingest service installed could cause the DNS issue fixed in version 6823.1. This happens because Fast Boot mode skips the regular Windows service startup flow and ignores the Ingest service's network validation logic.
- Fixed an issue when transferring data from Ingest to Prepare, where an invalid cast exception would be thrown on a decimal data type.
- Fixed an issue when executing a Deliver instance where the table is based on a view from a Prepare instance with the Post Valid Table option enabled.
- Fixed an issue with a misleading error message when executing a transfer task for a PostgreSQL data source with incremental rules set up and no new data present in the source.
- Fixed an issue where the database cleanup tool would mark the table message functions as objects to drop.
- Fixed an issue where the system incorrectly attempted to drop a regular SQL table using the syntax for external tables, which led to errors during table deletion operations.
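The deadlock fixed in this release is a classic hazard when two concurrent refresh paths acquire shared resources in opposite orders. The standard remedy is to impose one consistent lock order on every path. A minimal Python sketch of that technique — the lock names are hypothetical, not the actual Ingest Service internals:

```python
import threading

# Hypothetical locks guarding the two shared resources.
auth_lock = threading.Lock()
repo_lock = threading.Lock()

def refresh_authentication():
    # Both paths acquire locks in the same order (auth, then repo), so
    # neither can hold one lock while waiting for the other in reverse.
    with auth_lock:
        with repo_lock:
            pass  # refresh the authentication token here

def refresh_repository_connection():
    with auth_lock:
        with repo_lock:
            pass  # refresh the repository connection here

t1 = threading.Thread(target=refresh_authentication)
t2 = threading.Thread(target=refresh_repository_connection)
t1.start(); t2.start()
t1.join(); t2.join()
print("both refreshes completed without deadlock")
```

Had one function taken the locks in the opposite order, the two threads could each grab one lock and wait forever for the other.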

Related products: TimeXtender Data Integration

TimeXtender Data Integration 6814.1

We hope you've had time to put the pumpkins away, because now it's time for a new major release of TimeXtender (v. 6814.1). The release focuses on data ingestion with improved synchronization from data source to Ingest instance, new data source providers, and better orchestration and scheduling, but that's not all - check out the list below!

New
Redesigned metadata synchronization and table selection: We've completely reimagined how you manage metadata and select tables in the Ingest layer. With these changes, we aim to make it easier to combat schema drift, e.g. when you change data source providers, and put you in firm control of what goes into your Ingest storage. 'Synchronize Tasks' are now known as 'Metadata Import Tasks' and will no longer do a full synchronization of the data source. Rather, they will import the metadata from the data source and store it in the data storage of the Ingest instance. The Data Source Explorer has become the Metadata Manager and is now the place for synchronizing data sources - selecting and mapping tables in the data source to tables in the Ingest storage - all based on the metadata imported by the Metadata Import Tasks.

Easier orchestration with synchronization from TDI: Your transfer tasks and execution packages in TimeXtender Data Integration can now be synchronized with TimeXtender Orchestration for more feature-rich orchestration and scheduling than what's possible with jobs in TDI. To get started, grab an API key from the TDI Portal and use it to create a new "TimeXtender Data Integration" data provider in TimeXtender Orchestration.

Warning: Jobs have been deprecated. Please schedule TDI execution packages via TimeXtender Orchestration rather than using jobs.

Redesigned Instances page: We've redecorated the Instances page to make it easier to use.
Among the changes are a new list view to complement the card-based view, collapsible cards to help you focus on the environments you're working on, and a consolidated toolbar with a search box and buttons to add instances and manage environments.

Prepare instance on Microsoft Fabric Lakehouse: You can now use Fabric Lakehouse as Prepare instance storage. However, in this first version, the functionality for Prepare instances on Fabric Lakehouse is limited to what's possible with Simple Mode enabled.

New data sources: In our quest to make connecting to data sources easier and more consistent, we're ready with three new TimeXtender-branded data source providers: Parquet (similar to the existing CSV and Excel providers), OData (similar to the existing REST provider), and Finance & Operations OneLake, which supports transferring data to Ingest instances using Azure Data Lake Gen 2 or Fabric storage. If both the Ingest and Prepare instances use Fabric storage, the data will bypass the Ingest storage and be transferred directly into the Prepare storage, leading to better performance and saved storage space.

Bring instances back from the dead: Possibly inspired by the recent Halloween spookiness, we've implemented a soft delete feature for instances. You can now restore a deleted instance for up to 30 days after deletion.

Improvements
- The Migrate Instance modal has been restructured into steps, includes a review section, and lets you select the source instance and environment in the modal.
- In the top-right corner of the TDI Portal, you'll now find a nine-dot menu for easy navigation to TimeXtender MDM, TimeXtender DQ, and TimeXtender Orchestration.
- A banner on the Home page will now let you know about upcoming system maintenance.
- The Upgrade data source page has received a new coat of paint to match the new TDI Portal design.
- On CSV data sources, you can now define custom null values, such as "N/A" and "-", in the aptly named "Null Values" field.
- On SAP Table data sources, we have added a table name filter that makes it possible to filter out irrelevant tables before you even see them in TDI. This can make importing metadata from the source much faster and makes it easier to manage the notoriously large number of tables in SAP.
- To prevent accidental password leakage, we've applied password protection to more relevant fields in the TimeXtender-branded data source providers.
- You can now connect to Azure Blob Storage (or ADLS) using principal user credentials. This applies to the TimeXtender-branded CSV, Excel, and Parquet data sources.
- We've made the Ingest authentication refresh logic more robust to prevent potential issues.
- We've changed SQL queries to include a 30-second command timeout, preventing client lockups during cloud database issues, and improved TimeXtender Data Integration logging for clearer task tracking.
- When you upgrade TimeXtender Data Integration, you can now see more information about what is being imported from the old version in the first run of the new version.

Fixed
- On the Migrations page in the TDI Portal, cards now accommodate longer instance names.
- On the Instances page in the TDI Portal, a non-data estate admin user would sometimes get "User not authorized" or "Missing data estate permission" errors.
- In the TDI Portal, Test Connection would return "successful connection" for non-existing paths in cloud-type locations (AWS, Azure, GCS).
- In TimeXtender Data Integration, we have improved the visualization of invalid data sources under Ingest instances. They'll now have "(invalid)" appended to their name, displayed in red.
- Fixed a "Task was canceled" error when opening TimeXtender Data Integration with over 250 instances and adjusted the HTTP timeout settings to improve loading.
- Using the integrate existing objects feature in TimeXtender Data Integration would sometimes cause a "duplicate key" error due to unfiltered duplicate keys.
Duplicate keys are now properly handled to prevent this error.
- In TimeXtender Data Integration, we fixed an issue with a radio button that prevented you from switching between the Valid and Raw tables when creating indexes.
- In the Filter Rows window in TimeXtender Data Integration, you could click the Preview button even when the data source did not support preview.
- In TimeXtender Data Integration, we fixed an issue where changes in Edit SQL Snippet Transformation were not being saved.
- In TimeXtender Data Integration, we have improved the message displayed when an error is thrown on Reports > Errors.
- In TimeXtender Data Integration, tables with selection rules would fail when dragged from one data area to another on a Prepare instance that uses Snowflake as storage.
- In TimeXtender Data Integration 6766.1, SAP data sources experienced degraded performance due to the accidental release of a 32-bit version of the TXIntegrationServices component.
- We updated the stored procedures for executing Prepare instances to sort data by 'DW_ODXBatchNumber' for insertion into the valid table during a full load. If 'DW_ODXBatchNumber' is not available, it will default to sorting by [DW_Id] in ascending order.
- The execution of execution packages would sometimes fail with the error "terminated unexpectedly". To solve the issue, we made the access token refresh logic more robust. It now permits refreshes up to 4 hours before expiration, incorporates retries for failed attempts, and includes an automatic refresh when the execution service restarts.
- The Execution Service would ignore proxy settings when executing packages, which could result in misleading error descriptions for the end user.
- The TimeXtender REST data source provider now handles empty property names, property names that start or end with a colon, and property names with more than one colon.
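The more robust access token refresh described above — refreshing well before expiry and retrying failed attempts — is a common resilience pattern. A simplified Python sketch under assumed names (not the actual execution service code):

```python
import time

REFRESH_WINDOW_SECONDS = 4 * 60 * 60  # refresh up to 4 hours before expiry
MAX_RETRIES = 3

def needs_refresh(expires_at: float, now: float) -> bool:
    # Refresh proactively once we are inside the window before expiration,
    # instead of waiting for the token to actually expire mid-execution.
    return now >= expires_at - REFRESH_WINDOW_SECONDS

def refresh_with_retries(fetch_token, retries: int = MAX_RETRIES):
    # Retry failed refresh attempts rather than failing the whole package.
    last_error = None
    for attempt in range(retries):
        try:
            return fetch_token()
        except Exception as error:
            last_error = error  # real code would back off between attempts
    raise last_error

# A token expiring in 3 hours falls inside the 4-hour refresh window:
now = time.time()
assert needs_refresh(now + 3 * 60 * 60, now)
assert not needs_refresh(now + 5 * 60 * 60, now)
```

Combined with a refresh on service restart, this kind of policy removes the race where a token expires between the last check and the next request.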

Related products: TimeXtender Data Integration