We hope you've had time to put the pumpkins away because now it's time for a new major release of TimeXtender (v. 6814.1). The release has a focus on data ingestion with improved synchronization from data source to Ingest instance, new data source providers, and better orchestration and scheduling, but that's not all - check out the list below!
New
- Redesigned metadata synchronization and table selection: We've completely reimagined how you manage metadata and select tables in the Ingest layer. With these changes, we aim to make it easier to combat schema drift, e.g. when you change data source providers, and to put you in firm control of what goes into your Ingest storage. 'Synchronize Tasks' are now known as 'Metadata Import Tasks' and no longer do a full synchronization of the data source. Instead, they import the metadata from the data source and store it in the data storage of the Ingest instance. The Data Source Explorer has become the Metadata Manager, which is now the place for synchronizing data sources - selecting tables in the data source and mapping them to tables in the Ingest storage - all based on the metadata imported by the Metadata Import Tasks.
- Easier orchestration with synchronization from TDI: Your transfer tasks and execution packages in TimeXtender Data Integration can now be synchronized with TimeXtender Orchestration for more feature-rich orchestration and scheduling than possible with Jobs in TDI. To get started, grab an API key from the TDI portal and use it to create a new "TimeXtender Data Integration" data provider in TimeXtender Orchestration.
- Redesigned Instances page: We've redecorated the Instances page to make it easier to use. Among the changes are a new list view to complement the card-based view, collapsible cards to help you focus on the environments you're working on, and a consolidated "toolbar" with a Search box and buttons to add instances and manage environments.
- Prepare instance on Microsoft Fabric Lakehouse: You can now use Fabric Lakehouse as Prepare Instance storage. However, in this first version, the functionality for Prepare instances on Fabric Lakehouse is limited to what's possible with Simple Mode enabled.
- New data sources: In our quest to make connecting to data sources easier and more consistent, we're ready with three new TimeXtender-branded data source providers: Parquet (similar to the existing CSV and Excel providers), OData (similar to the existing REST provider), and Finance & Operations OneLake, which supports transferring data to Ingest instances using Azure Data Lake Gen 2 or Fabric storage. If both the Ingest and Prepare instances use Fabric storage, the data bypasses the Ingest storage and is transferred directly into the Prepare storage, improving performance and saving storage space.
- Bring instances back from the dead: Possibly inspired by the recent Halloween spookiness, we've implemented a soft delete feature for instances. You can now restore a deleted instance for up to 30 days after deletion.
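The 30-day restore window above can be modeled with a small sketch (a hedged illustration; the function and field names are assumptions, not TimeXtender's implementation):

```python
from datetime import datetime, timedelta

# The 30-day restore window mentioned in the release note.
RESTORE_WINDOW = timedelta(days=30)

def is_restorable(deleted_at: datetime, now: datetime) -> bool:
    """Return True if a soft-deleted instance can still be restored."""
    return timedelta(0) <= now - deleted_at <= RESTORE_WINDOW
```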
Improvements
- The Migrate Instance modal has been restructured into steps, includes a review section, and lets you select the source instance and environment in the modal.
- In the top-right corner of the TDI Portal, you'll now find a nine-dot menu for easy navigation to TimeXtender MDM, TimeXtender DQ, and TimeXtender Orchestration.
- A banner on the Home page will now let you know about upcoming system maintenance.
- The Upgrade data source page has received a new coat of paint to match the new TDI Portal design.
- On CSV data sources, you can now define custom null values, such as "N/A" and "-", in the aptly named "Null Values" field.
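To illustrate what custom null values do during parsing, here's a minimal sketch (the token set and behavior are assumptions for illustration, not TimeXtender's actual CSV parser):

```python
# Hypothetical sketch: treat user-configured tokens as NULL when reading CSV fields.
CUSTOM_NULL_VALUES = {"N/A", "-"}  # what a "Null Values" setting might contain

def parse_field(raw: str):
    """Return None when a field matches a configured null token."""
    value = raw.strip()
    return None if value in CUSTOM_NULL_VALUES else value

row = ["Alice", "N/A", "-", "42"]
parsed = [parse_field(field) for field in row]  # ["Alice", None, None, "42"]
```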
- On SAP Table data sources, we've added a Table name filter that lets you exclude irrelevant tables before they even appear in TDI. This can make importing metadata from the source much faster and makes it easier to manage the notoriously large number of tables in SAP.
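A table-name filter of this kind could work roughly like the following sketch (the wildcard pattern syntax is an assumption for illustration, not TimeXtender's actual filter):

```python
import fnmatch

# Hypothetical sketch of a table-name filter applied before metadata import.
def filter_tables(table_names, include_patterns):
    """Keep only tables whose names match at least one include pattern."""
    return [name for name in table_names
            if any(fnmatch.fnmatch(name, pattern) for pattern in include_patterns)]

tables = ["BSEG", "BKPF", "MARA", "MARC", "T001"]
selected = filter_tables(tables, ["B*", "MARA"])  # ["BSEG", "BKPF", "MARA"]
```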
- To prevent accidental password leakage, we've applied password protection to more relevant fields in the TimeXtender-branded data source providers.
- You can now connect to Azure Blob Storage (or ADLS) using principal user credentials. This applies to the TimeXtender-branded CSV, Excel, and Parquet data sources.
- We've made the Ingest authentication refresh logic more robust to prevent potential issues.
- We've changed SQL queries to include a 30-second command timeout, preventing client lockups during cloud database issues, and improved TimeXtender Data Integration logging for clearer task tracking.
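The idea behind a client-side command timeout can be sketched generically (an assumption-laden illustration, not TimeXtender's code; a real SQL client would set the timeout on the connection or command object):

```python
import concurrent.futures

COMMAND_TIMEOUT_SECONDS = 30  # the value mentioned in the release note

def run_query_with_timeout(query_fn, timeout=COMMAND_TIMEOUT_SECONDS):
    """Run a potentially hanging query callable, giving up after `timeout` seconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(query_fn)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            raise TimeoutError(f"query exceeded {timeout}s command timeout")
```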
- When you upgrade TimeXtender Data Integration, you can now see more information about what is being imported from the old version in the first run of the new version.
Fixed
- On the Migrations page in the TDI Portal, cards now accommodate longer instance names.
- On the Instances page in the TDI Portal, a non-data estate admin user would sometimes get "User not authorized" or "Missing data estate permission" errors.
- In the TDI Portal, Test Connection would return "successful connection" for non-existing paths in cloud-type locations (AWS, Azure, GCS).
- In TimeXtender Data Integration, invalid data sources under Ingest instances are now easier to spot: "(invalid)" is appended to their names, which are displayed in red.
- Fixed a "Task was canceled" error when opening TimeXtender Data Integration with over 250 instances and adjusted the HTTP timeout settings to improve loading.
- Using the integrate existing objects feature in TimeXtender Data Integration would sometimes cause a "duplicate key" error. Duplicate keys are now filtered out to prevent this error.
- In TimeXtender Data Integration, we fixed an issue with a radio button that prevented you from switching between the Valid and Raw tables when you created indexes.
- In the Filter Rows window in TimeXtender Data Integration, you could click the Preview button even when the data source did not support preview.
- In TimeXtender Data Integration, we fixed an issue where changes in Edit SQL Snippet Transformation were not being saved.
- In TimeXtender Data Integration, we have improved the message displayed when an error is thrown on Reports > Errors.
- In TimeXtender Data Integration, tables with selection rules would fail when dragged from one data area to another on a Prepare instance that uses Snowflake as storage.
- In TimeXtender Data Integration 6766.1, SAP data sources experienced degraded performance due to the accidental release of a 32-bit version of the TXIntegrationServices component.
- We updated the stored procedures for executing Prepare instances to sort data by 'DW_ODXBatchNumber' for insertion into the valid table during a full load. If 'DW_ODXBatchNumber' is not available, it will default to sorting by 'DW_Id' in ascending order.
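The fallback ordering can be sketched like this (a hypothetical Python stand-in for the stored procedure logic; how column availability is detected is assumed):

```python
# Hypothetical sketch: order rows by DW_ODXBatchNumber when the column
# exists, otherwise fall back to DW_Id ascending.
def insertion_order(rows):
    if rows and "DW_ODXBatchNumber" in rows[0]:
        sort_column = "DW_ODXBatchNumber"
    else:
        sort_column = "DW_Id"
    return sorted(rows, key=lambda row: row[sort_column])

ordered = insertion_order([{"DW_Id": 2}, {"DW_Id": 1}])  # sorted by DW_Id
```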
- Execution packages would sometimes fail with the error "terminated unexpectedly". To solve this, we made the access token refresh logic more robust: it now permits refreshes up to 4 hours before expiration, retries failed attempts, and automatically refreshes when the execution service restarts.
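The refresh policy described above can be sketched as follows (the 4-hour window comes from the note; the retry count and function names are assumptions):

```python
REFRESH_WINDOW_SECONDS = 4 * 60 * 60  # refresh up to 4 hours before expiry
MAX_ATTEMPTS = 3  # assumed retry count for illustration

def needs_refresh(expires_at: float, now: float) -> bool:
    """True when the token expires within the refresh window."""
    return expires_at - now <= REFRESH_WINDOW_SECONDS

def refresh_with_retries(refresh_fn, max_attempts=MAX_ATTEMPTS):
    """Call refresh_fn, retrying on failure; re-raise the last error."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return refresh_fn()
        except Exception as err:  # a real service would catch a narrower type
            last_error = err
    raise last_error
```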
- The Execution Service would ignore proxy settings when executing packages, which could result in misleading error descriptions for the end-user.
- The TimeXtender REST data source provider now handles empty property names, property names that start or end with a colon, and property names with more than one colon.
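One plausible sanitization scheme for such property names looks like the sketch below (the exact rules TimeXtender applies aren't documented here; this is an illustrative assumption):

```python
# Hypothetical sketch: turn awkward JSON property names (empty, leading or
# trailing colons, multiple colons) into usable field names.
def sanitize_property_name(name: str, fallback: str = "unnamed") -> str:
    cleaned = name.replace(":", "_").strip("_")
    return cleaned if cleaned else fallback

names = ["", ":id", "ns:item:price", "plain"]
fields = [sanitize_property_name(n) for n in names]  # ["unnamed", "id", "ns_item_price", "plain"]
```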