
TimeXtender Orchestration & Data Quality and TimeXtender Master Data Management 25.1

It’s our pleasure to announce release 25.1.0 of TimeXtender Master Data Management and TimeXtender Orchestration and Data Quality, featuring exciting new updates and enhancements.

Summary

This update introduces Azure Databricks integration, enabling job execution via orchestration packages. The Data Transfer package now supports SQL MERGE for updating destination tables, with new UI options for primary keys, custom values, and column collation. MDM Web now enforces desktop column restrictions with clear error indicators. Bug fixes improve hierarchy limits, lookup columns, scheduling, data imports, and dataset publishing.

Our releases follow a phased rollout to ensure stability and performance. We begin by upgrading a select set of services for initial testing. After that, we gradually roll out the update to all customers, starting with low-risk environments and expanding systematically. Customers who prefer an earlier upgrade can request one at any time by contacting our support, and we will schedule their update on an agreed date.

General

Access to Previous Versions

Users can now easily download executable versions of O&DQ and MDM via the provided links to TimeXtender's SharePoint. These versions don't require installation, allowing for seamless switching between different versions as needed. No special permissions are needed to access the links, and further details are available here.

TimeXtender Orchestration & Data Quality

Azure Databricks Package

You can now connect to and run Azure Databricks jobs from TimeXtender Data Orchestration. To use this option, first create an Azure Connection Data Provider with the authentication information for the Databricks job to be executed by the package. Then, create an Orchestration package to connect to and run the Databricks job. Read more about Azure Databricks packages here.

Data Transfer Package Update

This release includes an update to the existing Data Transfer package. Enabling the 'merging' option prompts the Data Transfer package to create a staging table, which is used as the source for an SQL MERGE query, allowing updates to existing entries in the destination table. The O&DQ UI also offers options to select the primary key, use a custom value to replace values in a destination column, and define collation at the column level when selecting data from the source. Read more about this new feature here.

TimeXtender Master Data Management

Required columns in web application

MDM Web now enforces the column restrictions set up in MDM Desktop. This version ensures that all column restrictions are applied in the web interface. The Web will prevent you from saving changes until all restriction violations are resolved. A small red 'X' will indicate which cell requires attention, and hovering over it will explain the rule being violated, just like in Desktop.

Bug fixes and smaller improvements

TimeXtender Master Data Management (MDM)

- The maximum height in Hierarchies has been increased to 10.
- Disabling users prompted them to update their subscription.
- Hierarchy attributes used as lookup columns did not work.
- Import failed for decimal columns.
- Saving an existing "Import from Database" action caused the action to break.
- Pre/post execution did not work when importing from the database.
- Importing from the database into a lookup column did not work.
- The schema drop-down was not ordered alphabetically.
- If the Embedded view was in "Recently Used," the start page showed an error.
- Users could not open tables with a lookup column if the lookup table had been deleted.
- Added support for availability group connection strings.

TimeXtender Orchestration and Data Quality (O&DQ)

- Compare query column names were updated when the column type was changed.
- New schedule groups were automatically assigned holidays.
- Running a schedule manually did not work.
- The SharePoint Data Provider did not work.
- The Data Transfer package could not create a table with two or more unique fields.
- The Help page was unusable.
- The Data Transfer package was not using the configured timeout.
- The schedule overview was shown as empty for TX-only customers.
- No holidays were displayed for TX-only customers.
- The sync process in the process map did not work for packages.
- The next run for a schedule did not update when changes were made to the schedule and saved.
- Creating a cloud optimizer package did not work.
- Active Directory queries did not work when they returned zero rows.
- The schedule for a package was not displayed in the package properties.
- When a package's Windows process died, the package could not be executed again.
- Users stopped being able to execute tasks after having the Desktop application open for a while.

Turnkey

- Not all drop-downs ordered their values alphabetically.
- Columns added to a data source were defaulted to hidden in datasets.
- Preview did not work when a dataset column had a comma in the name.
- The filter in a rule automatically changed from "is not blank" to "does not equal null."
- An existing rule stopped working if the dataset name was updated.
- Links in the Exception action did not work properly.
- A published rule did not run after being edited unless it was published again.
- Publishing a dataset was not possible from the column settings tab.
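The staging-table MERGE behavior described under "Data Transfer Package Update" can be sketched as follows. This is a hedged illustration only: the table, schema, and column names are hypothetical, and this is not the SQL that O&DQ actually generates.

```python
# Sketch: how a data-transfer "merge" step might compose an SQL MERGE
# from a staging table into a destination table. Names are hypothetical,
# not O&DQ's actual generated SQL.
def build_merge_sql(dest, staging, primary_keys, columns):
    on = " AND ".join(f"t.[{k}] = s.[{k}]" for k in primary_keys)
    non_keys = [c for c in columns if c not in primary_keys]
    update = ", ".join(f"t.[{c}] = s.[{c}]" for c in non_keys)
    insert_cols = ", ".join(f"[{c}]" for c in columns)
    insert_vals = ", ".join(f"s.[{c}]" for c in columns)
    return (
        f"MERGE {dest} AS t USING {staging} AS s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {update} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals});"
    )

sql = build_merge_sql("dbo.Customers", "dbo.Customers_staging",
                      primary_keys=["CustomerId"],
                      columns=["CustomerId", "Name", "City"])
print(sql)
```

The key point the feature relies on is that MERGE matches staging rows to destination rows on the selected primary key, updating matches and inserting the rest, rather than blindly appending.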

Related products: Exmon

TimeXtender Data Integration 6926.1

Two months into 2025 we're ready with the second major release of the year. Even though it's only been a month since the last major release, we have a lot of good stuff for you, including access to instance metadata for analysis, new data source providers, and a couple of much-requested features for Deliver and Ingest instances. And especially for partners, the new blueprints feature can be a real timesaver.

When you upgrade to the new release, the Ingest service and data sources must be upgraded at the same time (i.e. you cannot upgrade the Ingest service without upgrading data sources or vice versa). The reason is that we've redesigned the data sources architecture to enable the TDI TimeXtender providers in both V20 and Classic. See Compatibility of Data Sources and TDI/TIS Versions for an overview.

New

Collect metadata and logs from instances for analysis (public preview)

If you'd like detailed statistics on execution times, or any other metadata created by TimeXtender, this release is good news for you. With the new meta-collection feature in the Portal, you can analyze TimeXtender metadata and logs - in TimeXtender! 24 hours' worth of metadata and logs from the instances you select are exported once a day to a data lake hosted by TimeXtender. Using a regular TimeXtender data source, configured for you with the click of a button, you can copy the data into TimeXtender just like any other data source. Note that you'll need to be on the latest version of TDI.

Three new data source providers

In our quest to provide high-quality first-party data source providers for basically everything, we've added three new providers:

- XML & JSON joins the CSV, Parquet, and Excel providers for common data files.
- Azure Data Factory - SAP Table enables connection to SAP through Azure Data Factory.
- Infor SunSystems makes the existing business unit data source available in TDI in an updated form that supports SunSystems version 5 and up.

TimeXtender Enhanced data source providers replace CData

From this release, the TimeXtender Enhanced data source providers replace the third-party 'Managed ADO.net' providers from CData. As we're no longer distributing CData providers, they will not receive updates, and no new CData providers will be made available. If you have data sources that use CData providers, we recommend that you begin migrating to the TimeXtender Enhanced providers. For more information on how to change a data source provider, please see the related documentation.

Data selection for Deliver endpoints

We've added support for data selection, instance variables, and usage conditions in Deliver instances. These features have long been available in the Prepare instance and make data selection rules on tables much more versatile. Adding these features to the Deliver instance makes it possible, for example, to use the same Deliver instance to deploy endpoints with different data (e.g. departmental data) in each endpoint.

Add timestamp to tables in the Ingest instance

If you'd like to know when data was copied from a specific source, you can now have the good old DW_Timestamp column added to tables in the Ingest instance. For now, this is supported when you use Azure Data Lake Storage, Fabric, or SQL as your Ingest instance storage.

Partners - share instance blueprints between customers (public preview)

As a partner working with many TimeXtender customers with roughly the same setup, you might feel a slight deja vu when you create the same data warehouse structure for the third time. Because time matters, we've created the blueprints feature to save you from that repetitive work. A blueprint is a copy of an instance with anything remotely sensitive, such as logs and usernames, removed. In the new version, you can, with the consent of the customers, share a blueprint of Customer A's instance with Customer B. Once a blueprint has been shared, Customer B can add a new instance based on that blueprint instead of starting from scratch.

Improved

Improved UI for setting up REST data source connections

We've improved the experience of setting up TimeXtender REST data source connections: you can now show and hide the sections that matter to you, and we've added additional validations for essential fields. In addition, based on feedback that the old name could be misleading, the "global values" setting has been renamed to "connection variables".

Edit deleted instances

You can now edit deleted instances. If this sounds like something you're not likely to do, you're right, but it can be useful in a few edge cases. For example, you can rename a deleted instance if you want to create a new instance with the same name.

Fixed

TDI Portal

- It wasn't possible to rename an environment to the same string with different capitalization (e.g. "Prod" -> "PROD").
- On the Instances page, fixed an issue with deleting environments containing only deleted instances.
- Fixed a bug that would allow a mapped data source connection to be deleted after upgrading it to the most recent version.
- Filters are now still applied after deleting a data source connection.
- Fixed a bug in the REST provider where the connection variables were not applied to the dynamic values from an endpoint query.
- We now take the value of 'Empty fields as null' into consideration when finding data types. This can help find the correct data types when the data is a mix of values and empty values/null.
- Updated the look of the 'Multi-factor sign-in' card on the 'Basic info' page to fix a visual inconsistency.
- When you migrate an Ingest instance from one environment to another, the error message is now more useful should the validation of the data source mappings fail.

TimeXtender Data Integration

- In the Create Index window, it was impossible to see all fields if you had a lot of fields on a table, since the list did not have a scroll bar.
- Using the Skip option when loading tables for the Ingest data source query tool failed with a null parameter exception.
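As a sketch of what the DW_Timestamp column described above gives you, each row landed in Ingest storage can be stamped with the time it was copied. The column name comes from the release note, but the copy logic here is purely illustrative, not TDI's implementation.

```python
from datetime import datetime, timezone

# Illustrative only: stamp each ingested row with the time it was copied,
# mimicking the DW_Timestamp column described above. Not TDI's actual code.
def ingest_rows(rows, now=None):
    ts = now or datetime.now(timezone.utc)
    return [{**row, "DW_Timestamp": ts} for row in rows]

batch = ingest_rows([{"Id": 1}, {"Id": 2}])
print(batch[0]["DW_Timestamp"])  # when this batch was copied from the source
```

With such a column in place, "when was this row last loaded?" becomes a simple filter on DW_Timestamp instead of a trawl through execution logs.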

Related products: TimeXtender Data Integration, TimeXtender Portal

TimeXtender Data Integration 6892.1

The first major release of 2025 brings new time-saving features for data source connections, an improved REST data source provider, less "simple" Fabric Prepare instance storage, and a whole lot of smaller changes. Check out the full list below!

New

Import and export data source connections

You can now import and export data source connections to use in different accounts, save for later, or share with coworkers. You can of course use import to create new data source connections, but you can also override the data of an existing connection. In case some fields cannot be mapped to the connection when you import, the mismatched fields will be listed so you can decide what to do. Two import/export formats are supported: the connection string ("key=value;key2=value") used by the ODX in TimeXtender 20.10, and a JSON-based connection profile that includes more extensive information. No matter the format, passwords and other sensitive fields will always be excluded from the export.

Change data source provider

We've added the option to change the provider of a data source connection. While you can change freely between providers, changing from one SQL provider to another is obviously less of a hassle than changing from SQL to a REST provider. If the old and new providers don't have exactly the same fields, you'll be able to review the mismatches and make adjustments before committing the change.

Customize code for Prepare instances on Fabric Lakehouse (Public Preview)

When you're using Microsoft Fabric Lakehouse as Prepare instance storage, you can now add inline scripts that are executed along with the Fabric Notebooks created by TimeXtender Data Integration. With this, the customize code feature is now also supported for Prepare instances on Fabric.

Improved

Improved REST data source provider

We've made a ton of improvements to the REST provider to make it more flexible and able to support more data sources:

- You can now import connection information that follows the OpenAPI/Swagger Specification, which can be a real time-saver.
- The new 'Authentication endpoint' authentication type allows you to get authentication data, e.g. a token, from an endpoint and use it for authentication, e.g. by adding it to the header of all requests.
- We've eliminated the requirement to select a table in TimeXtender Data Integration from all endpoints that the endpoint you actually wanted data from depended on.
- The new 'Endpoint query' dynamic values source lets you create dynamic values that combine data from multiple endpoints with a SQLite query.
- The REST provider now supports data in CSV format in addition to JSON, XML, and plain text.

Other changes include the following:

- New option on endpoints to set a delay before the first request to the endpoint as well as a delay between requests.
- Global values, i.e. variables that can be set once and used across endpoints.
- Dynamic values created by pagination are now available outside of pagination.
- New built-in dynamic values: TX_ExecutionTimestamp, TX_XmlFileName, TX_TableFlatteningFileName.
- Support for OAuth responses with 'accessToken' instead of 'access_token'.
- Support for a custom header prefix for the OAuth token.
- New option for setting the data format (JSON/CSV/XML/Text) explicitly instead of relying on the automatic selection logic.
- After converting from JSON to XML, tables with only an ID column are now removed from the metadata result.
- The endpoint path can now be overridden by a complete URL.
- Pagination can replace both the URL and the post body at the same time.
- New option to set empty fields to null.
- New option to enable debug logging.

Prepare instance on Fabric Lakehouse no longer "simple" (Public Preview)

When we added support for Microsoft Fabric Lakehouse as Prepare instance storage in our last major release, functionality was limited to what's possible with simple mode enabled. With this release, that's no longer the case. We've added support for transformations, conditional lookup fields, supernatural keys, and aggregate tables. However, we still have some work to do, since the following features are still not supported: history, custom data, custom views, junk dimensions, related records, table inserts, and hierarchy tables. While incremental load is supported, it is currently necessary to manually define an incremental selection rule on the Prepare instance; incremental settings from the source are not automatically applied.

Remember instance migration settings

You can now save your settings when migrating an instance between environments, making it easier to reuse them later. Saved settings will automatically apply when you migrate the same instance again, but can be modified as needed before migrating.

Fixed (TDI Portal)

- Fixed more than a dozen smaller issues and inconsistencies in the look and feel of various tables in the Portal to create a more streamlined and user-friendly experience.
- Fixed an issue where right-click was disabled in tables even when there was no custom menu defined, preventing users from, e.g., opening an instance in a new tab.
- Fixed an issue with changing the Prepare storage type.
- Values could visually overlap on the instance overview page.
- Fixed an issue where uncategorized instances could not be added when creating a new environment.
- Fixed an issue where timeouts were treated as shorts instead of integers on some instance pages.
- The Data Source Mappings section would shift when opening a dropdown.
- On the Instances page, the Uncategorized Instances section would be displayed even when empty.
- The Edit Environment modal couldn't handle long instance names.
- When adding a data source connection, sometimes an error message would only pop up after you had been redirected away from the page.
- Fixed unclear wording in the success message shown when updating a data source connection.
- Removed ":" from data source connection checkboxes with no description.
- Fixed an issue where customers could not change basic info for their organization.
- In the activity log, some values related to firewall rules and restoring an instance would not be displayed correctly.

Fixed (TimeXtender Data Integration)

- Fixed a reference to old instance type names in the job log.
- Fixed an issue where editing a snippet-based script action could cause a "Label not found" error.
- Fixed an issue where you could add an execution package with failure handling set to retry steps while having multiple threads enabled and managed execution disabled.
- Fixed an issue in Snowflake where the deployment of a table would fail if the raw schema was different from the error/warning schema.
- Fixed a null reference exception when deleting a table referenced in a custom data selection rule across multiple data areas.
- Fixed an issue where the Metadata Manager did not pick up changes when tables and columns match on an identity other than name.
- Fixed an issue where execution of a table in Ingest would fail with "name 'PATH' is not defined" and/or "name 'FULL_LOAD' is not defined" when the storage type is Fabric Lakehouse with schema and the source of the Deliver table is the TimeXtender OneLake Finance & Operations data source.

Related products:TimeXtender Data IntegrationTimeXtender Portal

TimeXtender Desktop 6848.1

Today, we’ve published a hotfix release of TimeXtender Desktop (v. 200.0.6848.1) that contains the changes listed below.

Fixed

- Fixed an issue where a deadlock could occur for the Ingest Service when refreshing the authentication and the repository connection at the same time. We know this has been a big issue, and we would like to thank the partners and customers who helped incrementally test and provide feedback and knowledge so we could solve it.
- Fixed an issue where starting a Windows server in Fast Boot / Hiberboot mode with an Ingest service installed could reintroduce the DNS issue fixed in version 6823.1. This happens because Fast Boot mode skips the regular Windows service startup flow and ignores the Ingest service's network validation logic.
- Fixed an issue when transferring data from Ingest to Prepare where an invalid cast exception would be thrown on a decimal data type.
- Fixed an issue when executing a Deliver instance when the table is based on a view from a Prepare instance where the Post Valid Table option is enabled.
- Fixed a misleading error message when executing a transfer task for a PostgreSQL data source when incremental rules were set up and no new data was present in the source.
- Fixed an issue where the database cleanup tool would mark the table message functions as objects to drop.
- Fixed an issue where the system incorrectly attempted to drop a regular SQL table using the syntax for external tables, which led to errors during table deletion operations.
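The deadlock described in the first fix is a classic concurrency pattern: two refresh paths contending for the same pair of resources. As a generic, hypothetical sketch (not the Ingest Service's actual code), acquiring the locks in one fixed global order rules out the circular wait that causes the deadlock:

```python
import threading

# Generic illustration of the deadlock class described above, not the
# Ingest Service's actual code: always acquire locks in a fixed global
# order so two concurrent refreshes can never wait on each other.
auth_lock = threading.Lock()
repo_lock = threading.Lock()

def refresh(name, results):
    # Both the authentication refresh and the repository-connection refresh
    # take the locks in the SAME order, which rules out circular waiting.
    with auth_lock:
        with repo_lock:
            results.append(name)

results = []
threads = [threading.Thread(target=refresh, args=(n, results)) for n in ("auth", "repo")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # both refreshes complete; no deadlock
```

Had one path taken repo_lock before auth_lock while the other did the reverse, the two concurrent refreshes could each hold one lock while waiting forever for the other.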

Related products: TimeXtender Data Integration

TimeXtender Data Integration 6814.1

We hope you've had time to put the pumpkins away, because now it's time for a new major release of TimeXtender (v. 6814.1). The release focuses on data ingestion, with improved synchronization from data source to Ingest instance, new data source providers, and better orchestration and scheduling, but that's not all - check out the list below!

New

Redesigned metadata synchronization and table selection: We've completely reimagined how you manage metadata and select tables in the Ingest layer. With these changes, we aim to make it easier to combat schema drift, e.g. when you change data source providers, and to put you in firm control of what goes into your Ingest storage. 'Synchronize Tasks' are now known as 'Metadata Import Tasks' and will no longer do a full synchronization of the data source. Rather, they import the metadata from the data source and store it in the data storage of the Ingest instance. The Data Source Explorer has become the Metadata Manager and is now the place for synchronizing data sources - selecting and mapping tables in the data source to tables in the Ingest storage - all based on the metadata imported by the Metadata Import Tasks.

Easier orchestration with synchronization from TDI: Your transfer tasks and execution packages in TimeXtender Data Integration can now be synchronized with TimeXtender Orchestration for more feature-rich orchestration and scheduling than is possible with Jobs in TDI. To get started, grab an API key from the TDI Portal and use it to create a new "TimeXtender Data Integration" data provider in TimeXtender Orchestration. Warning: Jobs have been deprecated. Please schedule TDI execution packages via TimeXtender Orchestration rather than using Jobs.

Redesigned Instances page: We've redecorated the Instances page to make it easier to use. Among the changes are a new list view to complement the card-based view, collapsible cards to help you focus on the environments you're working on, and a consolidated "toolbar" with a search box and buttons to add instances and manage environments.

Prepare instance on Microsoft Fabric Lakehouse: You can now use Fabric Lakehouse as Prepare instance storage. However, in this first version, the functionality for Prepare instances on Fabric Lakehouse is limited to what's possible with Simple Mode enabled.

New data sources: In our quest to make connecting to data sources easier and more consistent, we're ready with three new TimeXtender-branded data source providers: Parquet (similar to the existing CSV and Excel providers), OData (similar to the existing REST provider), and Finance & Operations OneLake, which supports transferring data to Ingest instances using Azure Data Lake Gen2 or Fabric storage. If both the Ingest and Prepare instances use Fabric storage, the data will bypass the Ingest storage and be transferred directly into the Prepare storage, leading to better performance and saved storage space.

Bring instances back from the dead: Possibly inspired by the recent Halloween spookiness, we've implemented a soft delete feature for instances. You can now restore a deleted instance for up to 30 days after deletion.

Improvements

- The Migrate Instance modal has been restructured into steps, includes a review section, and lets you select the source instance and environment in the modal.
- In the top-right corner of the TDI Portal, you'll now find a nine-dot menu for easy navigation to TimeXtender MDM, TimeXtender DQ, and TimeXtender Orchestration.
- A banner on the Home page will now let you know about upcoming system maintenance.
- The Upgrade data source page has received a new coat of paint to match the new TDI Portal design.
- On CSV data sources, you can now define custom null values, such as "N/A" and "-", in the aptly named "Null Values" field.
- On SAP Table data sources, we have added a table name filter that makes it possible to filter out some of the irrelevant tables before you even see them in TDI. This can make importing metadata from the source much faster and makes it easier to manage the notoriously large number of tables in SAP.
- To prevent accidental password leakage, we've applied password protection to more relevant fields in the TimeXtender-branded data source providers.
- You can now connect to Azure Blob Storage (or ADLS) using principal user credentials. This applies to the TimeXtender-branded CSV, Excel, and Parquet data sources.
- We've made the Ingest authentication refresh logic more robust to prevent potential issues.
- We've changed SQL queries to include a 30-second command timeout, preventing client lockups during cloud database issues, and improved TimeXtender Data Integration logging for clearer task tracking.
- When you upgrade TimeXtender Data Integration, you can now see more information about what is being imported from the old version in the first run of the new version.

Fixed

- On the Migrations page in the TDI Portal, cards now accommodate longer instance names.
- On the Instances page in the TDI Portal, a non-data estate admin user would sometimes get "User not authorized" or "Missing data estate permission" errors.
- In the TDI Portal, Test Connection would return "successful connection" for non-existing paths in cloud-type locations (AWS, Azure, GCS).
- In TimeXtender Data Integration, we have improved the visualization of invalid data sources under Ingest instances. They'll now have "(invalid)" appended to their name, which will be displayed in red.
- Fixed a "Task was canceled" error when opening TimeXtender Data Integration with over 250 instances and adjusted the HTTP timeout settings to improve loading.
- Using the integrate existing objects feature in TimeXtender Data Integration would sometimes cause a "duplicate key" error due to unfiltered duplicate keys. Duplicate keys are now properly handled to prevent this error.
- In TimeXtender Data Integration, we fixed an issue with a radio button that prevented you from switching between the Valid and Raw tables when you created indexes.
- In the Filter Rows window in TimeXtender Data Integration, you could click the Preview button even when the data source did not support preview.
- In TimeXtender Data Integration, we fixed an issue where changes in Edit SQL Snippet Transformation were not being saved.
- In TimeXtender Data Integration, we have improved the message displayed when an error is thrown on Reports > Errors.
- In TimeXtender Data Integration, tables with selection rules would fail when dragged from one data area to another on a Prepare instance that uses Snowflake as storage.
- In TimeXtender Data Integration 6766.1, SAP data sources experienced degraded performance due to the accidental release of a 32-bit version of the TXIntegrationServices component.
- We updated the stored procedures for executing Prepare instances to sort data by 'DW_ODXBatchNumber' for insertion into the valid table during a full load. If 'DW_ODXBatchNumber' is not available, it will default to sorting by [DW_Id] in ascending order.
- The execution of execution packages would sometimes fail with the error "terminated unexpectedly". To solve the issue, we made the access token refresh logic more robust: it now permits refreshes up to 4 hours before expiration, incorporates retries for failed attempts, and includes an automatic refresh when the execution service restarts.
- The Execution Service would ignore proxy settings when executing packages, which could result in misleading error descriptions for the end user.
- The TimeXtender REST data source provider now handles empty property names, property names that start or end with a colon, and property names with more than one colon.
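The custom null values feature for CSV data sources described above can be illustrated with a minimal sketch. The parsing logic here is hypothetical, not the provider's implementation; the idea is simply that tokens listed in the "Null Values" field are read back as null.

```python
import csv
import io

# Minimal illustration of the "Null Values" idea described above:
# user-defined tokens such as "N/A" and "-" are read back as null (None).
# This is not the CSV provider's actual implementation.
def read_csv_with_nulls(text, null_values):
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append({k: (None if v in null_values else v) for k, v in row.items()})
    return rows

data = "Id,Region,Amount\n1,North,100\n2,N/A,-\n"
rows = read_csv_with_nulls(data, null_values={"N/A", "-"})
print(rows[1])  # {'Id': '2', 'Region': None, 'Amount': None}
```

Treating these placeholder tokens as null at ingest time means downstream type detection and aggregations see real missing values instead of strings like "N/A".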

Related products: TimeXtender Data Integration

Exmon Release 24.3

It’s our pleasure to announce the release 24.3 of TimeXtender Orchestration & Data Quality and TimeXtender Master Data Management, featuring exciting new updates and enhancements. Summary This release brings several enhancements to boost usability, flexibility, and user experience across TimeXtender's platforms. Key highlights include improved integration with TDI, new capabilities for optimizing cloud resources, better time zone management, and easier access to previous versions. We have also updated product naming and expanded database permissions for users. These updates demonstrate our commitment to meeting customer needs and delivering a more seamless, intuitive experience. General Access to Previous Versions Users can now easily download executable versions of O&DQ and MDM through the provided links to TimeXtender's SharePoint. These versions don't require installation, allowing for seamless switching between different versions as needed. No special permissions are required to access the links, and further details are available in this support article. Product Renaming We are excited to announce that we are in the process of renaming our products as part of our ongoing efforts to better align our offerings with the needs of our customers. As part of this transition, you may notice some changes in product names, labels, and documentation across our platform. Please be aware that this renaming is an ongoing process, and while some updates have already been implemented, others will roll out over the coming months. During this time, both old and new product names might appear in certain areas. We appreciate your understanding and patience as we work towards a smoother and more consistent experience. 
Data Orchestration and Data Quality (DG) TimeXtender Execution Overhaul With the latest release of TimeXtender Data Integration and TimeXtender Orchestrator and Data Quality(DG), the Orchestrator can now connect directly to Data Integration tasks and execution packages. This update reduces execution overhead, enhances control over execution order and parallelism, and provides a more visually detailed status of each task and execution package. We’ve also added a Sync button in the TimeXtender Data Provider that auto-generates Data Integration tasks and execution packages, along with a default process and process map for each Data Integration environment. This feature reduces repetitive setup across both Data Integration and Orchestrator, allowing you to create tasks in Data Integration and simply Sync in the Orchestrator to generate everything you’ve added. Cloud Optimization package The option to create a package to scale, switch on or switch off Azure resources as required before and/or after processes that rely on these resources are run, has been added to TimeXtender Data Orchestration. In order to use this package type an Azure data provider in Data Orchestration with sufficient Azure privileges to update these resources is required. Read more about Cloud Optimization packages in TimeXtender Data Orchestration here. Updated Folder Structure for Improved Clarity As part of our ongoing efforts to improve usability, we have reorganized the folder structure within our system. To make it clearer which functionalities belong to specific modules, we have created two new parent folders: Orchestration Data Quality All existing folders have been moved into these parent folders based on their respective functionality. This updated structure will make it easier for users to navigate and find relevant features under the appropriate modules. Please note that while the structure has changed, all existing functionalities remain intact. 
Supporting Time Zones We are excited to announce that users can now change the time zone for their service in Data Orchestration and Data Quality (DG). This update ensures that schedules will run according to the user's selected time zone, making it easier to configure schedules based on their specific location and environment. This feature also accounts for daylight saving time, ensuring that schedules always run at the correct time, without being shifted an hour forward or backward during certain months of the year. Read more about this new feature here. Azure Functions Data Provider Azure Function packages now use Data Provider for authentication. Additionally, the user interface has been updated to ensure consistency in appearance and functionality with other packages. Master Data Management (DM) Expanded Permissions Options for Database Users We have made an update to our database user management capabilities. Previously, users could create a database user and assign various permissions. We are pleased to announce that we have expanded the selection of permissions available. The following permissions have now been added to the selection: View Database State Create Schema References Bug fixes and smaller improvements Turnkey Quick navigation to rules or datasets directly from exception details. Exception overview now updates URL parameters with filter and search changes, enabling easier linking with specific filters. Gateway Data Providers now supported in Turnkey. "Report Error" button in Turnkey now automatically creates a ticket in the ticketing system. Enhanced email preview functionality. Rule filter preview improvements: hidden columns are now indicated with an icon in the header. Better mapping of rule owners. CC users are now correctly mapped in email notifications. Improved dataset preview functionality. Enhanced error handling for Data Providers creation. Improved overview and management of Data Providers. 
- Browser tab titles now dynamically reflect the current page.

Data Orchestration and Data Quality (DG)

- Email notifications were empty when a query was moved to another system or duplicated.
- Updated icons.
- TimeXtender packages could be created with the wrong setup. The UI was also updated.
- A folder could not be deleted.
- The Excel attachment is now only created when configured in the Quality & Process configuration.
- Added an error message for PowerShell packages.
- Email notifications were missing from Email Preview.
- The Azure PowerShell package failed in an on-prem setup.
- A process could not be deleted.
- Redis cache logic enhanced to prevent errors in compare queries.
- Moving object groups between folders was not possible.
- Duplicating a process with execution steps resulted in execution steps not working correctly.
- The limited TimeXtender version did not have user groups visible.
- Package parameters extended.
- A query snippet could not be deleted.
- Execution of Fabric packages failed on PROD.
- The limited TimeXtender version showed objects on process maps that are not part of the light version.
- The UI for Data Providers in Gateway was incorrect.

Master Data Management (DM)

- Flat view in hierarchy was not working as expected.
- Disabled users were able to log in to the portal.
- Fixed the format of number columns in the single row editor.
- Error when a user's role was changed.
- Issues when adding new users.
- Changing a field to an Item List sometimes resulted in an error.
- Renaming a table using hierarchy resulted in lost data.
- Added information for users regarding the limitation of changing the data type of an Item List.
- Error when adding a new user or new user group.
- Error when unchecking/checking a filter in the desktop client.

How to upgrade?

In Master Data Management, users on versions 22.5 and above can complete the upgrade themselves through the Desktop client. See the Guide. Contact help@exmon.com to upgrade from Exmon 22.4 or below.
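The daylight-saving behavior described under "Supporting Time Zones" above can be illustrated with Python's zoneinfo module: a schedule pinned to a wall-clock time in a named zone maps to different UTC instants before and after a DST transition, so the job always fires at the same local time. This is an illustrative sketch only, not TimeXtender's implementation.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

CPH = ZoneInfo("Europe/Copenhagen")

def run_at_utc(year, month, day, hour):
    """UTC instant for a schedule pinned to a wall-clock hour in Copenhagen."""
    local = datetime(year, month, day, hour, tzinfo=CPH)
    return local.astimezone(ZoneInfo("UTC"))

# Winter (CET, UTC+1): an 08:00 local job fires at 07:00 UTC.
winter = run_at_utc(2025, 1, 15, 8)
# Summer (CEST, UTC+2): the same 08:00 local job fires at 06:00 UTC.
summer = run_at_utc(2025, 7, 15, 8)

print(winter.hour, summer.hour)  # 7 6
```

A scheduler that stored only the UTC hour would drift by one hour across the DST boundary; storing the zone and wall-clock time, as above, keeps the local run time fixed.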

Related products: Exmon

TimeXtender Data Integration 6766.1

Today, we’ve published a hotfix release of TimeXtender Data Integration and TimeXtender Ingest Service (v. 6766.1) that contains the changes listed below.

Fixed

- Replaced the lock with a mutex to prevent thread conflicts in the workspace.
- Fixed a syntax error in the generated Snowflake script caused by an incorrectly placed semicolon in the data cleansing procedure.
- Fixed an issue where the max degree of delivery instance parallelism reset to 1 after reloading the execution service configuration.
- Resolved an issue preventing new fields from being added to a DW table with existing mappings.
- Fixed migration errors between Prepare instances due to missing extended properties.
- Updated the fiscal week calculation to fix a month comparison issue.
- Fixed a loading issue with the Custom View Tracing Object when views were split across multiple data warehouses.
- Updated the instance list order in the Ingest Service Configuration Tool.
- Fixed an issue in Synchronize with Ingest Instance where fields weren't auto-selected if field names had changed.
- Fixed a UI issue where the persist view info box was visible by default and the icon was misaligned when resizing the dialog.
- Resolved an issue where the parameter rename dialog in the custom view script was partially obscured by Windows scaling.
- Resolved an issue where job statuses failed to update after task completion by implementing retry logic.
- Fixed an issue where the dynamic role-level security table was not included in "end-to-end" Dynamic Perspectives.
- Fixed an issue with a missing command timeout for Dynamic Security queries against the "Deliver" storage.
- Fixed an issue when adding a private data source (ADO.NET) where an error saying the assembly file was not found was thrown.
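Retry logic of the kind used for the job-status fix above is a common pattern for transient failures. The following is a generic sketch under assumed conditions; the function names are hypothetical and are not TimeXtender's actual code.

```python
import time

def with_retry(fn, attempts=3, delay=0.01, retry_on=(ConnectionError,)):
    """Call fn, retrying on transient errors with a short linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == attempts:
                raise  # out of attempts: surface the error to the caller
            time.sleep(delay * attempt)

# Simulate a status update that fails twice before succeeding.
calls = {"n": 0}
def update_job_status():  # hypothetical stand-in for a status update call
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "Completed"

result = with_retry(update_job_status)
print(result)  # Completed
```

Bounding the number of attempts and re-raising on the final failure keeps retries from masking genuine outages.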

Related products: TimeXtender Data Integration

TimeXtender Data Integration 6744.1

Today, we’ve published a new release of TimeXtender (v. 6744.1) that contains the changes listed below.

New

- Product renaming:
  - TimeXtender Desktop is now known as TimeXtender Data Integration (TDI).
  - TimeXtender ODX Service is now known as TimeXtender Ingest Service (TIS).
  - ODX is now known as Ingest.
  - MDW is now known as Prepare.
  - SSL is now known as Deliver.
- String aggregation as an option for aggregation tables. The output is separated by commas and ordered by the content of the column.
- Added persist view functionality, which persists a view as a table.
- Introduced a filter for available data providers to improve data source selection.
- Added a feature that allows users to clone instances for more efficient management.
- Added hidden fields support in data source connection forms.
- In the Execution Service Configuration tool, it's now possible to set how many parallel executions of Deliver instances are allowed at a time. The default is 1.

Improved

- The TimeXtender database cleanup tool can now run without requiring a data area to exist.
- The solution explorer in the TDI client now remembers whether a node is collapsed when refreshing the solution tree.
- Changed the error message for opening incompatible instances.
- The Environments page has been renamed the 'Migrations' page, at /migrations, and all non-transfer-related functionality has been removed.
- The Instances page has been merged with the Environments page, now accessible at /instances, featuring drag-and-drop functionality for moving instances between environments.
- A new design has been implemented for data tables.
- Improved the loading speed of the organization table.
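The string-aggregation option in the New list above behaves like an ordered, comma-separated group concatenation. A plain-Python sketch of the semantics (illustrative only; in the storage engine this corresponds to SQL aggregation such as STRING_AGG):

```python
from collections import defaultdict

def string_agg(rows, key, value):
    """Group rows by `key`, then join each group's `value`s with commas,
    ordered by the content of the value column."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[value])
    return {k: ",".join(sorted(v)) for k, v in groups.items()}

rows = [
    {"region": "EU", "city": "Oslo"},
    {"region": "EU", "city": "Aarhus"},
    {"region": "US", "city": "Bellevue"},
]
agg = string_agg(rows, "region", "city")
print(agg["EU"])  # Aarhus,Oslo
```

Ordering by the aggregated column's content, as described in the release note, makes the output deterministic regardless of the order in which the rows arrive.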
Fixed

- Double-clicking a view field in data lineage would not select the view field node.
- Shutting down TDI (TimeXtender client) with a step in the execution queue would ask if you wished to exit, but the exit would happen if you clicked No instead of Yes.
- Resizing the column width in the TIS (ODX) incremental load dialog was disabled.
- Dragging a table node from the semantic source database browser to the semantic tables node did not work.
- Dragging a field node from the semantic source database browser to a semantic table node did not work.
- The context menu for editing an environment was shown on the default environment in the TDI (TimeXtender client).
- The buttons for showing schemas in the TIS (ODX) ADO and OLE DB data source advanced settings were hidden.
- Null reference error when validating a Deliver (SSL) Qlik endpoint of the type QlikView while using Snowflake as the source Prepare (MDW) instance.
- Wrong label for the table distribution setting for Dedicated SQL Pool.
- Synchronizing the Prepare (MDW) instance with an Ingest (ODX) instance would show tables as missing when the data source had "data on demand" enabled and no execution tasks were added on the data source.
- Searching for tables for the mapping sets in the Prepare (MDW) instance would not find tables where the data source had "data on demand" enabled and no execution tasks were added on the data source.
- Listing constraint suggestions on the Prepare (MDW) instance would not find tables where the data source had "data on demand" enabled and no execution tasks were added on the data source.
- Adding new view fields to a standard view was blocked after selecting a field in the overview pane.
- Some Ingest (ODX) execution logs sometimes went missing due to a timing issue in the Ingest (ODX) execution logic.
- Potential null reference error when two users concurrently committed a change on an Ingest (ODX) data source or task.
- Some Semantic dialogs could only be resized horizontally.
- Tabular security DAX was incorrect when there was more than one setup.
- Moving Ingest (ODX) data type override rules up and down sometimes caused an index-out-of-bounds exception.
- A circular reference in data lineage with history tables caused the data lineage to run forever.
- The label in the warning and error report dialogs showed database instead of data area.
- When adding a private data source (ADO.NET), an error saying the assembly file was not found was thrown.
- When right-clicking a table with a mapping type of Integration Table, the client sometimes crashed.
- Updated the layout of the Deliver (SSL) Instance Role Dialog.
- Fixed an issue where the TIS (ODX service) would get stuck in the 'Starting' state if the authentication token could not be renewed.
- Addressed alignment issues with dropdowns and the search icon in multi-select filters.
- Corrected the incorrect tooltip displayed for the delete instance button.
- Resolved an issue where the hand cursor appeared on mouse-over of an instance on the Environment page, but clicking did not trigger any action.
- Fixed an issue where it was not possible to save a Prepare (MDW) instance after previously experiencing an error.
- Fixed a bug that caused the customer table to not refresh after deleting a customer.
- Resolved an issue that made it impossible to delete some data source connections because of hidden double spaces in the name.
- Fixed an issue where important setup properties were accidentally overwritten during updates, and ensured that the connection string is securely encrypted when edited.
- Fixed an issue to ensure that repository information and related secrets stay properly aligned and consistent.
- Resolved an issue that caused organization creation to fail due to incorrect data handling.
- Ensured that comments are now properly added to the activity log when removing a company, and removed duplicate comments that appeared when deleting an organization.

Related products: TimeXtender Data Integration, TimeXtender Portal

XPilot v1.0

We are excited to announce the latest version of XPilot, packed with significant improvements and new features to enhance your experience. Here are the highlights:

- 5x Faster Responses: With a new and improved index, XPilot now delivers responses five times faster, ensuring you get the information you need without delay.
- 10x More Intelligent Responses: Powered by GPT-4o, XPilot's responses are now ten times more intelligent. Additionally, XPilot can now remember previous interactions, providing more contextually relevant answers. A response not quite hitting the mark? Ask XPilot to clarify or provide a correction.
- More Knowledgeable: XPilot is now more knowledgeable than ever, incorporating all the latest knowledge base articles and Exmon user guides. And now even YOU can help make XPilot smarter, as it's also trained on "answered" user community questions.
- Improved Usability: The user interface has been completely rebuilt from the ground up, significantly improving usability and making it easier to navigate and find the information you need. Responses now include images and GIFs when needed, making it easier to understand complex information. This visual enhancement aims to provide a more intuitive and engaging user experience.
- Code Generation: XPilot can now accurately generate SQL and DAX code, making it a valuable tool for users who need precise and efficient coding assistance.

Try it out for yourself at https://xpilot.timextender.com/

Related products:TimeXtender Portal

TimeXtender Desktop 6691.1

Hotfix release of TimeXtender Desktop (6691.1) that contains the changes listed below.

Fixed

Desktop

- 21596: Can't deploy DWH with service principal authentication. Fixed an issue where service principal authentication was failing when deploying a SQL data warehouse.
- 21604: SaaS Data Selection Rules - source vs destination fields. Fixed an issue where the data selection rule pane showed destination field names instead of source field names in table mappings.
- 21641: Implement 'WITH RECOMPILE' in Direct Read procedures. All data movement procedures for moving data between data areas are now created with 'WITH RECOMPILE'. This will be picked up by the differential deployment.
- 21673: Issue when connecting to MDW SQL storage when it is an Azure SQL Database. This has now been fixed.
- 21679: Duplicated advanced features on SQL/Azure DWH. Fixed an issue where some advanced DWH features were listed twice.
- 21706: SSL instances upgraded from 1.0.0.0 to 2.0.0.0 are missing extended properties on one table. Fixed an issue in the SSL repository where semantic notifications were not copied when using "Copy instance" in the portal. This resulted in the SSL not being able to open if one or more semantic execution packages referenced semantic notifications.
- 21650: Integrate Existing Objects wizard. Fixed an issue that caused the wizard to fail when the MDW used a case-sensitive collation. Fixed an issue where multiple objects with the same name in different schemas would cause errors when running the wizard. Fixed a missing command timeout for the MDW when running queries.

Related products: TimeXtender Data Integration

Exmon Release 24.2

We are excited to bring you the Exmon 24.2 release with new features and improvements.

Summary

This release introduces unified Auth0 login for Exmon and TimeXtender, enhancing user convenience. We've also added custom webhook notifications for seamless data integration and resolved various bugs across Data Quality and Master Data Management. In this article, you will read about:

- Unified Login across all Products
- Turnkey & Data Quality (DG)
- Bug fixes and smaller improvements

Unified Login across all Products

We are excited to announce that Exmon and TimeXtender now use the same authentication method via Auth0. This means users can use their TimeXtender passwords for all new versions of Exmon. This update simplifies the login process and enhances user convenience across our products. Highlights:

- Unified Authentication: Use the same Auth0 credentials for both Exmon and TimeXtender.
- Enhanced User Experience: Streamlined access to multiple products with consistent login credentials.
- Improved Security: Consistent authentication methods across platforms.

Turnkey & Data Quality (DG)

Custom Webhook Notifications

We are excited to introduce a new feature that allows users to create data in external systems through webhooks whenever exceptions occur in our platform. This enhancement provides more flexibility and automation for integrating our system with other tools and services. Key benefits include:

- Seamless Data Transfer: Automatically send exception data to external applications in real time.
- Custom Integrations: Easily connect with various third-party tools and systems that support webhooks.
- Improved Workflow Automation: Reduce manual intervention by automating responses to exceptions and streamlining processes.

For detailed instructions on how to configure Zapier webhooks, please refer to our guide. For instructions on how to configure webhook notifications for Data Quality, please refer to our documentation.
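A webhook notification of this kind is typically an HTTP POST carrying a JSON body that describes the exception. The sketch below builds such a request with the standard library; the endpoint URL, event name, and payload fields are hypothetical examples, not Exmon's actual schema.

```python
import json
from urllib.request import Request

def build_exception_webhook(url, rule, row_count):
    """Build (but do not send) an HTTP POST request with exception details as JSON."""
    payload = {
        "event": "exception.created",   # hypothetical event name
        "rule": rule,                   # name of the rule that raised exceptions
        "exception_rows": row_count,    # how many rows failed the rule
    }
    body = json.dumps(payload).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = build_exception_webhook("https://hooks.example.com/exmon",
                              "Missing VAT number", 12)
print(req.get_method())  # POST
```

The receiving system (Zapier, a ticketing tool, or any service that accepts webhooks) parses the JSON body and reacts to the exception, which is what enables the workflow automation described above.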
For more details on the updated user interface in Turnkey, please visit here. We hope this new feature enhances your experience and look forward to your feedback.

Bug fixes and smaller improvements

Data Quality and Orchestration (DG)

- Users can no longer select workspaces as Fabric notebook execution items.
- The tab name for user groups now displays the correct type of user/user group.
- Emails generated from test groups/queries/compare queries now inform the user about maximum row settings in the email's Excel attachment.
- When packages are deleted, they are now removed from all processes and object groups where they existed.
- Reports created through a Gateway data provider can now be displayed in the web portal.
- Support links have been updated to support.timextender.com.
- The user interface for creating an email notification has been slightly updated.
- The error message has been clarified when a user selects an invalid data provider for TX packages.
- Exception email columns now follow all column settings configured for the test.

Master Data Management (DM)

- A user who didn't exist in Exmon Master Data Management but was a member of an AD group in DM could not log in to DM Web and was not created automatically in DM Desktop.
- Icons/logos updated in a few places.
- Database user permissions are visible to the user in settings.
- When selecting a DG task for an action in DM, tasks of different types are no longer confused when they have the same ID.
- A user can now be successfully added to a newly created manual group.
- Support links have been updated to support.timextender.com.
- Actions can execute tasks that belong to the new database schema.
- Creating users with emails that did not exist before now works.
- Saving AAD groups without Entra IDs displays the correct error message.
- Lookup fields using custom display values where data can be empty are now always displayed correctly.
- The lookup dropdown on the web can now be successfully opened with custom value settings.
- Using a special time format, users had issues opening details about users. This has now been fixed.
- Values are now aligned to the left in dropdown columns.
- Copied text adjusted in "How to connect?"
- The user interface in user groups has been adjusted.
- Adding a user to a newly created service now works as expected.
- "Recently used" on the home page has been fixed for individual users.

Turnkey

- Download links for the Gateway and Desktop clients have been fixed.
- Validation has been added for the data provider name.
- The revert button for rules that have been published has been fixed.
- Column settings for datasets and rules have been fixed so that switching between tabs will not lose the user's settings.
- Users can now create rules in the automatically created workspace in a new environment.
- When rules are deleted from Turnkey, they are now also removed from the object explorer in DG.
- The percentage format is now shown in the execution preview for rules.

How to upgrade?

In Master Data Management, users on versions 22.5 and above can complete the upgrade themselves through the Desktop client. See the Guide. Contact help@exmon.com to upgrade from Exmon 22.4 or below.

Related products: Exmon

TimeXtender Desktop 6675.2

Today, we’ve published a new release of TimeXtender Desktop with the following changes:

New

Redesigned TimeXtender Portal UI with new layout, colors, and dark mode

We've remodeled the Portal and given it a fresh coat of paint to enhance both the look and the user experience. The new design features a collapsible left-side menu for the features related to the data flow, while user account settings, support, and admin features live in the revamped top menu. In addition, the new colors give the Portal a fresh and modern look, and on top of that, we've added a dark mode for those who prefer to turn down the light a bit. The new colors are complemented by new, lighter icons and a more readable font. In our quest for greater consistency across the suite, Exmon Turnkey has been updated to use the same colors, font, and icons as the Portal.

Shared login for TimeXtender and Exmon

You can now use the same login for TimeXtender and Exmon (web and the desktop DG and DM products). Less hassle, and one less password to remember! However, we haven't centralized company accounts just yet, so if you're not using Exmon already, you'll still need to have an Exmon account created for you. The same applies, of course, if you're using Exmon but not TimeXtender.

Keep destination settings when you transfer an instance

You can now choose whether to override security, roles, and notifications in the destination instance when you transfer an instance in Environments. The first time you transfer between two instances, you must override the destination settings, but on subsequent transfers you decide. Previously, these settings would always be overridden.

Map endpoints when transferring a semantic model

Related to the improvement above, you can now map semantic endpoints when transferring one semantic model instance to another. The endpoints must be of the same type.
Previously, the endpoints in the destination instance would have been overridden.

Integrate existing data warehouses in TimeXtender

With the new Integrate Existing Objects feature, you can easily use data from your old data warehouse even before you've converted it to a TimeXtender data warehouse - or if converting the old data warehouse isn't feasible. Any non-TimeXtender table that happens to be in your data warehouse storage can be integrated into the TimeXtender data warehouse instance. If you're using Xpert BI (acquired by TimeXtender in 2023), you can import additional metadata for the tables in the form of descriptions and tags.

New data source providers for Excel and CSV files

With the new native data source providers, getting data out of Excel and CSV files just got a lot easier.

Improved

- Firewall rules can now be configured on the aptly named Firewall Rules page under Data Estate instead of on the individual instance's details page. This makes it easier to get an overview of firewall rules across all instances.
- You no longer need to run the ODX Service Configuration tool on the destination server after transferring an ODX instance under Environments. Instead, you simply need to restart the ODX service.
- Listing instances in TimeXtender Desktop is now a lot faster.
- Service requests from user-installed software now include custom headers to ease support cases.
- When you're using Snowflake as data warehouse storage, aggregate tables, table inserts, and custom table inserts are now supported.
- When you're using Snowflake as data warehouse storage, deployment is significantly faster.
- You can now use Windows, Entra Password, Entra Integrated, and Entra Service Principal authentication for ODX SQL storage in addition to the existing SQL Server authentication.
- You can now use Entra Service Principal authentication for data warehouse SQL storage connections.
- Added strict encryption support for ODX and data warehouse SQL storage (SQL Server 2022 and Azure SQL Database).

Fixed

Portal

- Optimized environment page load times.
- Optimized customer table load times.

Desktop

- Jobs that were not completed did not set their state to 'Failed' after a restart.
- Fixed an issue where a Fabric workspace name containing spaces would make the ODX Fabric Lakehouse unusable.
- On an ODX, adding an incremental rule with updates and deletes to an empty table resulted in an error.
- Fixed a performance issue with the CSV semantic endpoint for models that contained tables with many rows.
- Parameters would be removed from custom views created using drag-and-drop between two data areas.
- In the Performance Recommendations window, the info icons were not properly aligned.
- In the Selection Rules pane on mapping tables, some fields, including conditional lookup fields and system fields, would be missing for tables from another data area.
- Fixed an issue where dragging tables from a 'TimeXtender Dynamics 365 Business Central - SQL Server' or 'TimeXtender Dynamics 365 Finance - SQL Server' data source into the ODX's query areas would have no effect.

Related products: TimeXtender Data Integration