20.10.1 - Initial Release
New and Improved Features
- Data Lake-optimized ODX storage implementation including file format change to Parquet
- Automatic incremental load from the ODX into a data warehouse
- Alerts and e-mail notifications on critical errors in the ODX service
- New ODX tab where you can browse the data storage, see information on storage on the table level and select tables for the data warehouse
- Data source explorer with functionality for confirming that selection rules, incremental load rules etc. work as expected
- Independent synchronize and transfer tasks to make task schedules more transparent
- Project lock to prevent multiple ODX servers from using the same ODX project
- Safe shutdown of ODX server to make upgrading easier
- Improved logging UI and less logging of redundant information
- Ability to connect to initializing ODX server from the TimeXtender application
Fixed in 20.10.1
- 7021: Unable to remove pending tasks in ODX Execution Queue
- 7821: SQL Server Logins results in error on Azure SQL DB
- 8403: SSL: Format string defaulting to Invariant Language
- 8575: Project variables object null reference when edited without opening the script editor
Fixed in 20.10.2
- 8765: Cannot deploy primary key delete SSIS package when the data source is bit specific
- 8767: 32/64 bit execution engine stalls on deployment and execution when failing to initialize communication
Fixed in 20.10.3
- 8811: Data Source Excel leaves out columns with type LongVarWChar - text columns with more than 255 characters
- 8863: Error generating Super Natural Key on SQL Data Warehouse
- 8889: Oracle slow synchronization - affects all bit specific data sources
- 8892: TimeXtender memory leak - an issue in the data source explorer
- 8925: SQL Data Warehouse has slow performance with identity insert, when using insert into a table - Data Cleansing
Fixed in ODX 20.10.3
- 7884: "Cloud Repository is corrupt" is reported if the ODX is unable to reach the cloud repository
- 8850: "Specified cast is not valid" exception is thrown when the backlog was successfully created but the firewall blocked the request to add a project
- 8907: Azure Data Factory transfer to Data Lake can give an error: "Token has expired"
- 8911: Incremental load on decimal datatype is not working
Fixed in 20.10.4
- 8980: Qlik Sense terminate execution issue
Fixed in ODX 20.10.4
- 8551: Oracle fails when using date fields as incremental load in ODX on OLE DB version
- 8907: ADF to ADLG2 token expiry issue
- 8959: Parquet and datetime2 issue
Fixed in 20.10.5
- 9017: Execution package with retries can return "Broken pipe" error
Fixed in 20.10.6
- 9039: Issue with execution package with retries
Fixed in ODX 20.10.6
- 9033: ODX Parquet transfer can use all memory on large transfers
Fixed in ODX 20.10.7
- 9118: ODX upgrade issue
Fixed in TimeXtender 20.10.8 and ODX 20.10.8
- 9115: Table transfer from ODX to DWH fails when ODX table contains a DWH system field name
Fixed in 20.10.9
- 8659: Error when opening the errors menu when a table contains Geography data
- 8719: ODX tab closes on F5 (refresh)
- 9238: Having a Tag on a field mapped to the ODX causes an error when the ODX synchronizes
Fixed in ODX 20.10.9
- 9259: Issue with setting process affinity for more than 16 cores
- 8266: ODX Config allows you to enter project names of invalid length
Fixed in ODX 20.10.10
- 9274: Azure access tokens are sometimes not refreshed and expire after an hour causing transfers to fail
Fixed in 20.10.11
- 9289: Database cleanup recognizes Semantic Security Table after schema change
Fixed in ODX 20.10.11
- 9287: Azure access token timeout when transferring from Lake to DWH
- 9337: ADF SQL source incremental load value loses precision for datetime2 where precision is above 3
- 9344: Incremental load loses precision for datetime data type when the source is an ADF type
Fixed in 20.10.12
- Numerous issues with windows and UI elements that did not scale correctly with display scaling set to more than 100% have been fixed. The application should now be fully DPI aware and usable on modern systems that default to a higher display scaling factor.
- 9228: Increase TIMEXTENDERTABLESCHEMA.FIELDHELPTXT to nvarchar(4000)
- 9428: NAV BC365 wrong conversion of DATE data type
- 9453: NAV query table - data type varbinary becomes unknown data type
- 9481: TimeXtender crash if you rename an execution package to an existing name
Fixed in ODX 20.10.12
- 8969: ODX SAP DeltaQ delta load
- 9291: ODX gets unresponsive when there are a lot of execution logs
- 9457: ODX - temp folder is used for generating file names and will eventually get filled up
Fixed in 20.10.13
- 9001: Application secret is now obscured in the Global Database settings of the DWH
The application secret in the user dialog for the Global Database setting for the DWH was shown in clear text
- 9538: SQL DWH - Conditional lookup is now correctly cast to the destination data type
The Data Cleansing script for MDW tables on Analytics SQL Pool was missing an explicit data type cast on conditional lookup fields when the Lookup Aggregate option was set to None. This matters because of the CTAS pattern and table switching employed on this specific platform.
- 9550: Updated logic to support table inserts from views on Analytics SQL Pool
When inserting data into a table from a view, databases on the Analytics SQL Pool platform do not support default values. The script has therefore been adjusted to use getutcdate() as the value for [DW_TimeStamp] when the view does not contain such a column.
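The adjusted insert logic can be sketched roughly as follows. This is an illustrative Python sketch, not TimeXtender's actual script generation; the function name, the bracket quoting, and the exact [DW_TimeStamp] handling are assumptions based on the description above:

```python
def build_insert_from_view(table, view, view_columns):
    """Build an INSERT ... SELECT from a view, substituting getutcdate()
    for [DW_TimeStamp] when the view does not expose that column
    (Analytics SQL Pool does not apply column defaults on insert-from-view)."""
    insert_list = list(view_columns)
    select_list = list(view_columns)
    if "DW_TimeStamp" not in view_columns:
        insert_list.append("DW_TimeStamp")
        select_list.append("getutcdate()")  # fallback value for the system field
    cols = ", ".join(f"[{c}]" for c in insert_list)
    vals = ", ".join(c if c.endswith("()") else f"[{c}]" for c in select_list)
    return f"INSERT INTO [{table}] ({cols}) SELECT {vals} FROM [{view}]"
```

When the view already exposes [DW_TimeStamp], the generated statement simply selects it through unchanged.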
- 9539: SQL DWH - Grouped None aggregated conditional lookups would look up the first lookup field for all lookups
The Data Cleansing script for MDW tables on Analytics SQL Pool with multiple lookup fields would only look up the first field when the Lookup Aggregate option was set to None.
- 9575: Supernatural keys based on transformed values now work on SQL DWH
The Data Cleansing script for MDW tables on Analytics SQL Pool has been adjusted to apply custom transformations before applying supernatural keys. Previously, the script would result in an empty insertion.
- 9596: Custom Semantic Measure dialog can now be resized and maximized
The dialog could not be resized and did not have a maximize option
Fixed in ODX 20.10.13
- 8778: Improved messages at empty ODX Data Source Sync
The system messages shown when setting up data sources and synchronization have been reformulated to help users take appropriate action when synchronization returns empty results. They now include a hint that the filters on the data source could be too restrictive.
- 9033: Added an option to reduce memory consumption when transferring data from parquet files to SQL MDW through the ODX server
Memory consumption is limited by subdividing the parquet extraction into multiple column groups.
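The column-group approach can be illustrated with a minimal Python sketch; the group size and the reader/loader calls in the comment are hypothetical, not the actual ODX internals:

```python
def column_groups(columns, group_size):
    """Split a parquet file's column list into fixed-size groups so that
    each pass over the file only materializes a subset of the columns,
    bounding peak memory at roughly group_size columns at a time."""
    return [columns[i:i + group_size] for i in range(0, len(columns), group_size)]

# A transfer would then process one group per pass, e.g.:
# for group in column_groups(all_columns, 10):
#     rows = read_parquet_columns(path, columns=group)  # hypothetical reader
#     bulk_insert(target_table, rows)                   # hypothetical loader
```

Parquet's columnar layout makes this cheap: each column chunk can be read independently, so only the active group needs to be held in memory.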
- 9291: The ODX can now handle a lot of execution logs without becoming unresponsive
The dialog that informs the user about excessive log messages was being updated on the wrong thread.
- 9372: More robust error handling to fix issue with unsuccessful transfers from ODX to DW
The ODX failure handling is now more robust in cases where a data source transfer completes unsuccessfully, leaving the Model.json file invalid or missing. The new routine will try to reestablish the old file or seek out the last working version of the data source before the failed transfer.
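A simplified version of such a fallback routine could look like this in Python. The function names and the candidate-file handling are assumptions for illustration; the actual recovery logic is internal to the ODX server:

```python
import json
import os
import shutil

def is_valid_model(path):
    """A model file is usable if it exists and parses as JSON."""
    if not os.path.exists(path):
        return False
    try:
        with open(path) as f:
            json.load(f)
        return True
    except ValueError:
        return False

def restore_model(model_path, candidates):
    """If Model.json is missing or invalid after a failed transfer, restore it
    from the first valid candidate (newest first). Returns the path that was
    used, or None if nothing could be restored."""
    if is_valid_model(model_path):
        return model_path
    for candidate in candidates:
        if is_valid_model(candidate):
            shutil.copyfile(candidate, model_path)
            return candidate
    return None
```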
- 9556: Improved incremental load on ODX to work with string data type
The ODX-generated extraction script now applies the right MAX criteria to the query when performing an incremental load based on a string data type. Previously, this would generate empty extractions.
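The quoting difference can be sketched as follows. This is an illustrative Python sketch; the predicate shape and the bracket quoting are assumptions, not the actual generated SQL:

```python
def incremental_where(field, last_value, is_string):
    """Build the incremental-load predicate against the stored MAX value.
    String values must be emitted as quoted SQL literals (with embedded
    quotes doubled); comparing against an unquoted string yields no rows."""
    if is_string:
        literal = "'" + str(last_value).replace("'", "''") + "'"
    else:
        literal = str(last_value)
    return f"WHERE [{field}] > {literal}"
```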
- 9561: ADF Date datatype and Synapse transfer error
ODX now identifies the Date type as a native parquet data format and generates the proper table format for PolyBase transfer to the Analytics SQL Pool MDW.
- 9580: ODX can now create a parquet file larger than 2GB
The variable holding the file position of the parquet file during upload was an integer but should have been of type long. As a result, a file larger than 2147483647 bytes would overflow to a negative value and the upload would fail.
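The overflow can be demonstrated with a small Python sketch that emulates a signed 32-bit variable (Python's own integers are arbitrary precision, so the wrap-around is applied manually):

```python
def to_int32(value):
    """Emulate storing a value in a signed 32-bit integer, the original
    type of the parquet file-position variable."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value >= 0x80000000 else value

# One byte past the 2GB boundary wraps to a negative file position,
# which made the upload fail; a 64-bit long holds the value correctly.
# to_int32(2147483647) ->  2147483647 (largest valid position)
# to_int32(2147483648) -> -2147483648 (overflow)
```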
Fixed in 20.10.14
- 8537: Improve display of very large custom transformations etc.
Very large custom transformations are now handled in the user interface to ensure a fast and stable workflow. This is done by limiting the quick tooltips.
- 9442: Faster repository loading through the dialog for administration of repositories
The script for fetching the repositories and matching versions has been optimized for faster retrieval, and the general repository timeout setting is now applied to this command execution as well. This makes even larger version histories easy to load and maintain in the dialog.
- 9456: Optimized the appearance of the Get Started dialog
With certain display settings, the dialog would appear exceptionally large. This has been fixed to ensure a more proportional display.
- 9603: Fully qualified names in Custom measure script in Shared Semantic Access Layer
In some instances, fully qualified names were not displayed in the custom measure scripts even though the option was set. This has been improved to handle these instances and ensure the proper code is generated for the endpoint.
- 9654: Improving data cleansing script generation for custom transformations
Similar column names in lookup tables could generate invalid transformation scripts for databases running on Synapse Analytics SQL Pool. Proper aliasing is now applied to the script so that these situations are also handled safely.
- 9658: Any source OLE DB and Any source ADO - improved adding and editing filters
In some cases, adding and editing filter models on these data sources would not save the changes. This has been improved to safely capture all changes.
- 9659: Removed a programming glitch from Resume Execution feature
Updating the UI produced an error that effectively made resume execution impossible to perform.
- 9691: Drag and drop fields now supported for Data Export tables
You can now drag and drop fields in the selection rules dialog
- 9695: Adjusting dialog input to actual field sizes
Input validation has been adjusted to ensure proper data lengths and eliminate issues due to overflow in some extreme cases.
- 9708: DB/2 data sources (IBM Managed) would produce an error when trying to connect
This has been changed to avoid unnecessary connection attempts that would result in errors.
- 9725: Updating a project variable refreshes displayed custom transformation automatically
When a custom transformation on a table field uses a project variable, a change to the variable's value is now instantly reflected in the user interface for that custom transformation.
- 9726: Allow long running cleanups of old metadata extractions
On large data sources, the application now allows the cleanup process to be performed without timing out and producing an unspecified error.
Fixed in ODX 20.10.14
- 8504: Added extra info to the dialog for Scheduled Tasks
The name of the data source has been added to help the user identify the individual tasks in the dialog. This has become even more relevant now that the requirement that task names be unique has been lifted.
- 9663: ODX supports passthrough of data type Real
Using ADF to populate the data lake now supports the data type Real natively in Parquet files. This makes it possible to move this data type directly into Synapse Analytics SQL Pool via PolyBase for high-throughput cases.