TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
Hi team, I have the following table, and I have to create this transformation on the incidentGroup column: CASE WHEN [incidentTypeID] >= 100 AND [incidentTypeID] <= 199 THEN '1XX-Fire' WHEN [incidentTypeID] >= 200 AND [incidentTypeID] <= 299 THEN '2XX-Overpressure, Explosion, Overheat(no fire)' WHEN [incidentTypeID] >= 300 AND [incidentTypeID] <= 399 THEN '3XX-Rescue & Emergency Medical' WHEN [incidentTypeID] >= 400 AND [incidentTypeID] <= 499 THEN '4XX-Hazardous Condition(No Fire)' WHEN [incidentTypeID] >= 500 AND [incidentTypeID] <= 599 THEN '5XX-Service Call' WHEN [incidentTypeID] >= 600 AND [incidentTypeID] <= 699 THEN '6XX-Good Intent Call' WHEN [incidentTypeID] >= 700 AND [incidentTypeID] <= 799 THEN '7XX-False Alarm & False Call' WHEN [incidentTypeID] >= 800 AND [incidentTypeID] <= 899 THEN '8XX-Severe Weather & Natural' WHEN [incidentTypeID] >= 900 AND [incidentTy
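The excerpt cuts off, but for this kind of banding the expression can often be shortened by deriving the hundreds digit once and mapping it. A minimal sketch, assuming [incidentTypeID] is an integer between 100 and 999; the labels are copied from the question, and the 9XX label is left open because it is cut off above:

CASE [incidentTypeID] / 100
    WHEN 1 THEN '1XX-Fire'
    WHEN 2 THEN '2XX-Overpressure, Explosion, Overheat(no fire)'
    WHEN 3 THEN '3XX-Rescue & Emergency Medical'
    WHEN 4 THEN '4XX-Hazardous Condition(No Fire)'
    WHEN 5 THEN '5XX-Service Call'
    WHEN 6 THEN '6XX-Good Intent Call'
    WHEN 7 THEN '7XX-False Alarm & False Call'
    WHEN 8 THEN '8XX-Severe Weather & Natural'
    WHEN 9 THEN '9XX-...'  -- label truncated in the question
    ELSE 'Unknown'
END

Because the division is integer division, this is equivalent to the range checks above as long as the IDs stay within 100-999.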
At one of my clients we are using the ODX, and they set it up so that for every different type of load they added the data source multiple times. So for the normal full loads there is one source, and for the incremental loads of the same source there is a different source. In the data source selection they include only the tables they need for the full load, and they do the same for the incremental loads. But from time to time a task will not execute, without showing up in the execution log and without an error. I have the feeling that because of the includes you indirectly exclude the other tables, and if they are running at the same time this cancels each other out. In the Windows logs you can see that the package is then skipped, but it's nowhere to be found in TX. Does anyone have an explanation for this and/or an idea how we can fix it? My idea was that maybe putting everything in one source and then creating two different transfer tasks instead of two different sources would fix this problem. Would love to hear from
Hi all, for one of my current clients we are receiving a warning while synchronizing the ODX. The task completes and we can transfer the data, but the warning is a bit odd. We already tried removing the folder from the data lake and doing a full load, but this didn't fix the warning. Does anyone have any ideas?
Hi team, I am beginning work with a new client. I created the environments Development, UAT, and Prod, and afterwards I created the Global Databases DSA and MDW for now. My question is: how do I manage projects on Global Databases? Can I only create one project when using Global Databases, or can I create many? Thanks for your help, Ignacio
We are running an environment on two servers, one DEV and one PRD server. The instances are copied from the DEV to the PRD server, both ODX and MDW. Apparently the jobs are also copied. After refreshing the list I'm adding jobs to run the production ODX. As we can see below, I can choose between both the DEV and PRD ODXs. However, if I select the ODX Just Brands Production / JB NAV / Full Load NAV, it jumps back to ODX Just Brands / JB NAV / Full Load NAV, running the DEV ODX. Why is this?
Our machine runs in UTC+2:00, whilst our Azure databases (by default) run on UTC time. Now when we schedule an ODX transfer task, the interface lets us schedule these tasks in UTC time. The execution logs of a transfer task are also in UTC time. However, schedules for execution packages are in local time, UTC+2:00, and therefore their logs are also in UTC+2:00. I guess my question is: why is this the case? Both the ODX and the execution service are services on the same machine in UTC+2:00. Both the ODX as well as the MDW and the project repo are Azure DBs in UTC. Why would one use the machine time and the other the Azure DB time zone? And is there a way to make them the same other than resetting the machine time to UTC? Version 20.10.37
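Not an answer to why the two services use different clocks, but if the immediate goal is just to compare the two logs, UTC timestamps can be shifted at query time with T-SQL's AT TIME ZONE. A minimal sketch with hypothetical table and column names (not the actual repository schema):

SELECT
    [StartTime] AS StartTimeUtc,  -- stored in UTC
    [StartTime] AT TIME ZONE 'UTC' AT TIME ZONE 'W. Europe Standard Time' AS StartTimeLocal
FROM [dbo].[ExecutionLog];  -- hypothetical log table name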
Hello! I want to experiment a little and check if changing my Max Threads setting can improve execution times. I have one big package which includes multiple other packages as steps. If I change the Max Threads setting in the main package, do the packages from the included steps inherit this setting, or do I have to manually change the setting for each included step? Side question: I have plenty of data for execution times at 4 threads. My plan is to do three runs at 3, 5 and maybe even 6 threads and compare the averages. Is that a good setup? Thanks.
TX: 6284.1. Is there a reason why lineage generated from the Data Source Explorer in the ODX Server includes the downstream objects in a DWH instance, whereas generating lineage from a field in a data area in the DWH instance does not show the ODX part of the lineage?
Is it absolutely necessary to perform a transfer between environments after an upgrade? Dev, QA and Prod are all sitting on their own server. I have upgraded all three to the latest TimeXtender version and all deployed and executed perfectly. Would I still have to do the "Multiple Environment Transfer", or is this not necessary? Upgrade Dev - Deploy, Execute & Validate all is working. Upgrade Test - Deploy, Execute & Validate all is working. Transfer Dev -> Test - Deploy, Execute & Validate Test to ensure all is working. Upgrade Prod - Deploy, Execute & Validate all is working. Transfer Test -> Prod - Deploy, Execute & Validate Prod to ensure all is working.
Hi, I have added a new field "PCD_Prev" to my table and populated this field using the LAG function (custom transformation), like so: LAG([PCD], 1, '0') OVER (PARTITION BY [AdministrationID], [ApplicationID] ORDER BY [AcceptanceID]). Now I try to use this field in a case statement of another field, again using a custom transformation, like so: COALESCE(SUM(CASE WHEN [PCD_Prev] IN ('0', 'ka_aa') THEN 1 WHEN ([PCD] = 'ka_hb' AND [PCD_Prev] = [PCD]) THEN 1 ELSE 0 END) OVER (PARTITION BY [AdministrationID], [ApplicationID] ORDER BY [AcceptanceID] ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), 0). I get an error that I can't use a window frame in another window frame, although I don't do it in the same transformation. Is this standard behavior? Do you have a solution for this? In SSMS I add the new field in the FROM clause, adding it as an additional field to my existing table in the FROM clau
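A likely explanation (an assumption, not confirmed): both custom transformations end up in the same generated SELECT, so [PCD_Prev] is expanded inline and the SUM ... OVER then wraps the LAG ... OVER, which SQL Server rejects. The usual plain T-SQL workaround is to layer the window functions, e.g. materialize PCD_Prev first (in an earlier table or level) or nest the queries. A minimal sketch with a hypothetical table name:

SELECT
    t.*,
    COALESCE(SUM(CASE WHEN t.[PCD_Prev] IN ('0', 'ka_aa') THEN 1
                      WHEN t.[PCD] = 'ka_hb' AND t.[PCD_Prev] = t.[PCD] THEN 1
                      ELSE 0 END)
             OVER (PARTITION BY t.[AdministrationID], t.[ApplicationID]
                   ORDER BY t.[AcceptanceID]
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), 0) AS [PCD_RunningCount]
FROM (
    -- inner query: compute PCD_Prev once, so the outer window function never wraps a LAG
    SELECT [AdministrationID], [ApplicationID], [AcceptanceID], [PCD],
           LAG([PCD], 1, '0') OVER (PARTITION BY [AdministrationID], [ApplicationID]
                                    ORDER BY [AcceptanceID]) AS [PCD_Prev]
    FROM [dbo].[MyTable]  -- hypothetical source table
) AS t;

In TimeXtender terms that would typically mean letting the LAG field land in an earlier table, so the running SUM only ever sees a plain column.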
Hi, we are currently trying to figure out the new generation of TimeXtender, and I'm finding the setup with jobs a bit lacking. Perhaps I've just missed something, but here are a few things that bug me. As far as I can understand, the way to go when scheduling in the DW is to set up your execution packages in the Execution tab. Then we need to set up a job that schedules the execution packages. In my trials I intentionally set up a package to fail, and first of all the monitoring view of jobs does not show any information about the error (jobs monitoring). If I go into the execution log, I can see an error message that essentially just says that the job didn't succeed (execution log for the test job). For debugging that means I have to check the contents of the job (which could potentially have multiple execution packages in it) and then head over to the Execution tab to check the log for the package, where I see all the details (execution log in the Execution tab). I would think it was nice to be able to reac
In this screen, it's not possible to delete a member. The only way is to delete the "RLS Setup" and create it again. It's crazy. Another crazy situation is that if you define a Dynamic RLS Setup, press OK and then open it again to see it, it's lost...
I'm facing this issue: this package is scheduled to run every half an hour. On rare occasions the package executes twice, as shown in the picture to the left (blue rectangle), and the execution log shows both executions as successful. We would appreciate any insight to help us solve this issue. Some things to consider: at the customer's request the package is being run from the Task Scheduler. The execution log does not show any errors, as shown in the image above (green rectangle). This is how the retries are set up: The Task Scheduler showed this warning: We got this email throwing an error. However, the "network-related error" is nowhere to be found, since according to the execution log it ran successfully:
Hi all, I have some customers that are reloading data every x minutes in the ODX Server (v20.10.x). This generates a lot of execution log entries. When I open the execution log in the ODX Server, it takes a long time. I think that removing old execution logs would improve the speed of opening the execution log overview window. Is it possible to delete old execution log entries? Another option could be that the execution log view first asks for the datetime range you would like to see logs for, and then opens the overview.
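Not a confirmed answer, but if old entries can be deleted, it would presumably be a cleanup query against the ODX repository database. A sketch of the idea only, using hypothetical table and column names (check the actual repository schema and take a backup first):

-- Hypothetical sketch: purge execution log entries older than 90 days.
-- [ExecutionLogs] and [CreateTime] are assumed names, not the documented schema.
DELETE FROM [dbo].[ExecutionLogs]
WHERE [CreateTime] < DATEADD(DAY, -90, SYSUTCDATETIME());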
Hi, we are unable to set up notification on critical errors for the ODX Server. Both the service and the database for the ODX are on-prem. The Exchange server is deployed as a hybrid. TimeXtender version 6221.1. This is the error message: Setting up notifications for the executions in the MDWs and SSLs is no problem. No authentication is required by the Exchange server. BR, Anders
Hi all, at a client we have encountered a curious situation with execution e-mail notifications; hopefully you will be able to help us figure it out. There are 5 scheduled execution packages for BUs that run every morning. For the past two days we have been getting e-mail notifications saying these have failed, but when we look in TimeXtender they have in fact executed successfully. Each BU execution throws a different connection error message: ERROR [HY000] [MySQL][ODBC 8.0(w) Driver]Can't connect to MySQL server on 'XXXXXX' (10060). Unable to connect to any of the specified MySQL hosts. The remote server returned an error: (400) Bad Request. Could not execute the specified command: HTTP protocol error. 500 Internal Server Error. ERROR could not connect to server: Connection refused (0x0000274D/10061). I will take the top one as an example to show you how it looks. Attached is the full fail notification e-mail. This package has the following settings: This is what the executi
Hi, I have an SSAS tabular model that is deployed to Azure. The model executes fine most of the time and takes around 3 minutes to complete, but sometimes it takes a long time to finish the execution (it ran for 12 hours and still had not completed, so I killed the process instead). So, is there any way to fail the execution if it takes over 1 hour? I cannot see any timeout setting for SSAS tabular. I am using version 188.8.131.52. Thanks.
We are facing a performance issue while fetching data from D365 F&O using the CData provider. While fetching a data entity with around 700K records it used to take around 1:20 mins; now when we execute it, it runs for 10+ hours with no response. While executing we noticed a SQL session with a bulk insert command from TimeXtender in a suspended state, and on the D365 database there are multiple sessions of the batch process fetching the data based on partition number. Since we witnessed multiple sessions that keep executing, we would like to understand how the batch size is defined, how we can customize the batch, or any other way to set the connection properties that pulls the data faster. Query for reference: FROM Table_XXXXXXX T2 WHERE ((((((T2.PARTITION=5637144576) AND ((T2.PARTITION#2=5637144576) OR (T2.PARTITION#2 IS NULL))) AND (T2.PARTITION#3=5637144576)) AND (T2.PARTITION#4=5637144576)) AND (T2.PARTITION#5=5637144576)) AND (T2.PARTITION#6=5637144576)) )T1 WHERE ((T1.rowNumber>
Hi team, TimeXtender allows adding parameters from a different table to a custom field in a semantic data model (Qlik). The resulting syntax/Qlik script combination is always broken. When adding a custom field parameter from a different table, TimeXtender fully qualifies the Qlik syntax regardless of the settings. The resulting syntax on the Qlik side will no longer match the syntax in the views created by TimeXtender. Qualified setting: Fully qualified setting: The resulting Qlik script: "Sales_Targets": LOAD "KPI", "Target", "DIM_Boekdatum.DayName" AS "Test"; SQL SELECT "KPI", "Target" FROM "Test"."dbo"."Test QVD_SLQV"; But the view has the following syntax: CREATE VIEW [dbo].[Test QVD_SLQV] -- Copyright 2011 timeXtender a/s -- All rights reserved -- This code is made available exclusively as an integral part of timeXtender. You may not make any other use of it and you may not redistribute it without the written permission of timeXtender a/s. AS SELECT [KPI] AS [KPI], [Target] AS [Targ
I'm importing a table which is supposed to have an integer primary key, but the data is messy and we find things like "6TEST3" or "CREDIT3" in this field. I want to read the table and use this field as an integer; however, I can't convert it because of those rows. Is there a way to remove a row if I can't convert a field value to an integer?
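One common T-SQL approach is to filter out (or null out) the rows that don't convert, using TRY_CAST (SQL Server 2012+). A minimal sketch with hypothetical table and column names; in TimeXtender the WHERE condition could live in a data selection rule and the conversion in a custom transformation:

-- Keep only the rows whose key actually converts to an integer.
SELECT TRY_CAST([MessyKey] AS INT) AS [MessyKeyInt], *
FROM [dbo].[SourceTable]                      -- hypothetical names
WHERE TRY_CAST([MessyKey] AS INT) IS NOT NULL;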
In v20.10.43, I've come across a bunch of auto-generated views with a _BCV suffix. Are they used for merged (incremental?) loading in the next layer, or what's the purpose of these views? Why do they need to be redeployed whenever I do a managed deployment? And why does the SQL Clean Up Wizard delete them when they're being used? (I haven't tested if this could happen in a v6284.1 MDW.)
One of my clients has a file data source that arrives at unpredictable times of the day. Has anyone found a good way to externally trigger a load of such a data source when it arrives? E.g. webhooks, Logic Apps, ... The question is for 20.10.43, but I'm equally interested in hearing if there are solutions for v6284.1, as we'll be moving in that direction. Thanks!
Hi all, at a client I'm loading data from a delta lake in Azure via Azure Synapse views. The connection in ODX Server is an Azure Data Factory - SQL Server (10.1.0.0) connection (TX version 20.10.40). All tables have a primary key defined, and each table has an incremental timestamp field. This field is named the same across all tables, so only one rule is needed for all tables. I've created an incremental transfer task and started it to do the initial load of the tables. I've noticed that about 20% of the tables remain empty when loading data to the DSA. For these tables I'm seeing in the ODX data lake that they have the following structure in the table folder: for this table the DATA folder is somehow missing, which explains why no data is loaded to the DSA. The table does contain data in the source. I've tried the following to see if the DATA folder would appear with a data Parquet file in it: executing a full reload for the table → this results in the same structure as in the screenshot with no D