Hello Support,

We are receiving the following error in one of our clients' TX environments, version 184.108.40.206, when attempting to activate using an offline license file:

License code cannot be empty. If you need assistance, please submit this with your ticket.
Details: License code cannot be empty.
Module: timeXtender
System.Exception
  at TimeXtender.DataManager.TXWizardController.DoNext()
  at TimeXtender.DataManager.TXWizardDialog.btnNext_Click(Object sender, EventArgs e)
Time: 2023-09-28 10:45:28
UTC: 2023-09-28 14:45:28
Title: Discovery Hub 220.127.116.11
Application: 18.104.22.168
Repository: 22.214.171.124 (in Azure)
SQL Server: Microsoft SQL Azure (RTM) - 12.0.2000.8 Sep 18 2023 12:22:37 Copyright (C) 2022 Microsoft Corporation
User: freshtxsrv
Domain: NA
OS: Microsoft Windows Server 2019 Datacenter
OS version: Microsoft Windows NT 6.2.9200.0
Machine name: xxxxx013
CPU count: 2
Build: 64 bit
Hi community,

I have a question about TempDB in SQL Server. We have a customer whose TempDB is growing rapidly; the disk has been expanded multiple times. Although it is not a problem to expand the disk again for this customer, I want to understand how we can avoid the rapid TempDB growth. I do not have a DBA background.

The TempDB for this customer is currently 40 GB. All 'heavy' tables are loaded incrementally, and batch cleansing is enabled for these tables. I have some questions:

- I know batch cleansing prevents the logs from growing rapidly, but does it also have an impact on TempDB?
- Are there any other table settings in TimeXtender that can help reduce the TempDB size?
- Does the number of threads in an execution package have an impact on TempDB?
- Are there any best practices for determining the correct/optimal disk size for TempDB?
- Are there any troubleshooting ideas/tools for this?
- Any other suggestions?
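On the troubleshooting question: a common starting point (independent of TimeXtender) is to ask SQL Server itself which sessions are allocating TempDB pages, via the standard DMV sys.dm_db_session_space_usage. A minimal sketch, assuming the pyodbc package and a connection string of your own — the `CONN_STR` value below is a placeholder, not a real server:

```python
# Sketch: see which sessions are consuming TempDB space.
# The DMV query is standard SQL Server; CONN_STR is a placeholder
# you would replace with your own connection details.

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<your-server>;Trusted_Connection=yes"

TEMPDB_USAGE_SQL = """
SELECT session_id,
       SUM(user_objects_alloc_page_count)     * 8 AS user_objects_kb,
       SUM(internal_objects_alloc_page_count) * 8 AS internal_objects_kb
FROM tempdb.sys.dm_db_session_space_usage
GROUP BY session_id
"""

def fetch_usage(conn_str=CONN_STR):
    """Run the DMV query; requires the pyodbc package (assumption)."""
    import pyodbc
    with pyodbc.connect(conn_str) as conn:
        return conn.execute(TEMPDB_USAGE_SQL).fetchall()

def top_consumers(rows, n=5):
    """Rank (session_id, user_kb, internal_kb) rows by total allocated KB."""
    return sorted(rows, key=lambda r: r[1] + r[2], reverse=True)[:n]
```

Running `top_consumers(fetch_usage())` while a heavy execution package is active shows whether the growth comes from user objects (temp tables) or internal objects (sorts, hash joins, version store), which narrows down which table settings or thread counts to experiment with.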
We have multiple TimeXtender projects that use the same databases for their ODX/DSA/MDWs. What is the best way to use tables from one project in another of the projects? "Add External SQL Connection" seems to be an option, but should it still be used if the table we want to connect to is in the same database?
Good morning,

What is the status of the upgrade wizard or tool to migrate TimeXtender to the new version 6xxx? We have been waiting for it for quite a long time now, and the only status update I can find in this community is from 4 months ago, saying it is still not available. Customers are even beginning to consider other data estate solutions because of the silence from your side and the uncertainty about when it becomes available. Can we please have more clarity on the timelines for this tool?
Hello to all the community! 😃

When I make any change to any table and run deploy and execute, at the end of the procedure I find the table with very little data compared to what should be present (for example, out of 90,000 records it loads only 4,100). I have tried checking the various settings, but I don't see anything wrong. Can you give me some advice?

TimeXtender version 126.96.36.199. Thank you in advance.
Hello,

At the moment we are processing our MDW data warehouse to an Azure SQL Database and then copying the data to Snowflake. We want to change this to target Snowflake directly, but only once it is possible to fully deploy to Snowflake (transformations included). I know it is hard to give an exact timeline for when this becomes available, but should I be thinking in terms of a couple of months or a couple of quarters, or is this low priority at the moment?

Looking forward to your response.

Greetings, Roy
After saving a version in TimeXtender (probably two of us saved at the same time by accident), we ran into an "out of bounds index/array" error. Now we can't open the latest version, and when we try to deploy an earlier version we get the "out of bounds index/array" error anyway. The environment is all in Azure, so we have access to the repository database. Would it be possible to simply restore the database to a point in time before the incident? I talked with a colleague and he recommended asking this community for advice.
Hi,

I am using legacy TimeXtender 188.8.131.52 and am attempting to set up email notifications for execution package failures. I have been given the information for the destination (an email load-balancing relay):

IP: 150.xx.xx.1
Port: 25

The destination firewall supposedly has all necessary ports open, although I don't have direct access to check this. When I test the notification, I receive the error "The SMTP server does not support authentication." (see the picture below). The app registration in Azure has the following API permissions (all delegated, with admin consent granted):

Microsoft Graph User.Read
Microsoft Graph Mail.Send
Microsoft Graph SMTP.Send

I haven't been able to find any information about this error message or what could be causing it. Any help or pointers for troubleshooting this further would be appreciated.

Best regards,
Pontus
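One way to narrow this down independently of TimeXtender: that error typically appears when the server's EHLO response does not advertise the AUTH extension. A minimal sketch using Python's standard smtplib to check what the relay actually offers — the host and port are the placeholders from the post, not real values:

```python
# Sketch: check whether an SMTP relay advertises the AUTH extension.
# If AUTH is missing from the EHLO response, an authenticated send will
# fail regardless of client configuration.
import smtplib

def auth_advertised(esmtp_features):
    """True if the EHLO feature set (a dict of extension -> params)
    includes the AUTH extension."""
    return "auth" in {name.lower() for name in esmtp_features}

def check_relay(host="150.xx.xx.1", port=25, timeout=10):
    """Connect, send EHLO, and report whether AUTH is offered.
    Requires network access to the relay."""
    with smtplib.SMTP(host, port, timeout=timeout) as server:
        server.ehlo()
        return auth_advertised(server.esmtp_features)
```

Note that many relays only advertise AUTH after STARTTLS (call `server.starttls()` and `server.ehlo()` again before checking), and some internal relays on port 25 intentionally accept unauthenticated mail and offer no AUTH at all; in that case the fix is on the relay or notification-configuration side rather than the firewall.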
Hi all!

We want to move the TX DEV environment to the PRD server. We are using 6221.1. All I could find is https://legacysupport.timextender.com/hc/en-us/articles/115001313583-How-do-I-move-a-project-from-one-server-to-another-. However, this doesn't seem applicable to the new version. What is the best way to approach this in version 6221.1?

Kind regards,
Maarten
I recently had a project get corrupted and had to revert to a prior version. When I did this, it appears some aspect of the metadata got scrambled. When I open the project, all of the tables and other objects show up, but when I deploy any table, regardless of red/black state, the database throws the error "There is already an object named <mytable> in the database." It is as though TX is sending a CREATE statement instead of an ALTER.

I can manually delete the table(s) and related functions to make this error go away for one table, but I have hundreds of tables, some of which have many months of history. All of my tables show up in the tree-view controls; however, when I use the SQL Database Cleanup Tool, none are listed under "Current Project Objects". They are all listed under "Unknown Project". Is there any way to recover from this without having to drop all tables and deploy/execute?
Not able to deploy after upgrade - Failed to connect (Azure Managed Instance) - Method Not found Exception
Hi,

We upgraded our TX environment from 20.10.14 to 20.10.43 last week. The install and the upgrade of the repository and existing projects went well. We can save the project, the new scheduler service runs correctly, and the Execute action is OK. However, we encounter an exception during the table deployment task (BU/STAGE or DWH table):

Failed to connect to server XXX (Azure Managed Instance)
Method not found: 'System.Net.Http.Headers.HttpResponseHeaders Microsoft.Identity.Client.MsalServiceException.get_Headers()'.

The connection settings (Repository/STAGE/DWH) seem correct (the connection test is positive): same Azure Managed Instance. We have checked the .NET Framework version; it seems recent (4.8.03761).

Thanks for your help.

Best regards,
Matthieu
We're experiencing projects opening very slowly in TX 20.10.38 (we're above version 10000 of the project). Is there an easy way to remove old versions of a project? We are using multiple environments with global databases, and we don't want to go through the hassle of exporting, creating a new repository DB, and importing. We're also trying to delete the logs, but deleting them from within TX seems to take forever.
What would be the recommended compute tier for a project's repository database? (TX v20.x) I believe the current default on Azure is General Purpose with 2 vCores; however, looking at the utilization, this seems excessive. Does anyone have experience with running the repository on a Standard DTU database? Thanks
Good afternoon,

Great to see the OneLake part of Microsoft Fabric with parquet files. I wonder where they got the idea :-) It shows TimeXtender made good strategic choices. I do wonder how you see MS Fabric compared to TimeXtender. As I see it, it would make sense for you to also offer the option, in the future, to deploy and execute your data warehouse to OneLake (and maybe the ODX as well). What is TimeXtender's vision on MS Fabric?
Hello TimeXtender,

Yesterday we upgraded our PROD environment from 20.10.25 to 20.10.43. The install went fine, and so did the upgrade of the repository. However, when we went to validate the upgrade by carrying out a deploy and execute of an arbitrary table in the DSA, we received the following error message:

The type initializer for 'Microsoft.Data.SqlClient.InOutOfProcHelper' threw an exception.
Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation, Version=184.108.40.206, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

Do you have any experience with this error message? It might be good to note that we first carried out the upgrade on our DEV environment and did not receive this error there. We have also tried to rule out that the error is table-specific by carrying out a deploy & execute on different tables/layers, but the error persisted.

Best regards,
Luuk Bouman
We are still using the legacy version of TimeXtender while waiting for the migration tool. Our new data engineer needs to learn how to use this version of TimeXtender (220.127.116.11), and he is doing this by following the old Discovery Hub training courses:

https://timextender.teachable.com/courses/enrolled/240391
https://timextender.teachable.com/courses/enrolled/217568
https://timextender.teachable.com/courses/enrolled/259790

However, the links to the videos in these courses are no longer valid; for example, https://youtu.be/DuS0VvGZ1aE in the course "Follow Along: Working With Multiple Users - 4:15" | TimeXtender Academy (teachable.com). Are these videos available somewhere else so he can follow the legacy training courses properly?
Hi,

We wonder if anyone has already run into this issue and perhaps has an idea on how to solve it.

Setup: TX 20.10.38 using Azure SQL DB for the DWH and ADLS Gen2 for the ODX Server. ADF is used to load data from the ODX Server into the DSA. All databases have private endpoints, so ADF uses a self-hosted integration runtime.

Issue: From time to time (randomly; not always on the same tables; for both incremental and full-load tables) we notice that the ADF pipeline takes the normal amount of time to load data from the ODX into the DSA R-table, yet the cleansing takes 0 (zero) seconds and the record count log reports 0 (zero) records in the R-table. Neither ADF nor the TimeXtender execution reports an error. In the ODX service log we often notice this error: "Failed to encrypt sub-resource payload…" (not at exactly the same moment as the execution, so we are not sure it is related to the issue).

Work-around: ADO.NET. We had a similar issue last year and started using ADO.NET to load data from the ODX Server into the DSA. We
Hello,

I'm trying to upgrade from Discovery Hub 18.104.22.168 to TimeXtender 20.10.41. When I try to configure the ODX, I'm asked to provide a client secret from portal.timextender.com/odx; however, I cannot log in on that page using my credentials. Furthermore, I'm following this guide: Upgrade ODX and TimeXtender from a previous version – TimeXtender Support. Is there anything I should be aware of when upgrading from such an old version?
There's lots of documentation about AAS and Power BI Premium tabular models being mostly identical in use. Currently we are using 2 expensive AAS servers to host our development and testing semantic models. Transferring these 2 environments to a PPU environment within Power BI would save a lot of money.

The current version of TimeXtender has a separate option to select a Premium tabular model instead of AAS, something the legacy version does not have. But why would TimeXtender need to know the difference? When working with either Premium or AAS, the XMLA connection is exactly the same. Setting up a migration from AAS to Premium and then simply exchanging the links within the environment properties seems like a simple enough plan. As I found in the following link, the important part would be having an updated client library to do this data transfer: https://learn.microsoft.com/en-us/analysis-services/client-libraries?view=azure-analysis-services-current Other than this I don't see
Hello there,

We've been experiencing intermittent slowness in the TimeXtender UI over the past few weeks. Even simple things like right-clicking an execution package to view the logs, or trying to see the status of an ODX transfer job, will sometimes cause the UI to feel like it's just not responding (no loading-status UI) for 30 seconds to several minutes. This isn't something that happens all the time, and we haven't been able to nail down a root cause. The VM where the UI runs is rarely over 30% CPU usage, and we usually have over 80% memory free. The same goes for the VM that the ODX is running on. How do you even begin to troubleshoot this when the UI is slow to respond?

The TimeXtender UI is on version 22.214.171.124. The TimeXtender ODX server is on version 20.10.37.

Thanks!
Dear community,

I have a few questions regarding transferring to another internal server; any help with these questions is appreciated.

We currently have two servers running SQL Server 2016: one server has the DEV and ACC environments, and the second has our PROD environment (TimeXtender 126.96.36.199). The environments were installed by a supplier, but their external employee left, so we want to try to do this installation on our own (with snapshots/backups in case something goes wrong). We want to move the DEV/ACC environment to a new server which has the same TimeXtender version but SQL Server 2019. For this move I have 3 questions:

1. Can I just export the project from the old server, import it on the new server, and load the data again into a new SQL database?
2. How can I set up my environments in such a way that my DEV → ACC → PROD path isn't changed? I could not find those settings and had trouble finding them on the legacy page.
3. As the DEV/ACC environments