Hi, I am following the tutorial to set up a sandbox environment. However, at the end of the configuration I get an error: Can anyone help me resolve this issue?
How would the facets of Agile development, with Sprints and Ceremonies, be a part of TimeXtender projects? What are the considerations for working in teams on TimeXtender?
Jan Willem, Winvision NL asks: What is the reference architecture and flow of data from IoT devices and sensors into TimeXtender?
What are the steps to configure the TimeXtender Environment and Project Repository, after creating the App Server in Azure?
What are the steps to create an ADLS Gen2 ODX storage?
What are the steps to ingest data using ADF into the ODX?
What are the steps and considerations to use SQL DB Serverless with the TimeXtender DW storage?
What are the steps to use AAS for TimeXtender Semantic Endpoint?
What are the steps to work with the Scheduler Service, and how to effectively use Execution Packages?
What are the steps to configure multiple environments in TimeXtender, such as Dev/Test/Prod?
Why and how to use Azure Synapse SQL Pool with TimeXtender MDW?
What are the best practices for working with CData ADO.NET data providers in TimeXtender?
How to explore the data stored in Azure Data Lake, outside of TimeXtender?
What are the best practices around using version control and backups in TimeXtender? How to perform a version roll-back, per environment, if needed?
What are the steps to upgrade TimeXtender and ODX from a previous version? What about Multiple Environments?
What are the pro-tips to create a TimeXtender environment as quickly and easily as possible?
What are some tips and tricks to troubleshoot issues in TimeXtender?
I'm trying to get data from a REST API. The challenge is that the response is nested (JSON format). I did split up the separate fields by editing the RSD file, adding something after the XPath. The result looks better than before, but all my answers are still comma-separated in one field/row combination instead of one field with four rows, for example. It looks like this now: How can I make sure I only get one answer per value per row? (So line 1, field q_position should be separated into 4 rows with values 1, 2, 3 and 4.)
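A minimal Python sketch of the row shape being asked for here: exploding a comma-joined field into one row per value. The field name `q_position` comes from the question; the input dict is just a stand-in for one flattened API record, not the actual RSD output.

```python
def explode_field(row, field):
    """Yield one copy of `row` per comma-separated value in `row[field]`."""
    for value in str(row[field]).split(","):
        yield {**row, field: value.strip()}

# Stand-in for one record as the RSD currently returns it.
row = {"id": 1, "q_position": "1,2,3,4"}
rows = list(explode_field(row, "q_position"))
# rows now holds four records, one per q_position value
```

In the RSD file itself, row splitting is normally controlled by which element the provider treats as the repeat element rather than by the column XPaths, so the fix is likely there; the sketch above only illustrates the target result.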
When you set the primary key behaviour to 'error', you elegantly force all valid data to be unique combinations of your selected primary keys. But how do you control which individual record is regarded as valid and which is discarded as an error? Is there a way to let the primary key violation error handling know which record you consider the valid one (e.g. based on min/max values in non-primary-key data fields)?
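A hedged sketch of the rule the poster describes, since the 'error' behaviour only flags duplicates rather than picking a winner: per key, keep the record with the max value in a non-key field. The column name `updated` is an assumed example, not something from the original post.

```python
def keep_latest(records, key_fields, order_field):
    """Per unique key, keep the record with the highest order_field value."""
    best = {}
    for rec in records:
        k = tuple(rec[f] for f in key_fields)
        if k not in best or rec[order_field] > best[k][order_field]:
            best[k] = rec
    return list(best.values())

data = [
    {"id": 1, "updated": 1, "val": "old"},
    {"id": 1, "updated": 2, "val": "new"},
]
latest = keep_latest(data, ["id"], "updated")
# only the record with the highest `updated` per id survives
```

In a SQL-based flow the equivalent is typically a `ROW_NUMBER() OVER (PARTITION BY <keys> ORDER BY <field> DESC) = 1` filter applied before the key check, so only the chosen record ever reaches the primary key validation.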
We currently execute an SSIS package via Execute External SSIS Package. We want to start using SSIS Environments to better manage dev/prod values and passwords. The problem with Execute External SSIS Package is that we are not able to call the SSIS package using an SSIS Environment. To get around this, we created a Script Action that executes the SSIS package with an SSIS Environment, and then set this script action as a post script on a table that is executed in our Execution Package. The problem I am facing is that calling the SSIS package in this manner runs it asynchronously. Is there a way to synchronously call an SSIS package with an SSIS Environment?
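One avenue worth checking: the SSISDB catalog exposes a system execution parameter named `SYNCHRONIZED`; when set to 1, `catalog.start_execution` blocks until the package finishes, and `create_execution` accepts a `@reference_id` that binds an SSIS Environment. A sketch that just builds such a T-SQL batch (folder, project, package names and the environment reference id are placeholders, and how you run the batch from the Script Action is up to you):

```python
def build_sync_execution_sql(folder, project, package, reference_id):
    """Build a T-SQL batch that runs an SSIS package synchronously
    with an SSIS Environment bound via @reference_id."""
    return f"""
DECLARE @exec_id BIGINT;
EXEC SSISDB.catalog.create_execution
    @folder_name = N'{folder}',
    @project_name = N'{project}',
    @package_name = N'{package}',
    @reference_id = {reference_id},  -- binds the SSIS Environment
    @execution_id = @exec_id OUTPUT;
-- object_type 50 = system parameter
EXEC SSISDB.catalog.set_execution_parameter_value @exec_id,
    @object_type = 50,
    @parameter_name = N'SYNCHRONIZED',
    @parameter_value = 1;
EXEC SSISDB.catalog.start_execution @exec_id;
"""

sql = build_sync_execution_sql("MyFolder", "MyProject", "MyPackage.dtsx", 42)
```

Because `SYNCHRONIZED` makes `start_execution` wait for completion, executing this batch from the Script Action should give the synchronous behaviour the post script needs.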
Hello, I tried creating a new TX Dictionary project on our Tableau Server, but I'm running into some problems with opening the Tableau file and replacing the data source. First of all, the steps to replace the data source are no longer up to date with the new Tableau Desktop versions: in the new versions you have to open the new data source before you can replace it. Not a big problem, just a mention so you can possibly change this. The problems come when I replace the data source: all sheets are empty, and a few fields keep giving an error stating that the field doesn't exist in the database. I assume the missing fields are the reason for the empty sheets, as you can see here below: Can you help me with this problem? Kind regards, Damien van Kan
We would like to be able to set specific schedules for our different environments. For example, some execution packages could run on a schedule in our QA environment during testing, and the TXDictionary should run on a schedule in all environments.
Since we talk about how easy it is to build a data warehouse in TimeXtender, is there some demo showing how to quickly create a data warehouse from data to dashboard?
Hi, in my scenario I rely on a stored procedure in the source (external) database to create the table that will be used in TX. The stored procedure might have various table/view/function dependencies. Is it possible to import stored procedures/scalar functions into the ODX layer? I know it's possible to recreate an (internal) stored procedure, but I have no access to the underlying script in the stored procedure. Kind regards, Dror
Hi guys, can someone elaborate on the different options for table truncation? I can't really find it in the help or e-learning. We have set up an incremental load on a large table in the ODX, initially in simple mode. We found that after deployment and execution the data exists both in the valid table and in the raw table, occupying twice the storage of the original table size. We then disabled simple mode and checked "Truncate raw table before transfer". I now still see some records (not all of them) in the raw table. I guess I'll have to check "Empty raw table after data cleansing" to completely empty the raw table after reloading, but I can't really grasp the difference between these options.