TimeXtender Desktop Q&A
Ask questions and find answers about the TimeXtender Desktop Application
I would like to have the ability to stop a running task in the ODX server. Right now I have a task that is running and has timed out after 1 hour several times. I think synchronizing the data source would fix the issue, but I have no way of stopping the running task, so I have to wait for it to time out while the synchronization stays pending.
I am unable to use Add related records on a table. I get the error message "There is no relation between table-A and the table table-B". I have added a condition where the keys in both tables should be equal. I have also tried creating a relation between the two tables in both directions, but no luck. The docs don't mention that a relation needs to be added before adding related records. I am also getting a warning that the source table is executed after the destination table. What am I doing wrong?
Is there a way to create a many-to-many relationship in the Tabular model? I have one table, Data, with fields Project and Amount. I then have a Project Device table with fields Project and Device, e.g. A,1 / A,2 / A,3 / B,1 / B,2. I then have a Project table with field Project, and a Device table with field Device. The issue is I want to see the Data table by device and project. The main Data table contains 32 million records and the Project Device table contains 10,000 combinations, so a workaround of creating a new fact table via a full outer join on the two tables might be a little inefficient. Thanks
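For reference, a minimal sketch of the materialised-join workaround mentioned in this question, assuming the tables are named Data (Project, Amount) and ProjectDevice (Project, Device); in a Tabular model the usual alternative is to load ProjectDevice as a bridge table and filter Data through it with bi-directional cross-filtering, which avoids expanding the 32-million-row fact at all.

```sql
-- Sketch of the materialised fact-by-device workaround (table and column names are assumptions).
-- Every Data row is repeated once per device of its project, which is why
-- a bridge table in the Tabular model is usually preferred at this volume.
SELECT d.Project,
       pd.Device,
       d.Amount
FROM   dbo.Data          AS d
JOIN   dbo.ProjectDevice AS pd
       ON pd.Project = d.Project;
```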
Hi, we are receiving a file where the order of the rows is important information that needs to be maintained for our transformation process. When TimeXtender loads a text file, are the rows loaded in order? (i.e., will the DW_Id in the Business Unit table act as a row number for the source table?) Thanks, Mark
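If an explicit ordering key is wanted regardless of how the load behaves, one option is to derive it from DW_Id after the rows land in the staging table. A minimal sketch, where dbo.SourceFile is a hypothetical table name and the premise that DW_Id follows the file's row order is exactly the open question above, so verify it before relying on it:

```sql
-- Sketch: derive an explicit row number from DW_Id after the load.
-- dbo.SourceFile is an assumed table name; the assumption that DW_Id
-- reflects the file's original row order should be verified first.
SELECT ROW_NUMBER() OVER (ORDER BY DW_Id) AS FileRowNumber,
       *
FROM   dbo.SourceFile;
```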
I am busy with the TimeXtender Learn (TimeXtender Optimization) and there is an exercise: Right click the DSA node and choose Automate > Add Suggested Constraints. The instruction is for TX 22.214.171.124, we work with 126.96.36.199. I can't find this function. Is that correct?
Hi, I tried to execute a transfer task on my data source, however I encountered the following error:
The execution failed with error:
System.Data.SqlClient.SqlException (0x80131904): Property cannot be updated or deleted. Property 'OwnerId' does not exist for 'object specified'.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String m
Is there an option to set a default value for a field in a simple table, so that it can be filled with a value without adding any transformations? It should then also be possible to add fields to simple tables that can only be filled with a default value. This would also be helpful in the DSA.
A customer is a large user of Qlik Sense and also has Qlik ODI installed. He wants to create a data warehouse for his varied core data and has asked me if TimeXtender will add value when Qlik ODI is already in place, and whether that added value would be marginal or substantial.
Hi all, we recently started developing a new data platform using TimeXtender with Tableau as a visualization tool. We are currently determining how to set up our endpoint in TimeXtender for Tableau, as we have several options, each with their pros and cons. I'm wondering what choice others made and if maybe there's something we haven't thought about yet while trying to make this choice. As we see it now we have three main options: - Multiple .tds files, one for each fact table. As Tableau requires all tables to have a relation with each other, it is not possible to have a .tds file endpoint with all facts, as not all facts relate to all dimensions. Pros: smaller, efficient sources with good, manageable security. Cons: can become difficult to maintain with lots of .tds files that share dimensions; for example, when an attribute is added to a dimension this has to be done multiple times. When using multiple .tds sources in a Tableau sheet there's no way to link shared dimensions and thus filtering
When I have two tables, how can I make a left outer join between them? Or do I always need to use a custom view for this situation? Table 1 = Address: id = 1, id = 2. Table 2 = Relation: personid = 1, personid = 2. Table 3 (desired result): id = 1, personid = 1, personid = 2; id = 2, null.
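A minimal sketch of what such a custom view could look like, assuming the tables are named Address and Relation and the join key is the id/personid pairing shown above; in TimeXtender a similar result can often be reached with lookup fields, but a custom view makes the left outer join explicit.

```sql
-- Sketch of a custom view implementing a left outer join.
-- Table and column names (Address, Relation, id, personid) are taken from
-- the example above and may need adjusting to the real schema.
CREATE VIEW dbo.AddressWithRelation AS
SELECT a.id,
       r.personid            -- NULL when no matching relation exists
FROM   dbo.Address  AS a
LEFT OUTER JOIN
       dbo.Relation AS r
       ON r.personid = a.id;
```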
Hi all, I have not been able to find any similar posts. My problem is that I run into an execution error when trying to map two different sources with identical columns, except for one column that only exists in one source. I expect data from one of the sources in that column and expect NULL on the rows originating from the other. However, TimeXtender throws the error "Invalid column name 'XXXXX'" on that column at execution. I have tried a full deploy. Only if I remove the XXXXX column will it execute. Any ideas?
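One way to picture what the double mapping has to produce is the equivalent UNION. A minimal sketch, assuming SourceA has the extra column XXXXX and SourceB does not (names and the nvarchar(50) type are placeholders): the select from the source that lacks the column supplies an explicitly typed NULL so both selects have the same column list.

```sql
-- Sketch of the union the mapping effectively has to produce.
-- SourceA, SourceB, Key and nvarchar(50) are assumptions; adjust to your schema.
SELECT [Key], [XXXXX]
FROM   dbo.SourceA
UNION ALL
SELECT [Key], CAST(NULL AS nvarchar(50)) AS [XXXXX]   -- placeholder for the column SourceB lacks
FROM   dbo.SourceB;
```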
Hi, I am a bit unclear on how to configure Max Threads for an execution package. The way I perceive it now is: if I run on 4 cores, I choose 4 Max Threads; if I run on 16 cores, I choose 16 Max Threads, i.e. N cores -> N Max Threads. Is this the correct approach, or are there other factors that need to be accounted for?
Hi all, I'm trying to find out how to get rows into the _L table when I use a field validation. For a field validation I can decide whether I get a warning or an error. But I only get a message in the _M table. How can I get the rows that don't match the field validation criterion to get an entry in the _L table?
We want to extract data from the CBS (NL). This is a dataset with at least 100,000 records, and the connection allows a maximum of 10,000 records per request. https://dataderden.cbs.nl/ODataFeed/odata/03759ned/TypedDataset.xml Is there a possibility to make a connection that downloads a set of 10,000 records at a time until all records are downloaded?
I have recently been experimenting with using .rsd files for REST API data sources. I have generated the .rsd files and then modified them to add pagination. I now have two questions: 1. What is the appropriate way to move these .rsd files to production? Just copy the files to the production server? Or is there some way to integrate the movement with the TimeXtender transfer-to-production process? 2. When moving the .rsd files to production, how can we have them dynamically use the URI for the production data source instead of the dev data source? If we just copy the files from dev to prod, the files will still point to the dev data source. Do we have to change this manually, or is there a way to pass this source address using a variable? Thank you.
When we perform a transfer from the acceptance environment to the production environment, there are settings that you do not want to take with you. First, the time schedule: we have an execution package for both acceptance and production, and these should run at different times. Second, the acceptance environment naturally refers to other TimeXtender databases than the production environment; nevertheless, the database links are copied literally during the transfer. Third, when you approve a change in acceptance, you transfer it to production; however, after executing this, the data sources are copied literally from acceptance. We have circumvented the latter by letting acceptance refer to production, but this is not always desirable. Fourth, it would be nice to be able to include project perspectives during the transfer. It would be nice if the points above could be controlled via transfer settings.
Hi, I'm searching for a way to make a table insert incremental. I have a table (about 30 million records). It has a row for each hour of the day for 10,000 machines, so this table is getting bigger very quickly. I do a lot of transformations in this table, which resulted in a load time of over 10 hours. I've discussed this with our TimeXtender Solution Specialist and we've decided to split the table into multiple smaller tables. Now I have one table (loaded incrementally from the ODX); this table has a lot of lookups and simple transformations and loads quickly because it is incrementally loaded. The result of this table is inserted into a new table (Raw table) using a table insert. This table converts cumulative numbers to non-cumulative numbers, using the SQL functions LAG, OVER, PARTITION BY and ORDER BY. Because it is a table insert, I have no idea how to do this incrementally. I've set non-clustered indexes (also on the Raw table) on all fields used in the PARTITION BY function. And enabled
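For reference, a minimal sketch of the cumulative-to-delta conversion described above, with hypothetical names (Machines_Hourly, MachineId, HourTimestamp, CumulativeValue). Making a calculation like this incremental typically means restricting the source to newly arrived periods plus the last already-loaded row per machine, so that LAG still sees each row's predecessor.

```sql
-- Sketch of converting a cumulative counter into per-hour deltas.
-- Table and column names are assumptions; replace them with the real ones.
SELECT MachineId,
       HourTimestamp,
       CumulativeValue
       - LAG(CumulativeValue, 1, 0) OVER (PARTITION BY MachineId
                                          ORDER BY HourTimestamp) AS HourlyValue
FROM   dbo.Machines_Hourly;
```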
Dear Community, if I click on Report -> Errors or Warnings, I see a full list of them. Where are those records stored? In which table can I find them? My problem is that I once forgot to check the errors manually, so I want to create a script that regularly checks whether errors exist. But so far I couldn't find the table/view/place where this information is stored. Any ideas? Regards, Alex