Ask questions about TimeXtender Data Warehouse Instances
- 139 Topics
- 531 Replies
When using the Data Profile function on the Person.Person table in the AdventureWorks2014 database, I see strange results for the highest and lowest occurrence in the LastName field: Diaz is the highest and Billstrom is the lowest. In reality, Abbas is the lowest when sorting by LastName ASC where count(*) == 1. I am also not sure that 'Highest occurrence' is clear to everyone who is not a native speaker: you could just as well assume it means the highest sorted value.
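For reference, the two possible readings of "highest occurrence" can be checked directly against AdventureWorks2014 (a plain T-SQL sketch, not the query the Data Profile function actually runs):

```sql
-- Reading 1: occurrence as frequency (most/least common LastName)
SELECT LastName, COUNT(*) AS Occurrences
FROM Person.Person
GROUP BY LastName
ORDER BY Occurrences DESC;  -- use ASC for the least frequent

-- Reading 2: occurrence as sort order (highest/lowest sorted value)
SELECT MAX(LastName) AS Highest, MIN(LastName) AS Lowest
FROM Person.Person;
```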
I am new to TX and data lakes. I have a common problem that others have commented on, and would like to know: 1) does the functionality now exist, and if it does not, 2) how are others getting around the issue? The issue is that I need to be able to segregate data sources in the data lake. I am pulling in data from multiple SQL sources where the source schema and table names are the same, and I need to differentiate those tables in the data lake. Differentiating by schema would be best, but if that is not possible, I need some other ideas. Thanks!
Hi, let's assume I am importing from a CSV file. The data is very dirty and somehow I have strings in my int field "ID", so the field is interpreted as NVARCHAR(MAX). If I change the data type of the field (right click > Edit Field) from string back to integer, I receive a runtime error during execution. What I was expecting to see was corresponding entries in the _M and _L tables. Is this not part of data cleansing after all? We have this problem all over and could not find a better solution than to build custom views with TRY_CAST etc., where we lose lineage. What is the best way to clean up such data (keep valid rows, write errors/warnings on invalid rows)? We are talking 100+ tables, so I'm looking for a highly scalable solution here, not a one-time workaround. Thanks and BR, Tobias
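For readers hitting the same issue, this is roughly what the TRY_CAST workaround mentioned above looks like (a minimal sketch with illustrative table and column names, not TimeXtender-generated cleansing code):

```sql
-- Keep every row: a successful cast yields the int value,
-- a failed cast yields NULL plus a warning flag for follow-up
SELECT
    TRY_CAST([ID] AS int) AS [ID],   -- NULL when the value is not a valid int
    [SomeOtherField],
    CASE
        WHEN [ID] IS NOT NULL AND TRY_CAST([ID] AS int) IS NULL
        THEN 'ID failed int conversion'
    END AS [LoadWarning]
FROM [stage].[MyCsvTable];
```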
How do I write a CASE expression in Custom SQL? Please show me an example. Here is the situation: I need to run the following query on the Date table and update the 'IND' column, where DateValue is a default column in the Date table:

select case when GETDATE() <= DateValue then 'N' else 'Y' end as IND, *
from <table_name>
order by DateValue desc
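If the goal is a per-field transformation rather than a full custom view, the CASE expression on its own is usually what goes into the transformation (a hedged sketch; the exact Custom Transformation dialog varies by TimeXtender version, and the column name comes from the question above):

```sql
-- Custom transformation expression for the IND field
CASE WHEN GETDATE() <= [DateValue] THEN 'N' ELSE 'Y' END
```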
Is there a way to use IS NOT NULL as one of the join conditions on a lookup? I want to get the TOP (1) record where a field is not null, ordered by date. Currently I have to edit the _Clean stored procedure as custom code to add it, which is obviously not the best way to do things. Thanks
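For context, this is the lookup semantics being asked for, written as plain T-SQL (a sketch with illustrative table and column names, not the code TimeXtender generates in the _Clean procedure):

```sql
-- For each destination row, take the latest non-null source value
SELECT d.[JoinKey], x.[Amount]
FROM [dbo].[DestinationTable] d
OUTER APPLY (
    SELECT TOP (1) s.[Amount]
    FROM [dbo].[SourceTable] s
    WHERE s.[JoinKey] = d.[JoinKey]
      AND s.[Amount] IS NOT NULL      -- the extra join condition
    ORDER BY s.[TransactionDate] DESC
) x;
```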
Hello, so I am loading historical data into tables in TX created from Excel files: one file for property data, one file for casualty data. After some time the file format changes; they combine these two separate files into one file and add new fields. Can I create a new data source to load the new file but still use the original worksheet tab names, such that the data will still load into the existing tables? Thanks in advance. P.S. I am using Discovery Hub 220.127.116.11
Hi, I have a large FACT table in my DWH (a copy from a view in stage). I tried adding partitions; it didn't help much, so I removed them again. But since I removed them, the table can't be deployed. I get the error below. I can re-add the partition, but I still get the same error:

An error occurred during drop partition function. See exception details for the failing object.
An error occurred during drop user defined function. See exception details for the failing object: Drop failed for PartitionFunction 'FactSalesOrder_History_PartitionFunction'.
An exception occurred while executing a Transact-SQL statement or batch.
Partition function 'FactSalesOrder_History_PartitionFunction' is being used by one or more partition schemes.
Details: SQL Server: '.' SQL Procedure: '' SQL Line Number: 2 SQL Error Number: 7706

I'm not sure how to find and drop whatever TX is complaining about. Tried this:

DROP PARTITION FUNCTION FactSalesOrder_History_PartitionFunction

and then get this: Partition function 'FactS
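SQL error 7706 means a partition scheme still references the function, and the scheme must be dropped first. A hedged T-SQL sketch for locating and removing it (run directly against the DWH database; the scheme name below is a placeholder for whatever the first query returns):

```sql
-- 1. Find the partition scheme(s) still using the function
SELECT ps.name AS SchemeName
FROM sys.partition_schemes ps
JOIN sys.partition_functions pf ON ps.function_id = pf.function_id
WHERE pf.name = 'FactSalesOrder_History_PartitionFunction';

-- 2. Drop the scheme(s) first, then the function
--    (if a table is still built on the scheme, it must first be
--    rebuilt onto a regular filegroup)
DROP PARTITION SCHEME [SchemeNameFromStep1];
DROP PARTITION FUNCTION [FactSalesOrder_History_PartitionFunction];
```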
I am using Field Validation to test our ETL, with the source data, DSA rules, and error-check query shown in the screenshots of the original post. I get the following result: the _L table is linking to the records in the _R table that were removed when they failed the validation step on FIRST_NAME. Also, the Warning-level notification is not present in the table. As it stands, the _L, _R, and _M tables as joined above are showing incorrect, or at least confusing, information. Am I using them as they were intended to be used, and has anyone done anything to get around this? Thanks, Mark
I have a table with fields defined (call it Table B) but no mapping. I am inserting into it with a Table Insert from Table A; think Table A -> inserted into -> Table B. In the picture below, Table A is Projects from D365, at the bottom of the picture; Table B is everything else in the image. DataAreaId is a field defined on Table B and is populated successfully from the table insert. I would like to have a field, Field C, with a simple custom transformation that places the value of DataAreaId into Field C. Although Table B is fully deployed and executed, I am finding that Field C is unpopulated. In fact, all fields in the image that do not take their values directly from the Table Insert are unpopulated, meaning I also can't assign constants. So all of the DimensionDisplayValue - xxxx fields in the picture are empty after executing. I have also noticed, to my surprise, that the raw table is empty. Not sure if this is by design or if something on Table B can be recon
I am unable to use Add Related Records on a table. I get the error message "There is no relation between table-A and the table table-B". I have added a condition where the key in both tables should be equal. I have also tried creating a relation between the two tables in both directions, but no luck. There is no mention in the docs that a relation must be added before adding related records. I am also getting a warning that the source table is executed after the destination table. What am I doing wrong?
I am busy with TimeXtender Learn (TimeXtender Optimization) and there is an exercise: right-click the DSA node and choose Automate > Add Suggested Constraints. The instruction is for TX 18.104.22.168; we work with 22.214.171.124. I can't find this function. Is that correct?
Is there an option to set a default value for a field on a simple table, so that we can fill it with some value without doing any transformations? It should then also be possible to add fields to simple tables that can only be filled with a default value. This would also be helpful in the DSA.
When I have two tables, how can I make a left outer join between them? Or do I always need to use a custom view for this situation?

Table 1 = Adress
id = 1
id = 2

Table 2 = Relation
personid = 1
personid = 2

Table 3 (desired result)
id = 1, personid = 1, personid = 2
id = 2, null
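A plain T-SQL sketch of that join, using the table and column names from the example above (in TimeXtender this would typically live in a custom view; the exact join condition depends on how Relation actually links to Adress):

```sql
-- Every Adress row, matched Relation rows where they exist, NULL otherwise
SELECT a.id, r.personid
FROM [dbo].[Adress] a
LEFT OUTER JOIN [dbo].[Relation] r
    ON r.personid = a.id;
```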
Hi all, I have not been able to find any similar posts. My problem is that I stumble over an execution error when trying to map two different sources with identical columns, except for one column that exists in only one source. I expect data from that source in the column, and NULL on the lines originating from the other source. However, TimeXtender throws the error "Invalid column name 'XXXXX'" on that column at execution. I have tried a full deploy. Only if I remove the XXXXX column will it execute. Any ideas?
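One possible workaround (a hedged sketch, not an official TimeXtender fix) is to pad the source that lacks the column with an explicit typed NULL, for example in a custom view over that source, so both mappings expose an identical column list:

```sql
-- Source B lacks the XXXXX column, so expose it as a typed NULL
SELECT
    *,
    CAST(NULL AS nvarchar(50)) AS [XXXXX]  -- type and length are assumptions
FROM [src].[SourceB];
```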
Hi all, I'm trying to find out how to get rows into the _L table when I use a field validation. For a field validation I can decide whether I get a warning or an error, but I only get a message in the _M table like this (screenshot in the original post). How can I get the rows that don't match the field validation criterion to produce an entry in the _L table?
Hi, I'm searching for a way to make a table insert incremental. I have a table of about 30 million records; it has a row for each hour of the day for 10,000 machines, so this table is getting bigger very quickly. I do a lot of transformations on this table, which resulted in a load time of over 10 hours. I discussed this with our TimeXtender Solution Specialist and we decided to split the table into multiple smaller tables. Now I have one table (loaded incrementally from ODX) with a lot of lookups and simple transformations; it loads quickly because it is loaded incrementally. The result of this table is inserted into a new table (a raw table) using a table insert. That table converts cumulative numbers to non-cumulative ones using the SQL functions LAG, OVER, PARTITION BY and ORDER BY. Because it is a table insert, I have no idea how to make this incremental. I've set non-clustered indexes (also on the raw table) on all fields used in the PARTITION BY function. And enabled
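For readers following along, this is the shape of the cumulative-to-non-cumulative step being described (a minimal sketch with hypothetical column names, not the poster's actual table insert):

```sql
-- Per machine, subtract the previous hour's cumulative reading;
-- LAG's third argument supplies 0 for each machine's first row
SELECT
    MachineId,
    ReadingHour,
    CumulativeValue
        - LAG(CumulativeValue, 1, 0) OVER (
              PARTITION BY MachineId
              ORDER BY ReadingHour
          ) AS HourlyValue
FROM [dbo].[MachineReadings];
```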
Dear Community, if I click Report -> Errors or Warnings, I see a full list of them. Where are those records stored? In which table can I find them? My problem is that I once forgot to check for errors manually, so I want to create a script that regularly checks whether errors exist. But so far I couldn't find the table/view/place where this info is stored. Any ideas? Regards, Alex
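As other posts in this thread suggest, cleansing messages land in each table's _M companion table (and rejected rows in _L). A hedged starting point for a scheduled check is simply to enumerate those tables and inspect them; the query below only lists candidates, since the exact message-table schema varies by version:

```sql
-- List the _M message tables in the data warehouse database
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME LIKE '%[_]M';  -- [_] escapes the underscore wildcard
```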