Solved

Removing Clustered Column Store index resulting in error

  • 15 May 2024
  • 9 replies
  • 83 views

Hi Everyone,

 

I was playing around with Clustered Column Store indexes to see if they would help increase the performance of reading data from our DWH. It didn't help, so I wanted to revert to the old situation by removing the index.

However, on deploy I get an error message and I'm not sure if I'm doing something wrong here.

I basically just created a Clustered Column Store index, deployed, deleted it again and deployed. On the last deploy I get this error message (so the problem is somehow in deleting the index):

An error occurred during update table. See exception details for the failing object: Exception occurred during execution of RebuildHeap process. An exception occurred while executing a Transact-SQL statement or batch.. failed for Table 'MDW.FactHistStockManagementData'.
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.

 

I've run the profiler against the deployment and it seems the error is coming from this statement:


ALTER TABLE [MDW].[FactHistStockManagementData] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = ROW)
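
Going by the error text, a table that still has a clustered columnstore index only accepts columnstore compression settings, so a rebuild at that point would presumably have to look something like this (just a sketch based on the error message, not what TimeXtender generates):

-- While the clustered columnstore index still exists, only these settings are valid
ALTER TABLE [MDW].[FactHistStockManagementData] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE)
-- or WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)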

 Has anyone encountered this before?


Best answer by Christian Hauggaard 16 May 2024, 13:06


9 replies


Hi @JogchumSiR 

Can you please confirm which version of TimeXtender you are using?

Also, can you please share the table settings, i.e. is it a history table, do you have incremental load enabled, etc.?

What storage are you using for your MDW? (Azure SQL db, SQL Server db, Synapse, etc.)

Hi @Christian Hauggaard ,

We are using version 6536.1.
The table settings are pretty basic. We are using batch data cleansing (1 million batch size).
We are also using row-based compression.

Furthermore, incremental load is enabled without any deletes. There is no history enabled.

Our MDW is hosted on Azure SQL DB.

 

Kind regards,

Jogchum


Hi @JogchumSiR 

I am unable to replicate the issue you describe. I am using an Azure SQL DB for my DW.

I have the following table

With the following settings:

I create a clustered columnstore index

I deploy the table and see the following in SSMS

I delete the index

And re-deploy the table, which completes successfully
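
For reference, the index type and compression state can be verified after each deploy with a catalog query along these lines (standard SQL Server catalog views; replace the table name with the table in question):

-- Show index type and per-partition compression for a given table
SELECT i.name AS index_name, i.type_desc, p.partition_number, p.data_compression_desc
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE i.object_id = OBJECT_ID(N'MDW.FactHistStockManagementData')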

Do you notice any differences between our setups?

As a workaround, can you please try to delete the index in SSMS and then re-deploy from TimeXtender?
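
Deleting the index in SSMS should just be a matter of dropping it, something along these lines (the index name here is hypothetical, use the actual name shown under the table's Indexes node):

-- Drop the clustered columnstore index manually; the table reverts to a heap and can then be re-deployed
DROP INDEX [CCI_FactHistStockManagementData] ON [MDW].[FactHistStockManagementData]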

@Christian Hauggaard on the performance tab of the table settings I've also enabled ROW-based compression and enabled index compression on the valid table.
 

I will also try your workaround.

Hi @Christian Hauggaard ,
The workaround seems to work. The software executed this statement and succeeded:
 

ALTER TABLE [MDW].[FactHistInventoryWarehouseCluster] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW )

After this it recreated the PK index on the table:

ALTER TABLE [MDW].[FactHistInventoryWarehouseCluster] ADD CONSTRAINT [PK_ab1018d7_951f_49c4_ad16_a9874f94ed48] PRIMARY KEY CLUSTERED
(
[DW_Id] ASC
) WITH (ONLINE = OFF)


With this working, I would say there might be something wrong in the software around Column Store indexing and compression settings. Do you agree?


Hi @JogchumSiR ,

Clustered columnstore indexes cannot be row-compressed; with a clustered columnstore index, the table itself is stored with columnstore compression. You could have a nonclustered columnstore index combined with a row-compressed table, but not a clustered columnstore index with row compression. Clustered indexes essentially define the table storage.

See also: https://learn.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression?view=sql-server-ver16
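
To illustrate the difference (made-up table, column, and index names):

-- Valid: a row-compressed table with a nonclustered columnstore index on top
ALTER TABLE [MDW].[FactA] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW)
CREATE NONCLUSTERED COLUMNSTORE INDEX [NCCI_FactA] ON [MDW].[FactA] ([SomeColumn])

-- Not valid: a clustered columnstore index defines the table storage itself,
-- so rebuilding that table with ROW compression fails with exactly this error
CREATE CLUSTERED COLUMNSTORE INDEX [CCI_FactB] ON [MDW].[FactB]
ALTER TABLE [MDW].[FactB] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW)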

@rory.smith I am aware of the combination not being valid for SQL Server, but it is configurable in TimeXtender. Maybe you want to have extra non-clustered indexes and enable compression on them. My point being, the software lets you create an unsupported situation.


@JogchumSiR I agree this seems to be a bug, I have passed it on to the product team with reproduction steps

Thanks @Christian Hauggaard, and thanks @Thomas Lind for providing a workaround!
