I’m looking to understand how TimeXtender Classic deals with column statistics that aren’t tied to indexes. Our project is built on Azure SQL DB.
TimeXtender automatically rebuilds indexes as part of its data cleansing procedures. However, rebuilding indexes only updates statistics on the indexed columns, not on the rest. Performance relies heavily on accurate statistics for query optimization (especially when MAXDOP isn’t explicitly set), which leaves me wondering:
Does TimeXtender have any built-in process or configuration to update statistics on non-indexed columns?
If not, are there recommended best practices for managing these statistics in a TX solution—either as part of the deployment or as a post-process step?
After dealing with a sudden drop in performance, we observed clear improvements after manually updating statistics across the entire project. That takes 16 hours, though, so I’d like to make sure we aren’t missing any built-in TimeXtender capability that could automate this, or overlooking something because some of our logic runs in views.
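For anyone in a similar situation, the plain T-SQL options look roughly like this (dbo.FactSales is just an example table name); sp_updatestats skips statistics with no row modifications since their last update, which is typically far cheaper than refreshing everything:

```sql
-- Refresh only statistics that have seen row modifications since their
-- last update; uses the default sampling rate.
EXEC sp_updatestats;

-- Or target one large table explicitly (dbo.FactSales is a placeholder):
UPDATE STATISTICS dbo.FactSales WITH SAMPLE 25 PERCENT;
```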
There are a couple of places where, as a TimeXtender developer, you should consider adding indexes (using TX functionality) to cover the queries used:
Aggregation tables where the grain is different from the source table
Custom views
Window functions / complex expressions inside field transformations
Stored procedures / Actions
This helps keep statistics updated for the columns that matter to your custom SQL. I would expect that in your case you do not need to update statistics for everything: any table that is not incremental or historical will be truncated and rebuilt on each execution, and any table that is not a large transaction table will not add enough load to matter much. I suspect the tables you address through custom views are the ones causing you grief. I would start by analysing which indexes, or tunings to existing indexes, would be best to apply, rather than relying on statistics refreshes on unindexed columns.
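If you want to see which statistics are actually stale before deciding, a quick check against sys.dm_db_stats_properties (available in Azure SQL DB) can narrow down which tables are worth targeting; this is a sketch for analysis, not TX functionality:

```sql
-- Find statistics objects with many row modifications since their last
-- update, so refreshes can be targeted instead of run project-wide.
SELECT
    OBJECT_NAME(s.object_id)  AS table_name,
    s.name                    AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```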
Another cause of sudden performance drops in Azure SQL DB is scaling the database up and down on the General Purpose architecture: you get a random CPU type allocated after each scale-up, and a Gen5 is significantly less performant than a Gen8. You can pin your Azure SQL database to the fastest architecture by choosing Premium hardware, which keeps your performance stable.
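For what it’s worth, you can inspect and change the service objective from T-SQL as well; the database name and 'P2' below are placeholders for whatever tier you pick:

```sql
-- Current service objective of the database you're connected to:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS current_slo;

-- Scale to a Premium objective (placeholder name; the scale operation ends
-- with a short failover, so run it in a maintenance window):
ALTER DATABASE [MyDwhDb] MODIFY (SERVICE_OBJECTIVE = 'P2');
```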