
Hi community,

I have a question about TempDB in SQL Server. We have a customer whose TempDB is growing rapidly; the disk has been expanded multiple times. Although it is not a problem to expand the disk again for this customer, I want to understand how we can prevent the TempDB from growing so rapidly.

I do not have a DBA background.

The TempDB for this customer is 40 GB at the moment. All ‘heavy’ tables are loaded incrementally and batch cleansing is enabled for these tables. I have some questions:

  • I know batch cleansing prevents the logs from growing rapidly, but does it also have an impact on the TempDB?
  • Are there any other table settings in TimeXtender that can help reduce the TempDB size?
  • Does the number of threads in an execution package have an impact on the TempDB?
  • Are there any best practices in determining the correct/optimal disk size for the TempDB?
  • Are there any troubleshooting ideas/tools for this?

Any other suggestions?

@bas.hopstaken yes, enabling batch data cleansing on tables can reduce TempDB usage. Have you tried reducing the batch size for those tables? As you mention, this is a SQL DBA issue, but here are a few tips to try.

Hi @bas.hopstaken ,

Your required TempDB size will depend on the size of your databases and the types of ETL you do (automated by TX or performed by custom code). Additionally, limited resources available to SQL Server may push the engine to offload work to TempDB, for sorting for instance. Any savvy users querying the MDW with arbitrary SQL may also be pushing things into TempDB.

Typically you pre-size TempDB to the size you want, to avoid TempDB losing performance due to resizing along the way. Restarting the SQL engine will reinitialize TempDB; during operations TempDB will grow as needed up to the maximum size you allow. I would never shrink TempDB, as that is, at best, a very temporary solution.
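As a sketch of that pre-sizing approach: the logical file names below (tempdev, templog) are the SQL Server defaults, and the sizes are purely illustrative placeholders, not recommendations from this thread.

```sql
-- Pre-size the TempDB data and log files so they do not have to
-- autogrow during ETL runs. The new sizes take effect immediately;
-- TempDB is also recreated at these sizes on every service restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 20GB, FILEGROWTH = 1GB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 5GB, FILEGROWTH = 512MB);
```

If TempDB has multiple data files (a common configuration), each file should be resized the same way so they stay evenly sized.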

I typically do not see very large TempDBs for TX as long as no one is explicitly pushing temporary tables into TempDB without cleaning them up. #, ##, TempDB.., and @ objects in custom syntax can all leave data in TempDB, growing its size. Certain operations that TX automates make use of TempDB, and certain query patterns will have their sorting pushed to TempDB, so large fact tables may cause heavy TempDB use.
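To see which of those two sources is eating TempDB, a query along these lines can help (a sketch using the standard DMVs; page counts are converted to MB assuming 8 KB pages):

```sql
-- Per-session TempDB allocations:
--   user objects     = explicit # / ## temp tables and @ table variables
--   internal objects = sorts, hash joins and spools pushed down by the engine
SELECT  s.session_id,
        s.login_name,
        su.user_objects_alloc_page_count     * 8 / 1024 AS user_mb,
        su.internal_objects_alloc_page_count * 8 / 1024 AS internal_mb
FROM    sys.dm_db_session_space_usage AS su
JOIN    sys.dm_exec_sessions AS s
        ON s.session_id = su.session_id
WHERE   su.user_objects_alloc_page_count > 0
     OR su.internal_objects_alloc_page_count > 0
ORDER BY internal_mb DESC;
```

High internal_mb points at sorting/hashing spilled by the engine (large fact tables, memory pressure), while high user_mb points at explicit temp objects in custom code.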

40 GB of TempDB is peanuts to me, but my databases are usually 500 GB - 3 TB. Brent Ozar has some articles on sizing SQL Server and setting up its configuration, but most articles online are aimed at OLTP workloads rather than DWH ones.


Hi @Christian Hauggaard , @rory.smith ,

Thanks for the quick replies. I did a little test in the DEV environment to see whether the batch cleansing size has an impact on TempDB growth. This is how I tested it:

Table characteristics: 80 million records, about 40 GB, 7 supernatural keys, 8 simple transformations (CASE WHEN transformations).
Infrastructure characteristics: 8 CPUs, 128 GB RAM, 40 GB TempDB disk

1. Restarted the SQL Server service: TempDB disk is empty.

2. Full load of the table: batch cleansing setting 800,000 rows.

3. Result: reload failed after 50 minutes → error: TempDB is full.

4. Checked TempDB size: 10 MB free space.

5. Restarted the SQL Server service: TempDB disk is empty.

6. Full load of the table: batch cleansing setting 300,000 rows.

7. Result: reload succeeded in 54 minutes.

8. Checked TempDB size: 34 GB free space.

So it looks like the batch cleansing size does have an impact on TempDB growth.
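For anyone repeating this test: the TempDB free-space checks in the steps above can be done with a query like this (a sketch using the standard DMV; any monitoring tool works too):

```sql
-- Total, used and free space across the TempDB data files, in MB
-- (8 KB pages, so page_count * 8 / 1024 = MB).
SELECT  SUM(total_page_count)              * 8 / 1024 AS total_mb,
        SUM(allocated_extent_page_count)   * 8 / 1024 AS used_mb,
        SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
FROM    tempdb.sys.dm_db_file_space_usage;
```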


@bas.hopstaken thanks for confirming the test result! Can you please help us by marking a best answer above, or please let us know if you have follow up questions?


Hi @Christian Hauggaard ,

In this case it is hard to pick one correct answer. To be honest, the answer I shared fixed the problem, but @rory.smith and you gave some useful tips as well.

