Azure SQL Database Log Backup Storage Size grows enormously resulting in high costs

  • 20 March 2024
  • 5 replies

Userlevel 2

Recently we identified that the Azure SQL Database backup costs for our TimeXtender environment are very high, almost higher than the regular storage costs themselves. We are now trying to minimize these backup costs. We have already minimized the backup retention policies:

Now, when analysing some backup data in the Azure Portal, we found that the Log Backup Storage Size is growing to multiple terabytes, even though our PROD database only uses around 28 GB.



Does anybody know how we can further analyse and minimize these backup costs and data? I found the following regarding the transaction log, but I can't seem to drill into the transaction log size per table or per execution, for example.



In Azure SQL Database, transaction logs record changes made to the database to ensure data durability and provide for features like point-in-time restore and disaster recovery. Various operations within Azure SQL Database can trigger changes that are logged in the transaction log. Here are some common operations that trigger transaction log entries:

  1. Data Modifications: Any INSERT, UPDATE, DELETE, or MERGE operation on tables or indexed views triggers changes that are logged in the transaction log.

  2. Schema Modifications: Changes to the database schema, such as creating or dropping tables, altering table structures, adding or removing columns, or modifying constraints, are logged in the transaction log.

  3. Index Operations: Creating, dropping, or rebuilding indexes (offline or online) also triggers changes that are logged in the transaction log.

  4. Transaction Control Statements: BEGIN TRANSACTION, COMMIT TRANSACTION, ROLLBACK TRANSACTION, SAVE TRANSACTION, and other transaction control statements are logged in the transaction log to maintain transactional consistency.

  5. Data Definition Language (DDL) Statements: DDL statements such as CREATE, ALTER, and DROP for database objects (tables, views, procedures, functions, etc.) are logged in the transaction log.

  6. Data Manipulation Language (DML) Statements: DML statements such as SELECT INTO, TRUNCATE TABLE, and others can also trigger transaction log entries.

  7. System Operations: Various system operations and administrative tasks, such as database backups, log backups, database restores, and database copies, trigger changes that are logged in the transaction log.


Best answer by rory.smith 20 March 2024, 11:46


5 replies

Userlevel 6
Badge +7


The backup cost depends on your retention settings and the type of Azure SQL you are using. Every table that is fully executed will incur deltas for the full raw and valid tables. It is quite normal for backup storage to be larger than the actual data if you have more than a few days of retention and do not do a lot of incremental loading. The logic of storage buildup is described here:
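The buildup logic above can be sketched with some back-of-the-envelope arithmetic. A fully executed table rewrites every row on each load, so each day's differential/log backups contain roughly the whole table; an incrementally loaded table only produces deltas for the changed rows. All numbers below (table size, churn rate, retention window) are hypothetical:

```python
def backup_delta_gb(table_gb, changed_fraction, retention_days):
    """Approximate backup delta accumulated over the retention window.

    changed_fraction is the share of the table rewritten per day:
    1.0 for a full load, a small fraction for incremental loading.
    """
    return table_gb * changed_fraction * retention_days

# Hypothetical 28 GB table with 7 days of retention:
full = backup_delta_gb(28.0, 1.0, 7)    # full load rewrites every row daily
incr = backup_delta_gb(28.0, 0.02, 7)   # incremental: ~2% daily churn assumed
print(f"Full load:   ~{full:.0f} GB of backup deltas")
print(f"Incremental: ~{incr:.2f} GB of backup deltas")
```

This is why switching tables from full to incremental loading, where possible, shrinks log backup storage so dramatically.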

You need to decide how long a window of time you need to detect issues and how far back you need to be able to restore in order to determine your backup retention. There may be situations where your database performance drops and a restore fixes it, so having sufficiently old backups is useful to limit downtime. Obviously non-production databases are different.

As you don't get file-level access, there are some things you simply cannot do on a PaaS database compared to running your own SQL Server. One important thing to check is the redundancy of your backups: by default they are geo-redundant, which is the most expensive option and may also be a problem for GDPR. Locally redundant storage keeps three copies inside the same physical location and is cheaper.
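To get a feel for what the redundancy choice means for the bill, here is a rough comparison. The price multipliers and per-GB rate below are assumptions for illustration only; check the Azure pricing page for your region and service tier before relying on them:

```python
def monthly_backup_cost(backup_gb, price_per_gb_lrs, multiplier):
    """Monthly backup storage cost relative to the LRS per-GB rate."""
    return backup_gb * price_per_gb_lrs * multiplier

# Assumed cost ratios relative to locally redundant storage (illustrative):
MULTIPLIERS = {"LRS": 1.0, "ZRS": 1.25, "GRS": 2.0}

log_backup_gb = 3 * 1024   # ~3 TB of log backups, as in the scenario above
price = 0.05               # hypothetical $/GB/month for LRS backup storage

for name, mult in MULTIPLIERS.items():
    cost = monthly_backup_cost(log_backup_gb, price, mult)
    print(f"{name}: ${cost:,.2f}/month")
```

Even with made-up rates, the point stands: with terabytes of log backups, dropping from geo-redundant to locally redundant backup storage roughly halves that line item.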

Userlevel 6
Badge +5

Hi @KCMT, does Rory’s comment above answer your question? Please let us know if you have any follow-up questions.

You are not the only one experiencing this problem.
I just closed a ticket with Azure where I was investigating the price of an Azure SQL Database (4 vCores) whose PITR backup storage cost more than the database itself.


The answer is:

Total billable backup storage size = (size of full backups + size of differential backups + size of log backups) – maximum data storage



Put that into the Azure Pricing Calculator and you will know the real price of your database upfront.
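The formula above is easy to turn into a quick estimator. The input figures below are hypothetical, loosely modelled on the scenario in this thread (a ~28 GB database whose log backups have grown to ~3 TB):

```python
def billable_backup_gb(full_gb, diff_gb, log_gb, max_data_gb):
    """Total billable backup storage =
    (size of full backups + differential backups + log backups) - max data storage.

    Backup storage up to the maximum data storage size is included,
    so only the excess is billed.
    """
    return max(0.0, full_gb + diff_gb + log_gb - max_data_gb)

# Hypothetical figures: 28 GB full backup, 12 GB of differentials,
# 3 TB of log backups, 32 GB maximum data storage.
excess = billable_backup_gb(full_gb=28, diff_gb=12, log_gb=3072, max_data_gb=32)
print(f"Billable backup storage: {excess:,.0f} GB")
```

Multiply the result by your region's per-GB backup storage rate (and redundancy option) to estimate the monthly charge.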


Userlevel 6
Badge +7


The backup costs are simply part of the cost equation. Given that DWHs are usually not business-critical, you can usually get away with less stringent backup setups than in an OLTP scenario (which is the main target for Azure PaaS databases, excepting Synapse).

If you run database servers on your own infrastructure, you need to take care of backups too, and that is typically non-trivial in terms of hardware and skills. If you do not want backups handled for you, you can run SQL Server directly on a VM and not set up backups, or set the backup retention to very short periods in PaaS databases. Additionally, not changing the data in your database as much (incremental over full load) will reduce delta size and therefore cost. Note that you can also significantly reduce cost by using reserved instances if you know you will be using vCores for a year or more.

Userlevel 6
Badge +5

Hi @KCMT, do the above comments answer your question? If so, please help us by marking a best answer above. Please let us know if you have any follow-up questions.