Transaction log size has increased faster than the designated threshold rate.
Impact: High
The AimBetter metric uses the predicted log file size as the baseline from which to measure. If left uncontrolled, log files can consume all available disk space, leading to a system crash. In addition, as the log grows larger, so does the potential downtime when recovering from a disaster. If the entire database is destroyed, restoring from backups involves recreating the data and log files; the larger the log file, the longer this takes.
A further impact may be seen on page file usage: as logs grow in memory and reduce the overall available space, page life expectancy may fall substantially.
There is no standard metric for log growth. This alert indicates unexpected behavior and needs to be investigated.
A failed backup (especially in Full Recovery mode) results in continual log growth and has a direct effect on overall database size. See our knowledge base entry on backups here.
Review your backup schedule and run any missing backups.
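One way to review backup coverage is to query the backup history that SQL Server keeps in msdb. The query below is a diagnostic sketch; the `D`/`L` type codes denote full and log backups respectively:

```sql
-- Most recent full and log backup per database, with recovery model
SELECT d.name AS database_name,
       d.recovery_model_desc,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
GROUP BY d.name, d.recovery_model_desc
ORDER BY last_log_backup;
```

Databases in Full Recovery mode with a stale (or NULL) `last_log_backup` are the likely culprits for uncontrolled log growth.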
Properly size your log file and disk capacity to account for a worst-case scenario (e.g. maintenance or other known large operations).
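Pre-sizing can be done with ALTER DATABASE. The database name, logical file name, and sizes below are illustrative assumptions; choose values that fit your own worst-case workload:

```sql
-- Pre-size the log and set sensible growth limits
-- (MyDatabase / MyDatabase_log and the sizes are examples only)
ALTER DATABASE MyDatabase
MODIFY FILE (
    NAME = MyDatabase_log,
    SIZE = 8GB,
    FILEGROWTH = 512MB,
    MAXSIZE = 32GB
);
```

Setting an explicit FILEGROWTH in megabytes (rather than a percentage) keeps growth events predictable as the file gets larger.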
Optimize the application code: avoid opening a transaction in your application while it is having long conversations with SQL Server, and do not risk leaving one open too long. When operating on large numbers of rows, consider batching them into more manageable chunks to give the log time to recover.
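A common batching pattern replaces one huge DELETE with a loop of small ones, so each batch commits separately and log truncation can proceed between batches. The table and column names here are hypothetical:

```sql
-- Delete old rows in manageable batches instead of one huge transaction
-- (dbo.AuditLog / CreatedAt are illustrative names)
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE CreatedAt < DATEADD(YEAR, -1, GETDATE());
    SET @rows = @@ROWCOUNT;
    -- In Full Recovery mode, frequent log backups between batches
    -- allow the log to truncate as the loop progresses.
END;
```

Each iteration is its own transaction, so the log never has to retain the entire delete at once.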
See our knowledgebase entry regarding long queries here.
Any process that has become blocked holds up the flow of commits to disk, which forces retention of data in the log.
See our knowledge base entry on blocks here.
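To see which sessions are currently blocked and what is blocking them, the dynamic management views can be queried directly. This is a diagnostic sketch using standard DMVs:

```sql
-- Sessions that are currently blocked, with the blocking session id
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Resolving (or killing, as a last resort) the head blocker lets the waiting transactions commit, after which the log can truncate normally.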
This metric is intended to identify any case where the transaction log file has grown beyond its planned size. The threshold sets the level beyond which log growth triggers a warning. Log file size depends heavily on the level of activity, on database backups being performed correctly and frequently, and on the database's recovery model.
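When investigating the alert, a quick first step is to check how full each log is and what, if anything, is preventing it from truncating:

```sql
-- Percentage of each transaction log currently in use
DBCC SQLPERF (LOGSPACE);

-- Reason SQL Server cannot currently truncate each log
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, NOTHING)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```

A `log_reuse_wait_desc` of `LOG_BACKUP` points at the backup schedule, while `ACTIVE_TRANSACTION` points at a long-running or orphaned transaction.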
All user activity that modifies, deletes, or creates data is logged. High levels of user activity, especially long-running transactions, can cause log files to grow very rapidly.
In the case of SQL Server 2012, Data Manipulation Language (DML) actions such as bulk data inserts can create massive log entries in a relatively short time. Actions that affect tempdb size include SELECT statements that use temporary working space to complete the query. Long-running transactions might also fill up tempdb's transaction log. Tempdb is used implicitly by user transactions for internal objects such as work tables for sorting, work files for hashing, cursor work tables, and row versioning. Even if the user transaction only reads data (SELECT queries), internal objects may be created and used under user transactions, filling the tempdb transaction log.
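Tempdb's log can be inspected the same way as any other database's. A minimal check, assuming SQL Server 2012 or later (where this DMV is available):

```sql
-- Current transaction log usage for tempdb
USE tempdb;
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

A steadily climbing `used_log_space_in_percent` during normal workload suggests internal objects or versioning activity is the driver rather than explicit user writes.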
In the case of long-running transactions, where SQL Server uses the transaction log for recovery purposes as described above, it must be able to see back to the start of the oldest open transaction. When a transaction takes a long time or makes many changes, the log cannot truncate at a checkpoint for any of the changes that are still part of open transactions, or that have started since that transaction began. Example: a delete of multiple rows in one statement is a single transaction, and the log cannot truncate until that whole delete is done. In Full Recovery mode this delete is fully logged, which can produce a very large volume of log records. Poor transaction management (not watching for and closing open transactions) can expand the log file very quickly.
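The oldest open transaction holding up truncation can be found with DBCC OPENTRAN or, in more detail, through the transaction DMVs. Both approaches below are standard diagnostics:

```sql
-- Oldest active transaction in the current database
DBCC OPENTRAN;

-- Open transactions with their sessions and how long they have been running
SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time,
       DATEDIFF(SECOND, at.transaction_begin_time, GETDATE()) AS open_seconds
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time;
```

A transaction that has been open for hours with no activity is a strong candidate for investigation: the log cannot truncate past its begin point until it commits or rolls back.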