Fast & Reliable Cloud Backups

For MySQL, MongoDB, Linux, Unix

Get Started

Dec 12, 2014

How to Use SQL Server Transaction Logs to Improve Your DB's Performance

Categories tutorial

Posted by Gen


TL;DR: Every change to the objects contained in your database is recorded in SQL Server's transaction logs. That makes the logs the logical first stop when troubleshooting a problem with your database. Unfortunately, the transaction logs themselves can sometimes cause problems of their own. Here's how to put the logs to best use.

SQL Server's transaction logs can be a fountain of information about your databases. Unfortunately, the logs can sometimes be the source of the performance trouble their information is intended to help identify and prevent.

The Microsoft Developer Network's documentation for the SQL Server 2014 Transaction Log lists five primary purposes for the logs:

  • Recovery of individual transactions
  • Recovery of all incomplete transactions when SQL Server is started
  • Rolling a restored database, file, filegroup, or page forward to the point of failure
  • Supporting transactional replication
  • Supporting high availability and disaster recovery solutions: AlwaysOn Availability Groups, database mirroring, and log shipping

The documentation also points out a common source of problems related to transaction logs: truncation. You have to truncate the log periodically to keep it from filling all available disk space. Truncation deletes inactive virtual log files to free up space in the physical transaction log.

The transaction log is truncated automatically after a checkpoint in the simple recovery model, and under the full recovery model (or bulk-logged recovery model) after a log backup, if there has been a checkpoint since the most recent backup (copy-only log backups are the exception). The MSDN document explains the factors that can delay an automatic log truncation.
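You can see which rules apply to a given database, and trigger the relevant events, with a few standard T-SQL statements. This is a minimal sketch; the database name and backup path are placeholders:

```sql
-- Check each database's recovery model to know which truncation rule applies.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Under the FULL (or BULK_LOGGED) model, only a log backup permits truncation.
-- Database name and file path are illustrative.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- Under the SIMPLE model, truncation follows a checkpoint, which you can
-- also request manually:
CHECKPOINT;
```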

How to use fn_dblog to analyze your transaction log

There's a wealth of system information in the SQL Server transaction log, but accessing and interpreting it can be a challenge. In a March 10, 2014, post, Remus Rusanu describes how to put the fn_dblog function to use to glean useful information from the log. (Note that fn_dblog is the successor to the undocumented DBCC LOG command, as Thomas LaRock explains on the SolarWinds LogicalRead site.)

The write-ahead logging (WAL) protocol ensures that any change to data stored in the database is recorded somewhere in the transaction log. This includes minimally logged, bulk logged, and so-called non-logged operations such as TRUNCATE.
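The simplest way to inspect those records is to query fn_dblog directly. The function takes a start and end LSN; passing NULL for both returns the entire active log. Keep in mind that fn_dblog is undocumented and unsupported, and its column set can vary between versions:

```sql
-- Dump the active portion of the current database's transaction log.
-- The two arguments are the start and end LSNs; NULL means "no bound".
SELECT [Current LSN], Operation, Context, [Transaction ID]
FROM fn_dblog(NULL, NULL);
```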

Rusanu provides the example of a log recording three concurrent transactions: one with two inserts and one delete; one with a single insert that is rolled back, so there's no corresponding delete operation; and one with two deletes and one insert. The log sequence number (LSN) determines the order in which the "concurrent" operations appear in the log.


This SQL Server transaction log shows interleaved operations from multiple simultaneous transactions. Source: Remus Rusanu

As shown in the above example, the [Transaction ID] column holds the system-generated transaction ID for each logged operation. Start your analysis at the LOP_BEGIN_XACT operations, which record the date and time of the transaction, the user SID, and other useful information. Rusanu provides a detailed examination of a single transaction as well as an in-depth look at a log after truncation.
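A filtered query along these lines lists each transaction's starting record, with the SID resolved to a login name (a sketch, assuming the fn_dblog column names used in Rusanu's examples):

```sql
-- List the start of each transaction in the active log: when it began,
-- what it was, and who ran it. SUSER_SNAME() maps the SID to a login name.
SELECT [Current LSN],
       [Transaction ID],
       [Transaction Name],
       [Begin Time],
       SUSER_SNAME([Transaction SID]) AS LoginName
FROM fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_BEGIN_XACT';
```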

A transaction log that just keeps growing can leave you scratching your head. A DBA Stack Exchange post from December 2012 presents a typical situation: a transaction log that refuses to truncate. The most likely causes are a long-running transaction, such as index maintenance, or leaving the database in the default Full recovery model while going a long time between log backups.
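Two built-in DBCC commands help narrow down which cause you're facing:

```sql
-- Report each database's log size and the percentage of it in use.
DBCC SQLPERF(LOGSPACE);

-- Report the oldest active transaction in the current database --
-- a long-running transaction here can keep the log from truncating.
DBCC OPENTRAN;
```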

One of the proposed solutions is to query the sys.databases catalog view: the log_reuse_wait column holds a reason code, and the log_reuse_wait_desc column holds a description of what log truncation is waiting on.
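The query itself is a one-liner:

```sql
-- Why can't the log be truncated? log_reuse_wait_desc names the reason,
-- e.g. NOTHING, CHECKPOINT, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION.
SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases;
```

LOG_BACKUP, for example, means the log is waiting for a log backup under the full recovery model, which points straight back to the backup-frequency problem described above.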

Whether you're diagnosing a misbehaving database or restoring one after a catastrophic failure, the BitCan cloud storage service has your back. BitCan's easy-to-use interface lets you set up and schedule your database backups within minutes. The service supports MySQL and MongoDB databases as well as Unix/Linux systems and files.

BitCan encrypts your data at the communication and storage layers, and your backups are stored permanently on Amazon S3 servers. Visit the BitCan site to create a free 30-day trial account.
