Care and Feeding of the Transaction Log
Want to learn more about how the transaction log works? Kalen Delaney, celebrated author of SQL Server 2008 Internals and the Inside SQL Server series and a world-class trainer and performance tuner, is giving a pre-conference talk at SQL PASS next week. The pre- and post-conference sessions are full-day sessions you can purchase as add-ons to the regular conference (which runs Tuesday through Thursday). Kalen’s pre-conference session “Care and Feeding of the Transaction Log” is scheduled for Monday, November 2, 2009. She will be raffling off copies of her book and will have copies of her DVD to give away. If you are guilty of any of the transaction log practices below, or have colleagues who are and need ammunition in your fight for change, Kalen’s session will be a great help! Note that she will not necessarily cover all the items below; they are my own list, not hers. 🙂
How People Abuse Their SQL Server Transaction Log – Things NOT to Do
· Rely on autogrow – this causes fragmentation, poorly allocated VLFs, a performance hit for the duration of each growth, etc.
· Don’t take log backups at all, or do them on a schedule that doesn’t meet SLAs or doesn’t keep the log “small enough”.
· Use the simple recovery model in the belief that it will improve performance, even when there are no bulk inserts or other operations that actually benefit from minimal logging.
· Want to “turn off logging”.
· “Shrink” the log by deleting the LDF file – can result in corruption and an unusable database.
· Shrink the log file when it will just grow again.
· Add multiple LDF files in the mistaken belief that this will improve performance (the multiple-threads myth, or thinking it will split IO across files when log writes are mostly sequential). This is a particular problem with TempDB, where the guidance of 1/4 to 1 file per core applies only to the data files, not the log file, and we don’t always make that clear.
· Only back up the log when it reaches a certain percentage full, to reduce the number of backups running at any one time – this can result in not being able to meet your recovery SLAs.
· Enable instant file initialization and expect it to help with log growth (it applies only to data files; log files must always be zero-initialized).
· Create one extremely large log file in a single step without regard to the number of VLFs created.
· Follow SharePoint documents that recommend the simple recovery model to “keep the log from filling” without explaining the recoverability tradeoffs, such as losing point-in-time recovery options.
· Don’t understand the difference between truncating and shrinking the log.
· Don’t realize that even in the simple recovery model you still have to take full backups.
· Don’t realize that after switching from simple to full or bulk-logged recovery, a full or differential backup is needed to restart the log backup chain.
· Think DBCC CHECKDB checks for inconsistencies in the log (it does not check the log).
· Put data and log files on the same drive (mixing random data IO with the log’s mostly sequential writes), put lots of logs on one drive (defeating the purpose of giving the log’s sequential writes their own IO path), or put Profiler output or other activity on the same drive as a log.
· Don’t put the log on a sufficiently fast, properly configured drive (RAID 10, disk partition alignment, separate from file server/other SQL/database data IO, 64 KB disk allocation unit size, HBA queue depth of 64+, anything that gets writes to 3 ms or less, etc.).
· Ignore corruption messages/event log messages about bad IO.
· Don’t secure the directories where the MDF/LDF and backup files reside.
· Ignore the fact that transactional replication affects log size (the log can’t be truncated until the log reader agent has read the data from it).
· Do many transaction log backups between full backups (with few or no differentials) without considering what happens if one of the files is corrupted or missing, or how long the restores will take.
· Don’t test the DR strategy, or even have it documented and thought through.
· Don’t exclude the MDF/LDF files from virus scanning software.
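To make the autogrow and VLF points above concrete, here is a T-SQL sketch of checking the VLF count and pre-sizing the log instead of relying on autogrow. The database name SalesDB and logical file name SalesDB_log are hypothetical; adjust sizes to your own workload.

```sql
-- Each row returned by DBCC LOGINFO is one virtual log file (VLF);
-- a very large row count suggests the log grew in many tiny autogrow steps.
DBCC LOGINFO ('SalesDB');

-- Pre-size the log deliberately and set a sensible fixed growth increment
-- so any future growth creates a reasonable number of VLFs.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB, FILEGROWTH = 4GB);
```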
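On the difference between truncating and shrinking: truncating (marking inactive VLFs as reusable) happens via log backups in the full and bulk-logged models and never changes the file size on disk; shrinking physically releases space. A sketch, again using the hypothetical SalesDB:

```sql
-- "Truncate": back up the log so inactive VLFs become reusable.
-- The LDF file stays the same size on disk.
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log.trn';

-- "Shrink": physically release space back to the OS. Only worth doing after
-- a one-off event; a log shrunk below its working size will just grow again.
DBCC SHRINKFILE (SalesDB_log, 1024);  -- target size in MB
```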
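The recovery model switch deserves its own sketch: after moving to the full model, log backups only become possible once a data backup has been taken (hypothetical names and paths):

```sql
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- The log backup chain starts only after a full (or differential) backup
-- taken while in the full recovery model:
BACKUP DATABASE SalesDB TO DISK = N'X:\Backups\SalesDB_full.bak';

-- From here on, regular log backups keep the chain intact
-- (and keep the log file from growing without bound):
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log1.trn';
```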
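Finally, the restore side shows why a long chain of log backups with no differentials is risky: every file must be present and intact, and restored in order. A sketch of a point-in-time restore, assuming the same hypothetical backups:

```sql
RESTORE DATABASE SalesDB FROM DISK = N'X:\Backups\SalesDB_full.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'X:\Backups\SalesDB_log1.trn' WITH NORECOVERY;
-- One corrupt or missing .trn file stops the sequence at that point.
RESTORE LOG SalesDB FROM DISK = N'X:\Backups\SalesDB_log2.trn'
    WITH STOPAT = '2009-11-02 09:00:00', RECOVERY;  -- point-in-time recovery
```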
Instead of abusing your transaction log, pay attention to Pond’s Twelfth Law: Don’t practice in front of the CIO. A professional prepares ahead of time. To help you prepare, attend Kalen’s pre-conference session!