Befriending Dragons

Transform Tech with Anti-bullying Cultures



SQLPASS: Are You Smarter Than an MCM? VLF Demos

Are You Smarter Than an MCM?

I had a great time on Wednesday co-presenting with some great SQL peeps! We dressed up, had a lot of fun, and shared a few technical tips along the way. My demo was on VLFs, an often forgotten and/or misunderstood part of transaction log technology. The demo files are attached below. Thanks to everyone who came to the talk! #SQLPASS

The Team for DBA-414-M

Our fabulous “Smarter” team is pictured in the links below.

VLFs

Question: Cindy has ADHD. In the past she was able to keep up with multiple tasks at once, but lately she has slowed down. Why?

Answer: Excessive VLFs

A VLF (virtual log file) is a boundary within your log file(s) for log operations such as scans (replication, recovery, mirroring, CDC, etc.) and log clearing/truncating.

  • 100s of VLFs might be ok, 1000s is probably bad, and 10s of thousands is almost certainly bad. Bad means much slower operations for activities that work within VLF boundaries.
  • DBCC LOGINFO is undocumented and unsupported but is the only way to see your VLF distribution. It returns one row per VLF (see the sketch after this list).
  • Too many VLFs most often come from autogrow and other small growths.
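Here is a minimal sketch, not part of the attached demo files, of one common way to count VLFs: capture DBCC LOGINFO into a temp table and count the rows. The column list below matches SQL Server 2008; newer versions add a RecoveryUnitId column at the front of the output, so adjust the table if the insert fails.

-- Count VLFs in the current database.
-- DBCC LOGINFO is undocumented, so its output may change between versions.
CREATE TABLE #loginfo
(
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(25, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- One row per VLF: 100s might be ok, 1000s is probably bad.
SELECT COUNT(*) AS VLFCount FROM #loginfo;

DROP TABLE #loginfo;

Run it in each database you care about; rows with Status = 2 are VLFs currently in use.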

 VLF Lessons Learned:

  • Avoid autogrow by pre-sizing your data and log files appropriately. This includes alerting when your free space starts to decrease significantly and planning for data growth over the lifecycle of the system. Make sure you check your system as well as your user databases.
  • Set autogrow values to reasonably large amounts to avoid a series of small autogrows in an unexpectedly high-volume period, but not so large that the growths themselves slow down the system.
  • Alert on autogrows and check (and possibly correct) VLFs afterwards.
  • Check your VLFs periodically and during performance troubleshooting, and consider shrinking and regrowing properly (with the proper presizing) to reduce the total number of VLFs when the count gets “too high” (see the sketch after this list).
  • Never grow (manually or via autogrow settings) in increments of exactly 4GB. Values such as 4000MB or 8000MB are good, but there is a bug with multiples of exactly 4GB.
  • Do NOT shrink your data or log files on a regular basis. Shrinking should be extremely rare and in response to a specific issue.
  • And because it can’t be said too often, pre-size your databases to avoid autogrow.
  • Revisit the sizes periodically.
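If you do need to fix an oversized VLF count, the usual pattern is a one-time shrink followed by an immediate regrow to the planned size. A minimal sketch, assuming a database named YourDB whose log file has the logical name YourDB_log (both placeholders):

USE YourDB;
GO
-- One-time fix during a quiet period; back up the log first so it can be cleared.
DBCC SHRINKFILE (YourDB_log, 1);   -- target size in MB
GO
-- Regrow to the planned size. Use values like 4000MB or 8000MB,
-- never exact multiples of 4GB (4096MB), because of the growth bug above.
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_log, SIZE = 8000MB, FILEGROWTH = 4000MB);
GO

For very large logs, grow in several chunks (for example, 8000MB at a time) so each growth creates a reasonable number of reasonably sized VLFs.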

 Thanks everyone for coming to the talk and for the great feedback afterwards! You did fill out your evaluation forms, right? 🙂

Pictures:

http://yfrog.com/z/h88chpvj

http://flic.kr/p/awbV6h

VLFDemo.zip



How People Abuse Their SQL Server Transaction Log – Things NOT to Do

Care and Feeding of the Transaction Log


Want to learn more about how the transaction log works? Kalen Delaney, celebrated author of SQL Server 2008 Internals and the Inside SQL Server series as well as a world-class trainer and performance tuner, is giving a pre-conference talk at SQL PASS next week. The pre- and post-conference sessions are full-day sessions you can purchase as an add-on to the regular conference (which runs Tuesday-Thursday). Kalen’s pre-conference session “Care and Feeding of the Transaction Log” is scheduled for Monday, November 2, 2009. She will be raffling off copies of her book and will have copies of her DVD to give away. If you are guilty of any of the transaction log practices below, or have colleagues who are and you need ammunition in your fight for change, Kalen’s session will be a great help to you! Note that she will not necessarily be covering all the items below; they are my own list, not hers. 🙂

How People Abuse Their SQL Server Transaction Log – Things NOT to Do

  • Rely on autogrow – this causes fragmentation, poorly allocated VLFs, a performance hit for the duration of each grow, etc. (a quick health-check script follows this list).
  • Don’t take backups at all, or take them on a schedule that doesn’t meet SLAs or doesn’t keep the log “small enough”.
  • Use the simple recovery model in the belief it will improve performance when there are no bulk inserts or other operations that actually benefit from minimal logging.
  • Want to “turn off logging”.
  • “Shrink” the log by deleting the LDF file – this can result in corruption and an unusable database.
  • Shrink the log file when it will just grow again.
  • Add multiple LDF files in the mistaken belief that this will improve performance (the multiple-threads myth, or thinking it will split IO over multiple files when log writes are mostly sequential). This is a particular problem with TempDB, where we recommend 1/4 to 1 file per core and don’t always make it clear we mean only the data files and not the log file.
  • Only back up the log when it reaches a certain percentage full to reduce the number of backups occurring at any one time – this can result in not being able to meet your recovery SLAs.
  • Enable instant file initialization and expect it to help with log growth (it applies only to data files; log files must always be zero-initialized).
  • Create one extremely large log file with no regard for the number of VLFs created.
  • Follow SharePoint documents that recommend the simple recovery model to “keep the log from filling” without explaining the recoverability tradeoffs, such as losing point-in-time recovery options.
  • Don’t understand the difference between truncating and shrinking the log.
  • Don’t realize that even in the simple recovery model you still have to take a full backup first.
  • Don’t realize they need to restart the log backup chain after switching from simple to full (a full or differential backup, then log backups) or take a log backup after switching from bulk-logged to full.
  • Think DBCC CHECKDB checks for inconsistencies in the log (it checks the data files, not the log).
  • Put data and log on the same drive (mixing random and mostly sequential IO), put lots of logs on one drive (defeating the purpose of giving the log’s sequential writes their own IO path), or put Profiler output or other activity on the same drive as a log.
  • Don’t put the log on a fast enough, properly configured drive (RAID 10, disk partition alignment, separate from file server/other SQL/database data, 64KB disk allocation unit, HBA queue depth 64+, anything that gets writes to 3ms or less, etc.).
  • Ignore corruption messages and event log messages about bad IO.
  • Don’t secure the directories where the MDF/LDF and backup files reside.
  • Ignore the fact that transactional replication affects log size (the log can’t be truncated until the log reader has read the data from it).
  • Do many transaction log backups between full backups (with few or no differentials) without considering what happens if one of the files is corrupted or missing, or how long it will take to do the restores.
  • Don’t test the DR strategy, or at least have it documented and planned out.
  • Don’t exclude the MDF/LDF files from virus scanning software.
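If you want to see where your own instance stands before the session, here is a minimal sketch (my own check, not necessarily Kalen’s material) that surfaces a few of these habits at once: the recovery model, log file size and autogrowth settings, and the date of the last log backup for each database.

-- Spot-check log hygiene across all databases on the instance.
SELECT
    d.name                 AS database_name,
    d.recovery_model_desc  AS recovery_model,    -- SIMPLE here deserves a second look
    mf.name                AS log_file,
    mf.size * 8 / 1024     AS log_size_mb,       -- size is stored in 8KB pages
    CASE WHEN mf.is_percent_growth = 1
         THEN CAST(mf.growth AS VARCHAR(10)) + '%'   -- percent growth: usually a bad sign
         ELSE CAST(mf.growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
    END                    AS autogrowth,
    (SELECT MAX(b.backup_finish_date)
       FROM msdb.dbo.backupset b
      WHERE b.database_name = d.name
        AND b.type = 'L')  AS last_log_backup    -- NULL = no log backup ever
FROM sys.databases d
JOIN sys.master_files mf
  ON mf.database_id = d.database_id
WHERE mf.type_desc = 'LOG'
ORDER BY d.name;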


Instead of abusing your transaction log, pay attention to Pond’s Twelfth Law: Don’t practice in front of the CIO. A professional prepares ahead of time. To help you prepare, attend Kalen’s pre-conference session!