Are You Smarter Than an MCM?
I had a great time on Wednesday co-presenting with some great SQL peeps! We dressed up, had a lot of fun, and shared a few technical tips along the way. My demo was on VLFs, an often forgotten and/or misunderstood part of the transaction log technology. The demo files are attached below. Thanks to everyone who came to the talk! #SQLPASS
The Team for DBA-414-M
Our fabulous “Smarter” team is:
- Our fearless mustachioed leader and Principal PM Lara (Foxworthy) Rubbelke
- SQL MCM Program Manager Robert (Robby) Davis
- Fellow PFE Pam (Pammy) Lahoud
- SQLCAT lab manager Jimmy (Buckaroo) May
- SQL DSE Cindy (Cinister) Gross
VLFs
Question: Cindy has ADHD. In the past she was able to keep up with multiple tasks at once, but lately she has slowed down. Why?
Answer: Excessive VLFs
A VLF is the boundary within your log file(s) for log operations such as scans (replication, recovery, mirroring, CDC, etc.) and log clearing/truncating.
- 100s of VLFs might be ok, 1000s is probably bad, and 10s of thousands is almost certainly bad. “Bad” means much slower operations for activities that work within VLF boundaries.
- DBCC LOGINFO is undocumented and unsupported but is the only way to see your VLF distribution. It returns one row per VLF.
- Often you get too many VLFs from autogrow/small growths.
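As a quick illustration, here is one way to count the VLFs in the current database. This is a sketch: DBCC LOGINFO is undocumented, and the column layout shown here matches SQL Server 2008/2008 R2 and may differ in other versions.

```sql
-- Capture DBCC LOGINFO output (one row per VLF) into a temp table
-- so we can count the rows. Column layout is version-dependent.
CREATE TABLE #loginfo (
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      INT,
    CreateLSN   NUMERIC(25, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

-- Total number of VLFs in the current database's log file(s).
SELECT COUNT(*) AS VLFCount
FROM #loginfo;

DROP TABLE #loginfo;
```

If the count is in the thousands, that is a signal to look at how the log was grown and consider fixing it as described below.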
VLF Lessons Learned:
- Avoid autogrow by pre-sizing your data and log files appropriately. This includes alerting when your free space starts to decrease significantly and planning for data growth over the lifecycle of the system. Make sure you check your system as well as your user databases.
- Set autogrow values to reasonably large amounts to avoid a series of small autogrows during an unexpectedly high-volume period, but not so big that the growths themselves slow down the system.
- Alert on autogrows and check (and possibly correct) VLFs afterwards.
- Check your VLFs periodically and during performance troubleshooting, and consider shrinking and regrowing (with proper pre-sizing) to reduce the total number of VLFs when the count gets “too high”.
- Never grow (manually or via autogrow settings) in increments of exactly 4GB. Values such as 4000MB or 8000MB are good, but there is a bug with multiples of exactly 4GB.
- Do NOT shrink your data or log files on a regular basis. Shrinking should be extremely rare and in response to a specific issue.
- And because it can’t be said too often, pre-size your databases to avoid autogrow.
- Revisit the sizes periodically.
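As a hedged example of pre-sizing, you might size the log up front and set a fixed-size autogrow increment rather than a percentage. The database and file names and the sizes here are hypothetical placeholders; pick values from your own capacity planning.

```sql
-- Hypothetical example: pre-size the log to 8000MB and set a fixed
-- 1000MB autogrow increment as a safety net. Note the sizes avoid
-- exact multiples of 4GB because of the bug mentioned above.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB, FILEGROWTH = 1000MB);
```

A fixed-MB increment keeps each (hopefully rare) autogrow predictable, whereas a percentage growth gets larger and slower as the file grows.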
Thanks everyone for coming to the talk and for the great feedback afterwards! You did fill out your evaluation forms, right? 🙂
Robert Davis's demos are here: http://www.sqlsoldier.com/…/summit2011sessionfiles
As Kalen Delaney pointed out during my demo and Pedro proved in the blog post below, in SQL Server 2012 you will see a warning during recovery when any database has more than 1000 VLFs.
The “rumor” is true: SQL Server 2012 has a new high VLFs warning.
blogs.msdn.com/…/the-rumor-is-true-sql-server-2012-has-a-new-high-vlfs-warning.aspx
Correct me if I'm wrong, but as far as I know we must differentiate OLAP and OLTP workloads when talking about VLFs, right?
VLFs are boundaries within a database log. OLAP vs. OLTP doesn't matter directly.
During the session you mentioned that you will be posting script that shows how to fix excessive VLFs. Do you still plan to do it?
I have a db that doesn't shrink down to less than ~175 VLFs. Do you know of a workaround? I was thinking of adding a second LDF, sizing it, and then getting rid of the bad one. What do you think?
Mark – there's a VLFDemo.zip at the bottom of the post. In there you will find VLFShrinkFile which has sample steps for shrinking and properly regrowing your log file with a "good" number of VLFs. If there are users or batch jobs (anything writing to the log) on the system you may have trouble shrinking it completely.
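The general shape of that shrink-and-regrow fix looks roughly like this. This is a sketch, not the attached VLFShrinkFile script; the database, file, path, and size names are placeholders, and the log backup step assumes the full recovery model.

```sql
-- 1. Back up the log so inactive VLFs can be cleared
--    (assumes full recovery model; path is a placeholder).
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';

USE MyDatabase;

-- 2. Shrink the log file as small as possible (target 1MB here);
--    active VLFs at the end of the log may block a full shrink,
--    in which case back up the log again and retry.
DBCC SHRINKFILE (MyDatabase_log, 1);

-- 3. Regrow in one (or a few) large, planned steps so you end up
--    with a small number of reasonably sized VLFs. Avoid growing
--    in exact multiples of 4GB.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);

-- 4. Verify the resulting VLF count.
DBCC LOGINFO;
```

If ongoing writes keep the tail of the log active, running the backup/shrink pair during a quiet period usually lets the shrink complete.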