
VLFs: Very Large Flatabases (wait, what?)

Not everything is small in Flatland.

Remember that cool old book Flatland? Or maybe you saw the movie? Aside from the mashup title, where I’m confounding VLFs and VLDBs, this post really doesn’t have much to do with Flatland (sorry). Except that, as you must know by now, I am a square.

I’m not the first square to post about virtual log files in the transaction log, but I’m guessing I won’t be the last either. So, with tons of credit to the SQL Skills crew (links below) where I picked up a good part of this, here’s my take on VLFs …

VLFs are logical management units within transaction log files – when a log file grows, it grows in VLF ‘blocks’, and when a log file truncates or shrinks, those operations are also applied to the file in blocks of VLFs. Additionally, during any database recovery operation (including cluster or mirror failovers, service restarts and database restores), each VLF within a t-log is examined to determine whether there are active transactions to roll forward. As you might imagine, an excessive number of VLFs can result in unnecessary overhead and delays in recovery time. Notice a DB that is taking an extra long time to come online after a service bounce? VLF count is the first thing to check out.
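One quick way to check the count is `DBCC LOGINFO`, which returns one row per VLF. A minimal sketch (it’s an undocumented but widely used command; `MyDB` is a placeholder database name):

```sql
-- Returns one row per VLF in MyDB's transaction log;
-- the row count is the VLF count. Status = 2 marks active VLFs.
DBCC LOGINFO ('MyDB');
```

A log with a handful of rows here is healthy; logs with thousands of rows are the ones that make recovery crawl.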

Small values for a transaction log file auto-grow setting are the typical reason for too many VLFs. A good rule of thumb is that the auto-grow setting should be configured to generate VLFs of about 512 MB to 1 GB. Of course, the best solution is not to rely on auto-growth at all and to manually size the transaction logs if and when you can. Either way, here is what you need to know to grow and/or set the auto-grow value.
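Setting the auto-grow increment is a one-liner. A sketch, assuming a database named `MyDB` whose log file’s logical name is `MyDB_log` (check `sys.master_files` for the real logical name):

```sql
-- An 8000 MB growth increment lands in the "larger than 1 GB" bucket,
-- so each auto-growth creates 16 VLFs of roughly 500 MB each.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 8000MB);
```
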

SQL’s internal process to create VLFs follows these guidelines:

  • Growths of up to 64 MB generate 4 VLFs
  • Growths larger than 64 MB and up to 1 GB generate 8 VLFs
  • Growths larger than 1 GB generate 16 VLFs

Using these rules we can control a log file’s growth to achieve something close to the ideal 512 MB – 1 GB VLF size. In my experience the re-sizing process is usually part of a larger project to reduce VLF counts. The following steps outline the whole effort. This is a simple and effective way to get close to the file size and VLF count you want without getting lost in the minutiae or splitting too many hairs.

This makes it easy:

  • Before you shrink the log file, note its current size in MB (assuming that is already the correct size), or note whatever the proposed new size should be.
  • Shrink, backup, repeat as necessary to minimize VLFs. This works best in a quiet period so the log can clear easily. If there are many VLFs, quite a few transaction log backups may be required between each shrink, since the shrink clears only the unused VLFs. Keep tabs on things as needed by noting the results of the DBCC SHRINKFILE.
  • Divide 8000 MB into the original size (or proposed new size), round up, and grow the log back in that many steps of 8000 MB each (use 8000 MB because there is a bug with 4000 MB growth increments). To explain: since each log file growth over 1 GB creates 16 VLFs, each 8000 MB growth results in VLFs of approximately 500 MB – right at the ideal size.
  • For very large transaction log files – say over 240 GB – the growth statements can be issued in multiples of 16000 MB to create approximately 1 GB VLFs.
  • For smaller logs – say less than 4 GB – a single growth statement is usually fine. Don’t pet the sweaty stuff.
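The steps above can be sketched as a script. This is only an illustration, assuming database `MyDB`, log logical name `MyDB_log`, a hypothetical backup path, and a target log size of 24000 MB (so 24000 / 8000 = 3 growth steps):

```sql
USE MyDB;

-- 1. Shrink the log as small as it will go. If active VLFs keep the
--    file from shrinking, back up the log and shrink again.
DBCC SHRINKFILE (N'MyDB_log', 0);
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn';  -- hypothetical path
DBCC SHRINKFILE (N'MyDB_log', 0);                      -- repeat as needed

-- 2. Grow back in 8000 MB multiples. Each growth is > 1 GB, so each
--    statement adds 16 VLFs of ~500 MB.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 24000MB);
```

Note that `SIZE` in `ALTER DATABASE ... MODIFY FILE` is the new total size, not an increment – hence the 8000/16000/24000 progression rather than three identical statements.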

Note: SQL Server 2014 has new rules around VLF creation during file growth – a cool improvement. The steps listed above still apply. Check it out here:

Want to dig in? More on VLFs from SQL Skills here:

Thank you for reading, and happy shrinking/growing!