The Trouble With Transaction Logs

This is how you look when you find out how many VLFs your log files have.

Your transaction log files are a mess.

You don’t think so, but it’s true. Too many times I have seen transaction logs treated without care; they are placed on the same disk drives as the data files or they are placed on drives not optimized for write activity. I have even seen admins who fail to back them up.

If you are one to care for your transaction log files, you have likely addressed each of the issues I listed above. But those are all external issues. I bet you have never thought about the internal fragmentation of those files.

A transaction log file is actually made up of small chunks that are called virtual log files, or VLFs. These are the units that are needed to allow for log reuse. If VLFs did not exist then the transaction log would always be growing. Because of VLFs the transaction log is able to clear portions of itself to allow for new transactions to be written instead of always needing to grow.

You can examine your VLFs by using the following command:

DBCC LOGINFO;

That will show you the VLFs for your transaction log for the database you are currently using. You can go here and read more about VLFs from me if you are curious to learn more about how to gather the details for all your databases. That’s not why I am writing this post today.
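
If you just want a quick count per database, and you are on SQL Server 2016 SP2 or later, a query against sys.dm_db_log_info (which returns one row per VLF) gets you there without running DBCC LOGINFO in every database by hand. This is only a sketch; older versions will need the DBCC LOGINFO approach instead:

SELECT d.name AS [database_name],
 COUNT(*) AS [vlf_count], --one row per VLF, so the row count is the VLF count
 SUM(li.vlf_size_mb) AS [log_size_mb]
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY [vlf_count] DESC;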

Here’s why: Defaults.

The default setting for the model database is for a 1MB transaction log to autogrow by ten percent (until it hits two terabytes). So what would that mean for VLFs for users (*COUGH* vendor applications *COUGH*) that use the default values? Let’s find out together, shall we?
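
If you want to confirm what your own model database is set to, a quick look at sys.master_files will show you (a sketch, assuming you have permission to view the catalog):

SELECT name, type_desc,
 size * 8 / 1024 AS [size_mb], --size is stored in 8KB pages
 growth, --a percentage when is_percent_growth = 1, otherwise pages
 is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('model');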

The first thing I need to do is to get some details on how many VLFs will be created when you are creating or extending a transaction log file. I’m not able to get those details from MSDN, which I find surprising. Thankfully I know where to get MSDN information when MSDN doesn’t have it: from Kimberly. That’s where I get the first piece of information, which I have put into this table for you:

Initial or Extended Size    Number of VLFs created
< 64MB                      4
>= 64MB and < 1GB           8
>= 1GB                      16
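
If you want to see that table in action, a throwaway database makes it easy to verify. This is only a sketch (VLFDemo is a made-up name, and the logical log file name defaults to VLFDemo_log); note that SQL Server 2014 and later tweak the algorithm for small growths, so your counts may differ slightly:

CREATE DATABASE VLFDemo; --inherits model's 1MB log and 10 percent autogrowth
GO
USE VLFDemo;
GO
DBCC LOGINFO; --baseline: the handful of VLFs in the tiny starting log
GO
ALTER DATABASE VLFDemo
 MODIFY FILE (NAME = VLFDemo_log, SIZE = 2048MB); --a single extension of 1GB or more
GO
DBCC LOGINFO; --expect roughly 16 additional VLFs from that one growth
GO
USE master;
GO
DROP DATABASE VLFDemo; --clean up
GO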

OK, so we know how many VLFs are created based upon the growth size, we know our growth size (10 percent), and we know our starting point (1MB). I can put together some code that will tell me the maximum number of VLFs for a log file that is set to grow by defaults:

DECLARE @VLFcnt int, @curr float,
 @FV float, @PV float,
 @i float, @n int;

SELECT @VLFcnt = 4, --VLFs created for the initial log of less than 64MB
 @curr = 1, --log size before the current growth event, in MB
 @FV = 1, --log size after the current growth event, in MB
 @PV = 1, --starting log size in MB, you can change to be your current log size
 @i = .1, --your current default growth percentage
 @n = 1; --number of growth events so far
WHILE @FV < 2000000 --roughly 2TB, the limit for a single log file
 BEGIN
  SELECT @FV/1024 AS [logSizeGB], @VLFcnt AS [VLF count]
  SET @curr = @FV --capture the size before this growth event
  SET @FV = @PV * POWER(1+@i, @n) --size after @n growth events of 10 percent each
  IF @FV - @curr < 64.0
   SET @VLFcnt = @VLFcnt + 4 --growth of less than 64MB adds 4 VLFs
  IF @FV - @curr >= 64.0 AND @FV - @curr < 1024.0
   SET @VLFcnt = @VLFcnt + 8 --growth of 64MB to less than 1GB adds 8 VLFs
  IF @FV - @curr >= 1024.0
   SET @VLFcnt = @VLFcnt + 16 --growth of 1GB or more adds 16 VLFs
  SET @n = @n + 1 --on to the next growth event
 END

The result I got was that the default autogrowth settings would lead to 1,388 VLFs being created.

I happened to recall that Paul Randal (blog | @PaulRandal) ran a survey a while back showing that many people have more than 1,400 total VLFs in their log files, and that got me thinking a bit more. How would one arrive at a transaction log with more than 1,400 VLFs?

The easiest explanation is that users are not taking the defaults. That’s normally a good thing, mind you, but the choices they are making instead must not be much better! In my example I am using the default growth of 10 percent. I know many a DBA who sets the growth for their logs to a fixed size, something like 1, 2, or 4GB. Each growth event would then add 16 VLFs. If I were to grow my log file in 1GB chunks, then after 100GB of growth I would have an additional 1,600 VLFs.

What if my growth size was set to something silly, like 1MB? That same 100GB of log file growth would result in an additional 1024 x 100 x 4 = 409,600 VLFs (that is the number of megabytes in a gigabyte, times 100 for the number of gigabytes, times 4 for the number of VLFs for each 1MB of growth).
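
To put those fixed-growth numbers side by side, here is a quick back-of-the-napkin query that applies the allocation table from earlier to 100GB of total growth (just a sketch; the growth sizes in the VALUES list are examples):

SELECT growth_mb,
 102400 / growth_mb AS [growth_events], --102400MB = 100GB of total growth
 (102400 / growth_mb) *
 CASE WHEN growth_mb < 64 THEN 4
  WHEN growth_mb < 1024 THEN 8
  ELSE 16 END AS [vlfs_added]
FROM (VALUES (1), (64), (1024), (4096)) AS g(growth_mb);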

I would say that 409,600 VLFs is likely going to be an issue for you. I won’t even bother with an “it depends” and share details of disk speed. I’m just going to cut to the chase and tell you that if you have hundreds of thousands of VLFs you should take steps to clean them up.

There is no right number of VLFs for your log file; it depends on the size of your log and your growth setting. I like to use a VLF size of 512MB as my guideline, as that is a decent size to work with (if a VLF is too large, that portion of the log may never be inactive long enough to be cleared, so I want VLFs to be small, but not too small, otherwise the log is heavily fragmented). That means for a 100GB transaction log I would expect around 200 VLFs at 512MB each, so if I saw a number much larger than 200 VLFs for that 100GB log file I would take a deeper dive to see about correcting the issue. But you could also have a 128MB transaction log with about 500 VLFs, which would be sub-optimal, so this really is one of those “it depends” scenarios. You’ll know it’s an issue when you see it.
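
When you do find a log file with a VLF count that is way out of line, the usual fix is to shrink the log and then grow it back in a few large, fixed steps so each step adds only 16 VLFs. The sketch below assumes hypothetical names (MyDB, MyDB_log), a recent log backup, and a quiet period:

USE MyDB;
GO
DBCC SHRINKFILE (MyDB_log, 1); --shrink the log as far as it will go
GO
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB); --regrow in large chunks,
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 16384MB); --16 VLFs per step
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 24576MB);
GO
DBCC LOGINFO; --confirm the much lower VLF count
GO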

What could cause this growth anyway? Loading data in rather large batches into a database using the full recovery model, that’s what. It is something I have seen countless times: end users and developers pushing a button without understanding the implications of their actions and then blaming the DBA when things fail. It would be similar to someone showing up at a grocery store with a five pound sack, trying to load ten pounds of sugar into it, and then yelling at the grocery clerk for allowing them to shop for sugar in the first place.

Having too many VLFs is going to cause an issue for anything that needs to read the transaction log. Besides normal log activity, that means things like transactional replication, mirroring, CDC (Change Data Capture), redo/undo…yeah, all of that will be impacted. If you are using any one of those technologies you should be monitoring your log files for the number of VLFs.

In case you were wondering: since the largest size of a transaction log file is 2TB, and the smallest possible VLF size is 256KB, the theoretical maximum number of VLFs in a single transaction log file is 8,388,608.

I wonder if DBCC LOGINFO would even be able to return that many rows!

15 thoughts on “The Trouble With Transaction Logs”

  1. err…
    Default setting for model is not 1MB, check that out again.
    After that, if you are free, two questions here:
    1. Did you check if TLogs can go beyond 2TB?
    2. Do you think this is always true: “Initial or Extended Size < 64MB → 4 VLFs created”?
    Let me know your thoughts on these…

    • Yusuf,

      I have a new SQL2012 install, and model is set to 1MB log file with a 10 percent autogrow. What are your settings?

      I did not check if the log can go beyond 2TB, as I don’t have access to any drive that size (or larger) so I thought it was safe to trust what MSDN says is the limit for a single log file.

      Yes, I believe that if you extend your log file by less than 64MB it will create 4 VLFs of equal size.

      Tom

  2. Top notch! I’ve been preaching about this problem for a while and so far it’s fallen on deaf ears. Thanks a mil!

  3. I saw Kimberly’s post on this a few years back, but thanks for bringing it up again as it needs to be addressed. My first pass on my data warehouse turned up some VERY fragmented log files, upwards of 8000 VLFs. I’ve got into the habit, during my monthly reviews, of running a script that looks for logs with more than 30 VLFs and then shrinking and regrowing them back to the proper size. I still don’t know why MS leaves a percentage growth rate as the default, especially when they KNOW it causes issues.

  4. I just noticed one more thing… when the code was added to the website it changed ‘>’ to ‘&gt;’ and ‘<’ to ‘&lt;’. Change those and you will be good.

