
What the VLF?


May 10, 2013 by Mike Hillwig

When I interviewed for my current position, the hiring manager asked me if I knew anything about VLFs. My response was “Very large filesystem?” Clearly I missed that question. She was talking about SQL Server’s Virtual Log Files.

Until that day, I had never heard of VLFs, and to this day, a lot of DBAs don’t know what they are. But before I get too far into this, let me state that I’m hardly the expert on this. Kimberly Tripp from SQLskills has written several really good articles on transaction log throughput where she talks about VLFs.

VLFs are a construct within the SQL Server transaction log that breaks the log file into chunks. Those chunks are the segments that get marked as in use or ready to be reused.

The bottom line is that if you have lots of really small VLFs in your log file, you probably have a performance problem and you may not even know it. The best way to avoid this is to avoid the default size and autogrowth settings for the transaction log file on a new database. If you start with a 1 MB transaction log file that grows by 10%, you’re going to have lots of little VLFs before you know what hit you.

My suggestion for a good practice is to start with a larger transaction log file and grow it in larger increments. For our datamart environments, we start with an 8 GB transaction log file and grow it in 8 GB increments. For smaller databases with less throughput, we set a minimum of a 256 MB transaction log file with a growth increment of 256 MB.
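As a sketch of what that looks like at creation time (the database name, file paths, and logical file names here are placeholders, not from any real environment):

[SQL]-- A minimal sketch of setting explicit log size and growth at creation.
-- Database name, paths, and logical file names are placeholders.
CREATE DATABASE SmallApp
ON (NAME = SmallApp_data,
    FILENAME = 'D:\Data\SmallApp.mdf')
LOG ON (NAME = SmallApp_log,
        FILENAME = 'L:\Logs\SmallApp_log.ldf',
        SIZE = 256MB,        -- start at 256 MB instead of the tiny default
        FILEGROWTH = 256MB)  -- grow in fixed 256 MB steps, not 10%
GO[/SQL]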

In this session, I talk about how this can be a Goldilocks problem. You might have too many or too few. Or they might be too big or too small. The real way to know is to baseline and test for your environment. The one thing I would suggest is that you don’t want to grow your transaction log file by more than 8 GB at a time. Any growth over 1 GB still creates just 16 VLFs, so an 8 GB growth gives you 16 VLFs of 512 MB each, and anything bigger means VLFs greater than 512 MB.

The real value in this session is the demo. The first script creates a database and starts stuffing it with data. It’s absolutely not a model of good design. In fact, it’s far from it. I have multiple columns that contain GUIDs. Worse yet, I have those columns indexed. The reason I’m doing this is because I want to create a ton of transaction log volume. Kick this thing off and let it run. It’ll grow your transaction log to about 12 GB.
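The demo script itself isn’t reproduced here, but it’s roughly this shape. The table and column names below are mine, not the original’s:

[SQL]-- A rough sketch of the demo workload: a deliberately bad table with
-- indexed GUID columns, hammered with inserts to generate log volume.
-- Table and column names are illustrative, not from the original script.
CREATE DATABASE miketest
GO
USE miketest
GO
CREATE TABLE dbo.stuffing
(id int IDENTITY(1,1) PRIMARY KEY,
 guid1 uniqueidentifier DEFAULT NEWID(),
 guid2 uniqueidentifier DEFAULT NEWID(),
 filler char(800) DEFAULT 'x')
GO
-- Indexes on GUID columns cause constant page splits, which is exactly
-- the point: lots and lots of transaction log records.
CREATE INDEX ix_guid1 ON dbo.stuffing (guid1)
CREATE INDEX ix_guid2 ON dbo.stuffing (guid2)
GO
-- Stuff it with rows; let this run until the log has grown repeatedly.
INSERT INTO dbo.stuffing DEFAULT VALUES
GO 5000000[/SQL]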

The second script will show the VLFs in your database. This script executes [SQL]DBCC LOGINFO WITH TABLERESULTS[/SQL]. Notice that as the first script is running, the number of VLFs will continue to grow on each execution.
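If you just want to watch the count climb, something like this works; it assumes the demo database name from above, and on SQL 2005/2008 you’d drop the recoveryunit column:

[SQL]-- A quick sketch for counting VLFs while the load runs; re-run it and
-- watch the number grow. Drop recoveryunit for SQL 2005/2008.
CREATE TABLE #vlfcount
(recoveryunit int,
 fileid int,
 filesize bigint,
 startoffset bigint,
 fseqno bigint,
 status int,
 parity int,
 createlsn varchar(1000))

use miketest
go
INSERT INTO #vlfcount EXEC('DBCC LOGINFO WITH TABLERESULTS')
SELECT COUNT(*) AS vlf_count FROM #vlfcount
drop table #vlfcount
go[/SQL]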

[Image vlf1: DBCC LOGINFO WITH TABLERESULTS output, one row per VLF]

There are some cool details in this output, including the size of each VLF and the LSN from when it was created. Let’s do a little analysis on it. What I’m doing is pumping that data into a temp table and then grouping on the LSN. I can then see how much the log file was grown each time and how many VLFs each growth created. What you see can be quite staggering.

[SQL]CREATE TABLE #vlfs
(recoveryunit int, -- remove this column for SQL 2005/2008 environments
 fileid int,
 filesize bigint,
 startoffset bigint,
 fseqno bigint,
 status int,
 parity int,
 createlsn varchar(1000))

use miketest
go
INSERT INTO #vlfs EXEC('DBCC LOGINFO WITH TABLERESULTS')

select createlsn, sum(cast(filesize as float))/1024/1024 [size], count(startoffset) [count]
from #vlfs
where createlsn <> '0'
group by createlsn
order by 3, 2, 1

drop table #vlfs
go
use master
go[/SQL]

[Image vlf2: output of the analysis query, grouped by CreateLSN]

Look at all of those growths. And they’re tiny. That can’t be good. You can see that we’re growing our transaction log file by less than 1 MB several times. But look at what happens closer to the end of the output. We’re growing our transaction log file in much larger increments. And the bigger the growth, the more VLFs.

[Image: vlf3]

Here is what I’ve been able to prove. If your growth is less than 512 KB, you create one VLF. From 512 KB to 768 KB, you create two VLFs. From 768 KB to 1 MB, it’s three VLFs. From 1 MB to 64 MB, it’s four VLFs. From 64 MB until you reach 1 GB, it’s eight VLFs. Anything over 1 GB is 16 VLFs.
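Expressed as a quick lookup (the breakpoints are the ones I observed above; the variable and the exact boundary behavior are just for illustration):

[SQL]-- The growth-to-VLF mapping above as a CASE expression. Breakpoints are
-- the ones observed in my tests; exact boundary behavior may vary.
DECLARE @growth_mb float = 8192  -- an 8 GB growth, for example
SELECT @growth_mb AS growth_mb,
       CASE
         WHEN @growth_mb < 0.5   THEN 1
         WHEN @growth_mb < 0.75  THEN 2
         WHEN @growth_mb < 1     THEN 3
         WHEN @growth_mb < 64    THEN 4
         WHEN @growth_mb <= 1024 THEN 8
         ELSE 16
       END AS vlfs_created
-- 8192 MB / 16 VLFs = 512 MB per VLF, which is why I cap growths at 8 GB.[/SQL]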

If you poke around your environment, you probably have databases that look just like this. The question is, how do I fix it? The answer is simple: it depends. If you’re not seeing a noticeable performance impact, you may want to leave it alone. This is where “if it isn’t broken, don’t fix it” could apply. If you are facing a performance problem from too many VLFs, I’d suggest doing the fix during a quiet period for your database.

First, note how big your transaction log file is. Because I know how my files grow, I’m going to put it back to the size it was before. Then I use DBCC SHRINKFILE to shrink the transaction log file and grow it back to the original size, making sure I only grow it in 8 GB increments.
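Put into a script, that looks roughly like this. It assumes the demo database and its default logical log file name; the 16 GB target is just an example, and you may need a log backup or two first so the active portion of the log moves and the shrink can actually succeed:

[SQL]-- A sketch of the shrink-and-regrow fix. Database and logical log file
-- name assume the demo above; the 16 GB target size is just an example.
USE miketest
GO
DBCC SHRINKFILE (miketest_log, 1)  -- shrink the log as small as it will go
GO
-- Grow back to the original size, never more than 8 GB per step.
ALTER DATABASE miketest MODIFY FILE (NAME = miketest_log, SIZE = 8GB)
ALTER DATABASE miketest MODIFY FILE (NAME = miketest_log, SIZE = 16GB)
GO[/SQL]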

The real lesson here is something that I talk about in other posts: Defy the Defaults! If you’re creating databases with 1 MB transaction log files and allowing them to grow in 10% increments, you’re probably doing it wrong.

