September 8, 2009 by Mike Hillwig
Every environment is different, and that means everyone does backups differently. I thought I’d talk about how I do it in my environment.
For most of our key business applications, we have two sets of servers, a primary and secondary. The secondary servers are in a different data center. Fortunately, bandwidth isn’t a problem.
I need to keep my secondary servers current in case the primary data center becomes unavailable. This would be the perfect place to implement log shipping or database mirroring. Unfortunately, we also use the secondary servers for testing. That means we do it the old-fashioned way: back up the primary and restore to the secondary. I’m not saying it’s right, but it’s the environment I have.
I’m a complete nut for consistency. When I inherited this environment, we had network shares and copy scripts for database backups, and no two applications did their backups the same way. One day, I ripped it all apart and put it back together. Since all of my database servers use the same domain service account, and that account has administrative rights on every server, security isn’t a problem. It also means I can write to the administrative shares on other servers without creating shares or users.
First, I wrote a single script that is used across every primary server. With a few parameter changes, it works for every system. Basically, I build a list of databases and back each one up to the local file system; that copy is the one that gets sent to long-term storage. Then I copy the backup file to the secondary server.
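To make the shape of that concrete, here’s a minimal sketch of the idea in Python rather than my actual script. The folder names, server name, and database list are hypothetical placeholders, not what we really use: for each database it builds the T-SQL backup command plus the destination path on the secondary’s administrative share.

```python
from datetime import date

BACKUP_DIR = r"E:\Backups"                      # local backup folder (assumed)
SECONDARY_SHARE = r"\\SECONDARY01\E$\Backups"   # admin share on the secondary (assumed)

def backup_plan(databases, backup_date=None):
    """For each database, build the T-SQL backup command and the copy target."""
    backup_date = backup_date or date.today()
    stamp = backup_date.strftime("%Y%m%d")
    plan = []
    for db in databases:
        local_file = rf"{BACKUP_DIR}\{db}_{stamp}.bak"
        # Overwrite any prior file and verify page checksums while backing up.
        tsql = (f"BACKUP DATABASE [{db}] TO DISK = N'{local_file}' "
                f"WITH INIT, CHECKSUM;")
        remote_file = rf"{SECONDARY_SHARE}\{db}_{stamp}.bak"
        plan.append((db, tsql, remote_file))
    return plan
```

The real script would execute each command against the server and then copy the local file to the remote path; keeping one naming convention everywhere is what makes the same script work on every system.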
My script is on my site. I’m really proud of it, and it has served us well. As you’ll see in the comments, this works well in our small environment with lots of small databases; it won’t scale well once we double our current size.
Of course, I still back up my transaction logs. I would love to hear some feedback from other DBAs who can help me make this better.
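The log backups fit the same pattern. A hedged sketch along the same lines as above; the folder and naming convention here are assumptions, not what my script actually uses:

```python
from datetime import datetime

LOG_BACKUP_DIR = r"E:\Backups\Logs"  # assumed log backup folder

def log_backup_command(db, when=None):
    """Build a T-SQL transaction log backup command for one database."""
    when = when or datetime.now()
    # Time-stamped names let several log backups per day coexist.
    stamp = when.strftime("%Y%m%d_%H%M")
    return (f"BACKUP LOG [{db}] "
            f"TO DISK = N'{LOG_BACKUP_DIR}\\{db}_log_{stamp}.trn' "
            f"WITH INIT, CHECKSUM;")
```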
My next step is to write a consistent recovery process.
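A consistent recovery process could start from the same naming convention: given a database and a date, derive the file name and build the restore. This is only a sketch under the assumed paths from above; WITH REPLACE suits refreshing a test secondary, but you’d want more care on a real recovery:

```python
def restore_command(db, stamp, backup_dir=r"E:\Backups"):
    """Build a T-SQL restore command from the assumed backup naming convention."""
    # REPLACE overwrites the existing database; RECOVERY brings it online.
    return (f"RESTORE DATABASE [{db}] "
            f"FROM DISK = N'{backup_dir}\\{db}_{stamp}.bak' "
            f"WITH REPLACE, RECOVERY;")
```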