Monday, April 4, 2011

Keeping a Minimum Number of SQL Server Backups Online using a SQL Agent Job

First, a plug: there's a great "maintenance solution" that is a collection of stored procedures available at http://ola.hallengren.com/. This is the basis for a lot of my database maintenance jobs. If you haven’t taken a look at this, I highly recommend that you do; it’s free, under active maintenance, robust, and easy to implement.
When you run the script from that site, you get a variety of stored procedures and you can have it create SQL Agent jobs for you, as well. I’d recommend doing that, as it can give you a good idea of how the stored procedures work.

The backup job

When we perform a backup in a SQL Server instance, we want to perform a number of tasks:
  1. Delete old backups
  2. Backup all of the current databases
  3. Zip up the backup files
  4. Copy the zipped backup file(s) to a share on a backup server
SQL Server (using xp_delete_file) provides a pretty simple method for deleting files that are older than a certain retention window (time-based retention). But time-based retention presents a problem: what if your backups haven't been running for a while? The next time your backup job runs, the older backup files (which, at that point, are all of them) get deleted. Worse: what if there was something wrong with the most recent backup? Suddenly you don't have any backups online anymore.
I prefer a retention policy based on redundancy: I want to keep at least n copies online at all times, regardless of how old they are. It’s true that a combination of the two policies would be the best-case scenario: keep five days’ worth of backups online, and make sure that we never have fewer than five backup files available at any given time. This would allow us to, for instance, have five backups run in a single day without deleting the older backups that we also want to have available.
But I’ve gone for the simpler route in this case: I want five backup files online at all times. SQL Server doesn’t give us an easy built-in way to do this, so we’ll turn to PowerShell for our process.
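For the curious, a combined policy could look something like the PowerShell sketch below. It's purely illustrative and not part of the job that follows; the path, the five-day window, and the five-file floor are all assumptions you'd adjust to taste.
# Hypothetical sketch of a combined policy: delete a backup only if it is
# older than five days AND at least five newer backups would still remain
$backup_dir = "path:\to\Backup\dir"                 # placeholder path
$cutoff = (Get-Date).AddDays(-5)                    # the time-based part of the policy
$backups = Get-ChildItem -Path $backup_dir -Filter *.zip |
           Sort-Object -Property CreationTime       # oldest first
$excess = $backups.Count - 5                        # the redundancy-based part of the policy
if ($excess -gt 0)
{
    $backups |
        Select-Object -First $excess |
        Where-Object { $_.CreationTime -lt $cutoff } |
        Remove-Item
}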

Delete old backups

Our first step in the job is to delete the backups that aren't needed anymore. A later step in the job (on its previous run) turned off the archive bit on the files we no longer need, so here we simply delete any files that don't have the archive bit set. The PowerShell script is below:

$backup_dir="path:\to\Backup\dir"
$files=get-childitem -path $backup_dir
# we'll delete all files that don't have the archive bit set

if ( $backup -ne $null ) # only do the delete if the backup dir exists
{
Foreach($file in $files)
{ If((Get-ItemProperty -Path $file.fullname).attributes -band [io.fileattributes]::archive)
{ Write-output "$file is set to be retained" }
ELSE {
Write-output "$file does not have the archive bit set. Deleting."
remove-item -recurse $file.fullname
$output =$_.ErrorDetails } }
#end Foreach
} #End If 
Note that the $backup_dir variable needs to be set to the correct directory for the backups.
Any file or directory in the backup directory that does not have the archive bit set will be removed, so pay close attention to that. You can pass a file mask to the get-childitem call to narrow that behavior, if you choose.
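For example, if you wanted this cleanup step to consider only the zipped backups (and leave everything else alone), you could narrow the call like this; the *.zip mask is just an illustration:
$files=get-childitem -path $backup_dir -filter *.zip
Keep in mind, though, that in this particular job the unzipped \computername directory from the previous run is also cleaned up by this step, so a *.zip filter would leave that directory behind.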

Run DatabaseBackup

DatabaseBackup is the name of the stored procedure (in the master database) that backs up each of the databases on the instance. It is installed as part of the maintenance solution referenced at the beginning of this post. Usage is as follows:
EXECUTE [dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES',
@Directory = @backup_dir,
@BackupType = 'FULL',
@Verify = 'Y',
@CleanupTime = 24,
@CheckSum = 'Y'
@Databases can be one of:
  • 'USER_DATABASES' backs up all user databases
  • 'SYSTEM_DATABASES' backs up all system databases (master, model, msdb)
@BackupType can be one of:
  • 'FULL' performs a full backup of the database data files
  • 'LOG' backs up the transaction log files
  • 'DIFF' creates a differential backup from the last full backup
So here’s what our next step looks like. It’s a T-SQL step:
-- change the backup directory/drive appropriately
declare @backup_dir varchar(100) ='path:\to\backup\dir'
EXECUTE [dbo].[DatabaseBackup]
@Databases = 'SYSTEM_DATABASES',
@Directory = @backup_dir,
@BackupType = 'FULL',
@Verify = 'Y'

EXECUTE [dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES',
@Directory = @backup_dir,
@BackupType = 'FULL',
@Verify = 'Y'

EXECUTE [dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES',
@Directory = @backup_dir,
@BackupType = 'LOG',
@Verify = 'Y'

The verify switch is quite nice: after each backup, it runs a 'RESTORE VERIFYONLY FROM DISK=...' to ensure that each backup file is recoverable.

Note: the DatabaseBackup stored procedure appends \computername to the backup directory by default, but if you're using a named instance, it uses \computername$instancename instead. This makes sense, because you can have multiple instances on a server. If you're using a named instance, you'll need to account for this in the scripts below.
Many thanks to Porro for pointing this out in the comments below!
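If you are on a named instance, one way to handle it is to compute the subdirectory name once and use it wherever the scripts below reference $env:computername. Here's a rough sketch; the instance name is just a placeholder:
# Placeholder instance name; MSSQLSERVER is the name of the default instance
$instance = "INSTANCENAME"
if ($instance -and $instance -ne "MSSQLSERVER")
{ $backup_subdir = "$env:computername`$$instance" }   # e.g. SERVERNAME$INSTANCENAME
ELSE
{ $backup_subdir = $env:computername }                # default instance: just SERVERNAME
You would then use $backup_subdir in place of $env:computername in the attrib and 7z.exe calls below.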

Basically, we’re backing up all of the system databases, all of the user databases, and then the transaction logs of all of the user databases, and we’re saving those backups to the @backup_dir location we specified at the beginning.
Note that this stored procedure puts its files in a directory structure under @backup_dir: the top level is the server name, with a directory under it for each database.
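Roughly speaking, you end up with a layout like this (the database names are just for illustration):
path:\to\backup\dir\
    SERVERNAME\                (or SERVERNAME$INSTANCENAME for a named instance)
        Database1\
            ...backup files...
        Database2\
            ...backup files...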

Zip the backup files

We don’t want to keep the uncompressed backups online all the time, so we’ll compress them using 7-Zip.
This PowerShell script is more complicated, so we'll go through it in some more detail. Note that it needs the 7-Zip executable (and its associated .dll); this script will look for it in the c:\utils directory. You can copy just 7z.exe and 7z.dll to a directory; you don’t have to install the entire package in order to use the 7-Zip command line.
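If you'd like the job step to fail with an obvious message when the executable is missing, you could add a guard like this near the top of the script (a small sketch; adjust the path if you keep 7-Zip somewhere other than c:\utils):
# Fail early if the 7-Zip command-line executable isn't where we expect it
if (-not (Test-Path "C:\utils\7z.exe"))
{ Throw "7z.exe was not found in C:\utils" }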
First, here's the whole of our PowerShell script:
$backup_dir="path:\to\backup\dir"
$day= get-date -format "yyyyMMdd_HHmm"
# Turn on the archive bit on the current backups directory
# (so it won't get deleted at the next run if the zip process fails)
attrib $backup_dir\$env:computername +a

# Zip up the current backup(s)
# destination for the zip file is $backup_dir\SQLBACKUP-<servername>-DATE_TIME.zip
C:\utils\7z.exe -tzip -mx1 a $backup_dir\SQLBACKUP-$env:computername-$day.zip $backup_dir\$env:computername

# if 7zip failed, stop here; otherwise continue
if ($LASTEXITCODE -gt 0)
{ Throw "7Zip failed" }
ELSE {
    # When the zip is complete, turn off the archive bit on the current backup directory
    attrib $backup_dir\$env:computername -a

    # Now let's change the archive bit, such that only
    # the last five zipped backups will be kept online
    $delfiles=0
    $delfiles= (dir $backup_dir\SQLBACKUP*.zip).count-5

    # If there are more than 5 zipped backups, we'll turn off the archive bit on the oldest ones
    if ($delfiles -gt 0)
    { dir $backup_dir\SQLBACKUP*.zip | sort-object -property {$_.CreationTime} |
      select-object -first $delfiles |
      foreach-object { attrib $_.FULLNAME -A } }
}
So. The first line sets the backup directory to use. Next, we set a variable to hold today's date and time, which we'll use to name the zip file.
Next we turn on the archive bit for the directory created during the previous step, and then we run 7z.exe to create the zip file. All pretty straightforward up to this point, though do note that we’re using the -mx1 switch. This is important because 7-Zip is optimized for compression, not for speed; -mx1 tells it to use its fastest (and least CPU-intensive) compression. Especially for large backup files, this makes a real difference.
Our next step is to check to make sure that 7zip succeeded. We do that with the $LASTEXITCODE variable:
if ($LASTEXITCODE -gt 0) {Throw "7Zip failed" }
This says: if 7-Zip failed (returned an exit code greater than zero), end the step by throwing the error "7Zip failed".
If the exit code is zero, then we know 7-Zip succeeded, and we'll continue.
The next step is to turn off the archive bit on the directory we just zipped up; that way it'll be deleted the next time the job runs.
We also want to keep the five most recent zipped backups online on the server. We don't want to just delete files that are older than five days, though: if the backup had been failing and there are no backups from days 1-4, we'd suddenly have lost all of our backups. So we list the zip files and sort them by date. Then, if there are more than five of them, we take everything older than the newest five and turn off the archive bit on those files. That way, they'll be deleted the next time the job runs.
This is the code that does this (thanks, BTW, to Spiceworks for the example script on which this is based):
$delfiles=0
$delfiles= (dir $backup_dir\SQLBACKUP*.zip).count-5
if ($delfiles -gt 0) # If there are more than 5 zipped backups, we'll turn off the archive bit on the oldest ones
{ dir $backup_dir\SQLBACKUP*.zip | sort-object -property {$_.CreationTime} |
  select-object -first $delfiles |
  foreach-object { attrib $_.FULLNAME -A } }
What this does is the following:
  1. Count the number of .zip files in the backup directory
  2. If the number of files in the backup directory is greater than 5, then:
  3. Sort the directory listing (SQLBACKUP*.zip) by creation time (oldest first)
  4. Take the first n files in the sorted list (where n is the number of files in excess of five) and turn off the archive attribute on them.

Copy files to the backup server

Finally we'll copy the files to the backup server:
# Make sure you change the backup directory appropriately
$backup_dir= "path:\to\backup\dir"
$day= get-date -format "yyyyMMdd_"
# This will copy all of today's backups to the backup server
copy-item $backup_dir\SQLBACKUP-$env:computername-$day*.zip \\server\sharename -force
Note that the SQL Agent service account needs to have access to the share in order for this to succeed. Note, too, that this step copies all of today's files, so if there were multiple runs today, all of those files will get copied again and overwritten (which is why we use the -force switch).
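If you want that failure to be more obvious, a guard like this before the copy-item call can help; the share path is just the placeholder used above:
# Fail early if the backup share can't be reached (missing share or insufficient rights)
if (-not (Test-Path \\server\sharename))
{ Throw "Cannot reach \\server\sharename" }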
So now we have a backup job that will keep the backups around not based on age but on the number of copies. When you put the scripts above together in a job, the steps look something like this:
[Screenshot: the SQL Agent job with the steps described above]

7 comments:

  1. This is just what I was looking for ... almost. I mean, geez, I was hoping for a quick pointer to some hidden setting in SQL Server that would do this. But, barring that, this will have to do! Thanks for the very clear description.

  2. Thanks for the scripts. Works great for default instances but needs tweaking when running against named instances. The backup step creates a file named "computername$instancename" but the other steps look for files with the name "computername".

    Replies
    1. Thanks a ton for that heads-up, porro. I've updated the post above to include that information. I really appreciate your taking the time to point that out.

  3. Thanks for posting this, I found it very useful. The "Delete old backups" step fails if there are no files in the backup folder. You can check for this by wrapping the for-each loop with a check:

    if ( $backup -ne $null )
    {
    ...
    }

    Replies
    1. Excellent point, and you're right. I've updated the script with that improvement.

