Sunday, March 25, 2012

Copying the backup to another drive

I have set up our new SS2k install to use the Bulk-Logged recovery model,
backing up TRNs every night and the whole DB every Sunday using the Agent and
a fairly out-of-the-box maintenance plan.
What I would like to do is copy the latest file to another machine on our
network after every backup run. That machine is attached to an offsite SAN.
I tried simply pointing the backup directory at that machine, but people
noted this is a bad idea and it's not all that easy to do anyway.
I think the easy way to do this is to edit the backup jobs and add a copy
step. However, I don't see a way to name the file to copy, since it changes
every day. Also, whenever I've made changes like this in the past, the
maintenance plan no longer works, yet keeping it working is very important
for our install (so other people can edit it).
Any advice?"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:70154059-0A56-41A1-B07B-9DCD47C884BD@.microsoft.com...
>I have set up our new SS2k install to use the Bulk-Logged backup method,
> backup up TRN's every night and the whole DB every Sunday using the Agent
> and
> a fairly out-of-the-box Maint plan.
> What I would like to do is copy the latest file to another machine on our
> network after every backup run. That machine is attached to an offsight
> SAN.
> I tried simply pointing the backup directory at that machine, but people
> noted this is a bad idea and it's not all that easy to do anyway.
> I think the easy way to do this is to edit the backup jobs and add a copy.
> However I don't see a way to name the file to copy, which will change
> every
> day. Also whenever I've made changes like this in the past, the maint plan
> no
> longer works, yet this is very important for our install (so other people
> can
> edit it).
> Any advice?
The most easily maintainable method (in my opinion, for your scenario) is
to create a DTS package that performs the backups and does the copying. The
ActiveX scripting (VBScript) in DTS is conducive to building filenames,
etc., and is fairly straightforward.
Then you simply schedule a job to fire off the DTS package.
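For what it's worth, the same dated-filename idea can be sketched in plain
T-SQL with xp_cmdshell instead of a DTS ActiveX task. This is only a sketch,
not Rick's exact approach; MyDB, D:\Backup, and \\BACKUPSRV\sqlbackup are
all placeholders:

    -- Build a dated filename, back up locally, then copy to the share.
    DECLARE @file varchar(260), @cmd varchar(500)
    SET @file = 'D:\Backup\MyDB_'
              + CONVERT(varchar(8), GETDATE(), 112) + '.trn'

    -- Back up locally first, so the network doesn't throttle the backup
    BACKUP LOG MyDB TO DISK = @file WITH INIT

    -- Then push the finished file to the remote machine
    SET @cmd = 'copy "' + @file + '" \\BACKUPSRV\sqlbackup\'
    EXEC master..xp_cmdshell @cmd

Note that, as with Rick's DTS suggestion, this performs the log backup
itself rather than adding a copy step to the maintenance plan.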
HTH
Rick Sawtell
MCT, MCSD, MCDBA

Actually, backing up to a remote file share is pretty easy and is, IMHO, a
best practice.
HowTo: Backup to UNC name using Database Maintenance Wizard
http://support.microsoft.com/?kbid=555128
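For reference, the T-SQL that this approach boils down to is just a UNC
path in the backup command. The share name below is a placeholder, and the
SQL Server service account needs write permission on the share:

    -- Back up straight to a UNC path (share name is a placeholder).
    -- The service account must be able to write to the share, which
    -- usually means SQL Server runs under a domain account.
    BACKUP DATABASE MyDB
        TO DISK = '\\BACKUPSRV\sqlbackup\MyDB_full.bak'
        WITH INIT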
--
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Maury Markowitz" <MauryMarkowitz@.discussions.microsoft.com> wrote in
message news:70154059-0A56-41A1-B07B-9DCD47C884BD@.microsoft.com...
> I have set up our new SS2k install to use the Bulk-Logged backup method,
> backup up TRN's every night and the whole DB every Sunday using the Agent
and
> a fairly out-of-the-box Maint plan.
> What I would like to do is copy the latest file to another machine on our
> network after every backup run. That machine is attached to an offsight
SAN.
> I tried simply pointing the backup directory at that machine, but people
> noted this is a bad idea and it's not all that easy to do anyway.
> I think the easy way to do this is to edit the backup jobs and add a copy.
> However I don't see a way to name the file to copy, which will change
every
> day. Also whenever I've made changes like this in the past, the maint plan
no
> longer works, yet this is very important for our install (so other people
can
> edit it).
> Any advice?|||Geoff N. Hiten wrote:
> Actually, backing up to a remote file share is pretty easy and is
> IMHO, a best practice.
> HowTo: Backup to UNC name using Database Maintenance Wizard
> http://support.microsoft.com/?kbid=555128
>
I don't agree. You are completely throttling the backup based on your
network bandwidth. For small databases this may be fine, but everything
else will likely suffer in terms of backup speed. Given that a native
backup writes out a backup file the size of the database, using a
network share will likely slow it down. That applies as well to backup
products like LiteSpeed, which thrive on available disk bandwidth. Plus,
do you really want to flood the server's network pipe with backup
data, which could also cause network performance problems for all
connected users?
My experience has been to back up locally (SAN, NAS, local drive) and
then move off to the final location (tape, network share, etc.).
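To put rough, illustrative numbers on that: a saturated 100 Mbit link moves
about 12 MB/sec, so a 50 GB native backup written straight to a share would
take well over an hour, while a local array sustaining 100+ MB/sec finishes
the same backup in under ten minutes.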
--
David Gugick
Imceda Software
www.imceda.com

I did include a warning in the article about possible network saturation.
In practice I use a dedicated backup network and compression software to
limit impact on production systems. Yes, it is more expensive in terms of
disk storage and network capacity, however I believe it is the best approach
for database recoverability and backup management. I combine the backup to
disk with a tape archive rotation to give me some backup history. Since I
am truly paranoid about my backups, I even have a separate restore folder so
my tape process can continue uninterrupted even if I have to restore from
tape.
--
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:%23reYGbk1EHA.3468@.TK2MSFTNGP14.phx.gbl...
> Geoff N. Hiten wrote:
> > Actually, backing up to a remote file share is pretty easy and is
> > IMHO, a best practice.
> >
> > HowTo: Backup to UNC name using Database Maintenance Wizard
> > http://support.microsoft.com/?kbid=555128
> >
> I don't agree. You are completely throttling the backup based on your
> network bandwidth. For small databases this may be fine, but everything
> else will likely suffer in terms of backup speed. Given that a native
> backup writes out a backup file the size of the database, using a
> network share will likely slow it down. That applies as well to backup
> products like LiteSpeed which thrive on available disk bandwidth. Plus,
> do you really want to flood the network pipe on the server with backup
> data which could also cause network performance problems for all
> connected users.
> My experience has been to back up locally (SAN, NAS, local drive) and
> then move off to the final location (tape, network share, etc.).
> --
> David Gugick
> Imceda Software
> www.imceda.com
>|||Geoff N. Hiten wrote:
> I did include a warning in the article about possible network
> saturation.
> In practice I use a dedicated backup network and compression software
> to limit impact on production systems. Yes, it is more expensive in
> terms of disk storage and network capacity, however I believe it is
> the best approach for database recoverability and backup management.
> I combine the backup to disk woth a tape archive rotation to give me
> some backup history. Since I am truly paranoid about my backups, I
> even have a separate restore folder so my tape process can continue
> uninterrupted eve if I have to restore from tape.
>
I agree. For safety and faster recovery, it's good to have some backups
available on the network separate from the server drives as well as on
tape stored off-site. Compression is key here. But if you're using a
backup product that adds compression, like LiteSpeed, you'll still have
better performance backing up locally and then copying the files over
the network after the backup is complete. You could use LiteSpeed to
dump to a UNC, but you will throttle backup speed a lot and many
companies are very interested in keeping backup times to a minimum.
If you were not network or time constrained, you could back up directly
to a UNC, but I would still argue that using native backup in this
design would be silly because of the large backup sizes.
--
David Gugick
Imceda Software
www.imceda.com|||"David Gugick" wrote:
> I agree. For safety and faster recovery, it's good to have some backups
> available on the network separate from the server drives as well as on
> tape stored off-site.
This is my basic goal. I want the backup to write out to disk, then zip
it, copy the zipped file to the SAN-attached drive, and finally delete the
zip. The result would be an uncompressed original TRN on the DB's drive,
which I'll let SQL Server periodically delete.
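Here is a rough sketch of that whole sequence (find the latest TRN the plan
wrote, zip it, copy it, delete the zip) as a separate Agent job step using
xp_cmdshell, which also sidesteps not knowing the plan's generated filename.
The paths, the share, and the command-line zip tool are all assumptions:

    -- Grab the newest .TRN from the backup folder (dir /od lists oldest
    -- to newest, so the highest identity value is the latest file).
    CREATE TABLE #f (id int IDENTITY(1,1), fname varchar(260))
    INSERT #f EXEC master..xp_cmdshell 'dir /b /od D:\Backup\*.trn'

    DECLARE @last varchar(260), @cmd varchar(600)
    SELECT @last = fname FROM #f
     WHERE id = (SELECT MAX(id) FROM #f WHERE fname IS NOT NULL)

    -- Zip it (assumes some command-line zip utility is installed)
    SET @cmd = 'zip D:\Backup\' + @last + '.zip D:\Backup\' + @last
    EXEC master..xp_cmdshell @cmd

    -- Copy the zip to the SAN-backed share, then delete the local zip
    SET @cmd = 'copy D:\Backup\' + @last + '.zip \\BACKUPSRV\sqlbackup\'
    EXEC master..xp_cmdshell @cmd
    SET @cmd = 'del D:\Backup\' + @last + '.zip'
    EXEC master..xp_cmdshell @cmd

    DROP TABLE #f

Because this runs as its own job or job step, the maintenance plan itself is
never edited and stays manageable in the GUI.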
So then back to the original question. When I add commands to the backup
steps, it seems to make the maintenance plan stop working. Yet it is fairly
important to me that it continue working so future admins can manage
the system easily. I actually find it very surprising the Agent doesn't have
Compress and Copy To options, given how painfully obvious these features seem
to me.
So is there any way to do this? I believe the SQL Server service is running
under a non-domain account, which might complicate things. But even having the
backup compress after saving would be useful, and then I could "copy from"
the SAN machine.

I do use LiteSpeed. It definitely helps backup performance and space
consumption. I find a properly configured UNC share can digest a backup
much faster than all but the very fastest multi-drive tape libraries and
certainly close to a local drive under most conditions. Of course,
'properly configured' to me means a RAID 1+0 UltraSCSI-320 array with 146GB
10KRPM drives and an Ultra320 RAID controller and enclosure. With this
setup, I can run backups during a small overnight window and spin them to
tape during the day. I also use a dedicated backup network to separate the
load. Finally, since I am running a cluster, attaching the tape drive
directly to the host unit is not an option. SAN backup options were also
not useful since I needed to keep multiple LUNs in sync and I didn't want
to restore an entire LUN to retrieve a single database file. All the
benefits of a multi-stage disk/tape backup at about half of the cost.
--
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:O96IJxl1EHA.164@.TK2MSFTNGP10.phx.gbl...
> Geoff N. Hiten wrote:
> > I did include a warning in the article about possible network
> > saturation.
> > In practice I use a dedicated backup network and compression software
> > to limit impact on production systems. Yes, it is more expensive in
> > terms of disk storage and network capacity, however I believe it is
> > the best approach for database recoverability and backup management.
> > I combine the backup to disk woth a tape archive rotation to give me
> > some backup history. Since I am truly paranoid about my backups, I
> > even have a separate restore folder so my tape process can continue
> > uninterrupted eve if I have to restore from tape.
> >
> >
> I agree. For safety and faster recovery, it's good to have some backups
> available on the network separate from the server drives as well as on
> tape stored off-site. Compression is key here. But if you're using a
> backup product that adds compression, like LiteSpeed, you'll still have
> better performance backing up locally and then copying the files over
> the network after the backup is complete. You could use LiteSpeed to
> dump to a UNC, but you will throttle backup speed a lot and many
> companies are very interested in keeping backup times to a minimum.
> If you were not network or time constrained, you could back up directly
> to a UNC, but I would still argue that using native backup in this
> design would be silly because of the large backup sizes.
> --
> David Gugick
> Imceda Software
> www.imceda.com
>
