better way to backup 50 Gig db?

better way to backup 50 Gig db?

on 19.04.2010 21:15:09 by Mitchell Maltenfort

I'm using MySQL to manage data on my computer.

The total data is 50 Gig in MyISAM folders.

As I type, I already have the folder with the .MYD, .frm, etc. files
being copied offsite. As I understand it, if this computer dies
tomorrow, I can reinstall MySQL on a new computer, drag over the
archive, stick the folder under "data", and I'm back in business.

Or am I dreaming?

I'd rather be corrected now than find out the hard way.

Any advice?


Re: better way to backup 50 Gig db?

on 20.04.2010 00:24:09 by Shawn Green

Mitchell Maltenfort wrote:
> I'm using MySQL to manage data on my computer.
>
> The total data is 50 Gig in MyISAM folders.
>
> As I type, I already have the folder with the .MYD, .frm, etc. files
> being copied offsite. As I understand it, if this computer dies
> tomorrow, I can reinstall MySQL on a new computer, drag over the
> archive, stick the folder under "data", and I'm back in business.
>
> Or am I dreaming?
>
> I'd rather be corrected now than find out the hard way.
>
> Any advice?
>

Did you remember to FLUSH TABLES WITH READ LOCK before you started
copying? Alternatively, you could have shut down your MySQL instance
first.

If not, then the in-memory and on-disk images of your tables are out of
sync. You may be copying away data that will appear corrupted after
restoration.
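
For MyISAM, that safer sequence looks roughly like this; a minimal sketch,
assuming the database lives under /var/lib/mysql/mydb and /backup is the
destination:

mysql> FLUSH TABLES WITH READ LOCK;  -- keep this session open; the lock is released if it disconnects
shell> rsync -a /var/lib/mysql/mydb/ /backup/mydb/
mysql> UNLOCK TABLES;  -- run in the same session that took the lock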

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN


RE: better way to backup 50 Gig db?

on 20.04.2010 02:49:35 by Gavin Towey

What Shawn said is important.

Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump
--single-transaction > backup.sql` and keep your db server actively
responding to requests at the same time.

2. Use something like LVM to create filesystem snapshots, which allow you to
back up your database while only keeping a read lock on the db for a second
or so.
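
A slightly fuller sketch of option 1; the credentials and file name are
assumptions, and --master-data=2 requires binary logging to be enabled (it
records the binlog position as a comment in the dump, which helps later with
point-in-time recovery):

shell> mysqldump --all-databases --single-transaction --master-data=2 \
         -u root -p > backup.sql
shell> # restoring onto a fresh server is then just:
shell> mysql -u root -p < backup.sql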




Re: better way to backup 50 Gig db?

on 20.04.2010 10:06:09 by Jay Ess

Gavin Towey wrote:
> What Shawn said is important.
>
> Better options:
> 1. Use InnoDB, and then you can make a consistent backup with `mysqldump --single-transaction > backup.sql` and keep your db server actively responding to requests at the same time.
>
> 2. Use something like LVM to create filesystem snapshots, which allow you to back up your database while only keeping a read lock on the db for a second or so.
>
3. Set up replication and back up the replicated data using any of the
above methods.
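
A minimal sketch of that: seed the slave from one of the backups above, give
it its own server-id in my.cnf, then point it at the master. The host,
credentials, and log coordinates below are assumptions:

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master.example.com',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='secret',
    ->   MASTER_LOG_FILE='mysql-bin.000042',
    ->   MASTER_LOG_POS=12345;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G  -- check Slave_IO_Running and Slave_SQL_Running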




RE: better way to backup 50 Gig db?

on 20.04.2010 11:08:20 by andrew.2.moore

I would also recommend looking into some third-party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - Backs up the InnoDB, MyISAM and XtraDB engines.
http://www.maatkit.org/ - Packed with useful features, including a parallel dump/import.

There are some great features in both products. I'll leave you to do your
own research into the tools, as knowing their features will benefit you.
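
For instance, an XtraBackup run typically looks something like this; a
sketch from memory only, so the flags and paths are assumptions and you
should check the current documentation:

shell> innobackupex --user=root --password=secret /backups/
shell> # "prepare" the timestamped backup so the InnoDB logs are applied
shell> innobackupex --apply-log /backups/2010-04-20_10-00-00/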

Best wishes

Andy

RE: better way to backup 50 Gig db?

on 20.04.2010 20:03:31 by Gavin Towey

More good ideas from Andrew!

Just a note though, I noticed someone added replication to a slave as a
backup option. I really discourage that. Replication makes no guarantees
that the data on your slave is the same as the data on your master. Unless
you're also checking consistency, a slave should be treated as a somewhat
unreliable copy of your data.

Regards,
Gavin Towey


Re: better way to backup 50 Gig db?

on 20.04.2010 22:19:19 by Claudio Nanni - TomTom


Where is Falcon!!!! (Sorry)

The only way to have a really consistent binary backup is to shut down the
server, and the best way to shut down a server is to have a slave dedicated
to backups that you can shut down at any time.

If you have only the content of the database folders under [datadir], it is
not enough; you need the full [datadir] to 'dream' of restoring your db,
unless you only use MyISAM tables, in which case you are luckier.

The bottom line is: Don't Dream, Prove It. Or it will become a nightmare
sooner or later.


Ciao!

Claudio




Re: better way to backup 50 Gig db?

on 20.04.2010 22:20:50 by Rob Wultsch

On Tue, Apr 20, 2010 at 11:03 AM, Gavin Towey wrote:
> More good ideas from Andrew!
>
> Just a note though, I noticed someone added replication to a slave as a
> backup option. I really discourage that. Replication makes no guarantees
> that the data on your slave is the same as the data on your master. Unless
> you're also checking consistency, a slave should be treated as a somewhat
> unreliable copy of your data.
>
> Regards,
> Gavin Towey
>

I would like to second this sentiment. Once you start looking for
data inconsistencies on slaves you will be surprised how often you
find them.

--
Rob Wultsch
wultsch@gmail.com


RE: better way to backup 50 Gig db?

on 21.04.2010 01:26:47 by Gavin Towey


You can make binary backups from the master using filesystem snapshots. You
only need to hold a global read lock for a split second.
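
A minimal sketch of that sequence; the volume names, snapshot size, and
paths are assumptions:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;  -- record the binlog file and position
shell> lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
mysql> UNLOCK TABLES;  -- the lock is held only while the snapshot is taken
shell> mount /dev/vg0/mysql-snap /mnt/mysql-snap
shell> rsync -a /mnt/mysql-snap/ backuphost:/backups/mysql/
shell> umount /mnt/mysql-snap
shell> lvremove -f /dev/vg0/mysql-snap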

Regards,
Gavin Towey


Re: better way to backup 50 Gig db?

on 21.04.2010 08:12:11 by Claudio Nanni - TomTom

Gavin,
Right, that is also an option, but you are really not 100% sure that
everything that is in memory is on the disk (buffers etc.), even though it
is definitely good for disaster recovery.
What I meant is that the only way to have a 100% guaranteed consistent
binary backup is when the database is shut down.
Of course this is almost never an option, unless (tada) you have a slave
dedicated to that.
One remark on your note:

> Just a note though, I noticed someone added replication to a slave as a backup option. I really discourage that. Replication makes no guarantees that the data on your slave is the same as the data on your master. Unless you're also checking consistency, a slave should be treated as a somewhat unreliable copy of your data.

While it is true that replication makes no guarantees, if your slave is
not the same as the master and you rely on it for production, you have
some problems. Try going to the business and saying "our slave (which at
least 50% of our applications use to read data) is not really in sync",
and watch their facial expressions!
Believe me, in many production environments the backup method relies on
the slave, not on the master.
It is so useful and important that you should put all your effort into
having a consistent read-only slave 'dedicated' only to backups, with no
other client messing with it.

Just my two cents

Claudio


Gavin Towey wrote:

> You can make binary backups from the master using filesystem snapshots.
> You only need to hold a global read lock for a split second.
>
> Regards,
> Gavin Towey



Re: better way to backup 50 Gig db?

on 21.04.2010 08:50:48 by Johan De Meersman


How does the below not guarantee me 100% that everything that can be
consistent, is?

mysql> flush tables with read lock;
unixhost# sync
unixhost# lvm create snapshot
mysql> unlock tables;

I agree that there may be data inconsistencies within MyISAM, as it has no
transactions, but there's also no guarantee that there isn't an application
in the middle of multiple insert statements the moment you shut your
database or your application. You can't magically get full consistency out
of your database if your application hasn't been designed for it.

Shutting systems down for backup may be fine for small setups, but on a
server that hosts multiple critical applications, you just can't do that.
You back up as consistently as you can without downtime, and pick the time
of night with the lowest likelihood of activity for it.




--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: better way to backup 50 Gig db?

on 21.04.2010 10:50:35 by Claudio Nanni - TomTom


Johan,
Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK
enough? :)
InnoDB has buffers and activities going on even if you have locked the
tables, and you are not sure that its buffers are on the disk when you
snapshot.
Again, you might be lucky and trust InnoDB recovery; what I state is that
there is only one 100% guaranteed safe way to have binary backups.

have a look at these, very interesting:
http://forge.mysql.com/w/images/c/c1/MySQL_Backups_using_File_System_Snapshots-2009-02-26.pdf
http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

Cheers

Claudio




Re: better way to backup 50 Gig db?

on 21.04.2010 11:24:58 by Johan De Meersman


On Wed, Apr 21, 2010 at 10:50 AM, Claudio Nanni wrote:

> Johan,
> Is the fact that InnoDB ignores the command FLUSH TABLES WITH READ LOCK;
> enough? :)
>
> InnoDB has buffers and activities going on even if you locked the tables
> and you are not sure that its buffers are on the disk when you snapshot.
> Again, you might be lucky and trust in the InnoDB recovery, what I state is
> that there is only one 100% guaranteed safe way to have binary backups.
>

Aha, that's a tidbit I didn't know :-) It does hold back queries, but the
engine doesn't stop its maintenance threads.

I tend to do a full dump with --single-transaction once a week, and copy the
binlogs on the other days, though. That should suffice, I guess? :-)
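
As a sketch, that rotation could look like this; the paths and file names
are assumptions:

shell> # weekly: consistent full dump that also starts a fresh binlog
shell> mysqldump --all-databases --single-transaction --flush-logs \
         --master-data=2 -u root -p > full-$(date +%F).sql
shell> # other days: rotate, then copy the binlogs offsite
shell> mysql -u root -p -e "FLUSH LOGS"
shell> rsync -a /var/lib/mysql/mysql-bin.[0-9]* /backup/binlogs/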


--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: better way to backup 50 Gig db?

on 21.04.2010 11:54:58 by Claudio Nanni - TomTom


A nice SQL dump with the tables locked is also very good; the problem is
that you will lock the database for the time needed to dump it.
If you do it right, like:

lock all tables
flush logs
dump
unlock tables

you will know where to start in case of recovery: dump + binary logs.
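
mysqldump can perform that exact sequence in one command; a sketch, with the
dump file name as an assumption:

shell> mysqldump --all-databases --lock-all-tables --flush-logs \
         --master-data=2 -u root -p > dump.sql

--lock-all-tables holds a global read lock for the duration of the dump,
--flush-logs starts a new binary log at that moment, and --master-data=2
writes the corresponding binlog coordinates into the dump as a comment.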

Cheers!


Claudio



Re: better way to backup 50 Gig db?

on 21.04.2010 13:57:05 by Claudio Nanni - TomTom


Hi Ben,
As said, you have to consider that a database's data lives both on disk and
in RAM; in RAM you have transactions, buffers that are asynchronously
written to disk, etc.

While the datadir of a 'shut down' database is the FULL dataset (knowledge),
since no information is in RAM, the datadir of a running database is not the
FULL dataset of any database.

This is nice:
http://www.percona.com/ppc2009/PPC2009_Life_of_a_dirty_pageInnoDB_disk_IO.pdf

Cheers

Claudio

2010/4/21 Ben Mildren

> LVM snapshots on an InnoDB setup are quite common; the transaction logs have
> to be on the same volume, and you have to pay attention to some of the
> configuration options, but I don't see any reason why this isn't a viable
> alternative. Why wouldn't you trust InnoDB recovery? If you can't trust
> that, there's no point in running it at all. I'd hate to be in a
> situation where my server crashed and I couldn't trust the database to come
> back up again. The key with all this is to test your backups.

RE: better way to backup 50 Gig db?

on 21.04.2010 16:32:21 by Jerry Schwartz

>-----Original Message-----
>From: Claudio Nanni [mailto:claudio.nanni@gmail.com]
>Sent: Wednesday, April 21, 2010 2:12 AM
>Cc: mysql@lists.mysql.com
>Subject: Re: better way to backup 50 Gig db?
>

[JS] Unless I've forgotten something from earlier in my career (what day is
it, anyways?), there are three aspects to this problem:

1. Ensuring that your databases, slave and master individually, are internally
consistent;
2. Ensuring that your master has captured the latest externally-supplied data;
and
3. Ensuring that your slave and your master are totally in sync.

#1 is the proper goal for the master. That's the whole point of ACID. For the
master database, #2 is unattainable. You can buffer as many times, in as many
ways, and in as many places as you like; there is always going to be the
**possibility** that some incoming data will be lost. Even if you push the
problem all the way back to a human user, it will still be possible to lose
data. If something is possible, it will happen: perhaps not for millennia, but
more likely as soon as you leave on vacation.

Similarly, #1 is an attainable and necessary goal for a slave; and #2 is just
as unattainable for a slave as for a master. The only way to guarantee #3 is
to include the replication somewhere in the ACID transaction. The penalty for
that is going to be a loss of throughput, possibly a horrendous loss of
throughput. That is where somebody needs to do a cost/benefit analysis.

>Just my two cents
>
[JS] ... and mine ...

>Claudio
>
>
>Gavin Towey wrote:
>
>You can make binary backups from the master using filesystem snapshots. You
>only need to hold a global read lock for a split second.
>
>Regards,
>Gavin Towey
>
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe: http://lists.mysql.com/mysql?unsub=jerry@gii.co.jp





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=gcdmg-mysql-2@m.gmane.org

RE: better way to backup 50 Gig db?

on 21.04.2010 19:26:33 by Gavin Towey

Claudio,

So InnoDB may not be consistent between memory and disk at all times, but
that's actually not important. What is important is that the files on disk
are consistent with a specific binlog position. That's all that is needed
for a consistent backup, and it can be done with filesystem snapshots.

InnoDB may continue to do background flushing even during a FLUSH TABLES
WITH READ LOCK, but it always keeps consistency between its log files, the
binlog, and the tablespaces.

When you load your snapshot back into an instance of mysql, you'll often see
it go through crash recovery as it applies log file items to the tablespace
and deals with any unfinished open transactions that happened to be running,
but once done your data will be consistent with the binlog position that was
recorded when the read lock was held.

I do this every day on many servers, both for backup and for creating new
slaves. It always works. Neither of those links you gave contradicts this;
in fact they both essentially say "this works great, as long as you're aware
of the caveats".

Regards,
Gavin Towey

