running a duplicate database
on 09.09.2011 11:09:18 by Dave Dyer
Is there a halfway house between a single database and a full master-slave setup?
I have a database with one "piggish" table, and I'd like to direct queries that search the pig to a duplicate database, where it won't affect all the routine traffic.
I could definitely do this by setting up a slave server, but for my purposes it would be just as effective, and a lot easier, if MySQL could automatically maintain a duplicate of the database. Presumably this would work like an internal auto-slave, with a binary log of changes to the master database consumed by the server itself to maintain the duplicate.
As a bonus, I could back up the duplicate instead of the master, so that won't affect the routine traffic either.
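For reference, the conventional setup I'm trying to avoid would look roughly like the following - a second server instance replicating just the big table via replicate-do-table. Everything here (mydb.bigtable, the repl user, ports, and the log file/position) is only a placeholder:

  # master my.cnf
  [mysqld]
  server-id = 1
  log-bin   = mysql-bin

  # slave my.cnf (second instance, e.g. on port 3307)
  [mysqld]
  server-id          = 2
  port               = 3307
  replicate-do-table = mydb.bigtable

  -- on the slave; file/position come from SHOW MASTER STATUS on the master
  CHANGE MASTER TO
    MASTER_HOST='127.0.0.1', MASTER_PORT=3306,
    MASTER_USER='repl', MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
  START SLAVE;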
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=gcdmg-mysql-2@m.gmane.org
Re: running a duplicate database
on 09.09.2011 11:18:09 by Reindl Harald
On 09.09.2011 11:09, Dave Dyer wrote:
> Is there a halfway house between a single database and a full master-slave setup?
>
> I have a database with one "piggish" table, and I'd like to direct queries that search the pig to a duplicate database, where it won't affect all the routine traffic.
>
> I could definitely do this by setting up a slave server, but for my purposes it would be just as effective, and a lot easier, if MySQL could automatically maintain a duplicate of the database. Presumably this would work like an internal auto-slave, with a binary log of changes to the master database consumed by the server itself to maintain the duplicate.
>
> As a bonus, I could back up the duplicate instead of the master, so that won't affect the routine traffic either.
you can run as many slaves on the same machine as you want by
using a different port for each instance and stop/backup/start
one of them from a script - I have been doing this for years

it makes no sense to "maintain a duplicate of the database" for backups,
because mysqld has to be stopped for effective rsync backups, which
are much faster than dumps - and that brings us back to using
a slave on a different port
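something along these lines (a rough sketch - the paths, port and socket
are only examples, adapt them to your own layout):

  # second mysqld instance on its own port and datadir
  mysqld_safe --defaults-file=/etc/mysql/slave.cnf \
              --port=3307 \
              --socket=/var/run/mysqld/mysqld-slave.sock \
              --datadir=/var/lib/mysql-slave &

  # backup script: stop only the slave instance, rsync its datadir, start it again
  mysqladmin --socket=/var/run/mysqld/mysqld-slave.sock -u root -p shutdown
  rsync -a --delete /var/lib/mysql-slave/ /backup/mysql-slave/
  mysqld_safe --defaults-file=/etc/mysql/slave.cnf &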
Re: running a duplicate database
on 09.09.2011 12:02:15 by Rik Wasmus
> On 09.09.2011 11:09, Dave Dyer wrote:
> > Is there a halfway house between a single database and a full
> > master-slave setup?
> >
> > I have a database with one "piggish" table, and I'd like to direct
> > queries that search the pig to a duplicate database, where it won't
> > affect all the routine traffic.
> >
> > I could definitely do this by setting up a slave server, but for my
> > purposes it would be just as effective, and a lot easier, if MySQL could
> > automatically maintain a duplicate of the database. Presumably this
> > would work like an internal auto-slave, with a binary log of changes
> > to the master database consumed by the server itself to maintain the
> > duplicate.
> >
> > As a bonus, I could back up the duplicate instead of the master, so that
> > won't affect the routine traffic either.
>
> you can run as many slaves on the same machine as you want by
> using a different port for each instance and stop/backup/start
> one of them from a script - I have been doing this for years
No problem indeed.
> it makes no sense to "maintain a duplicate of the database" for backups,
> because mysqld has to be stopped for effective rsync backups, which
> are much faster than dumps - and that brings us back to using
> a slave on a different port
I concur that dumps are not an effective way to back up; they take ages with
any decent-sized database. We have had great experiences with Percona's
XtraBackup (http://www.percona.com/docs/wiki/percona-xtrabackup:start) for
hot copies, which also works with InnoDB. But on a heavily used DB server it
DOES make sense to run the backup on an (unused) slave: there's still some
overhead & locking involved, and if your DB is running hot 24/7 you don't want
it doing anything that can be done somewhere else.
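For the XtraBackup route, the usual two-step flow run against the (idle) slave
looks roughly like this - user, socket and paths are just placeholders:

  # take a hot copy while the slave keeps running
  innobackupex --user=backup --password=secret \
               --socket=/var/run/mysqld/mysqld-slave.sock \
               /backups/

  # prepare the copy (apply the log) so it is consistent and restorable
  innobackupex --apply-log /backups/2011-09-09_12-00-00/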
--
Rik Wasmus