Filtered Import From MySQL to MySQL
on 31.07.2006 17:23:50 by Randy
Folks:
We have a web-based app that's _really_ slowing down because multiple
clients are writing their own private data into a single, central
database. I guess the previous programmer did things this way because it
made things easy. Well, I'm the person that has to put up with
the long-term headache.
Anywho, someone at work wants things sped up, so what I'm looking
at doing is this for each client:
* Create a new, separate database for each client.
* Copy/Import their private data over to this new database.
* Make necessary changes on the front end so each client
can access this new database.
* Also looking into running MySQL Cluster for failover,
etc. (This is an oil biz app, and we provide services
  24/7/365.)
So, here's the problem: what I want to do is a filtered
import from the central database to the new client database,
filtering out all other data except for the individual client's
data. I guess I could write some SQL code to do this, but I'm
figuring that this is going to take a while. What I'm hoping
for is that there is a program out there that will take care of
this for me. BTW, the data will be going from one MySQL database
to another MySQL database. Any help would be greatly appreciated!
TIA,
Randy
Re: Filtered Import From MySQL to MySQL
on 31.07.2006 20:22:51 by Bill Karwin
Randy wrote:
> So, here's the problem: what I want to do is a filtered
> import from the central database to the new client database,
> filtering out all other data except for the individual client's
> data.
What would such a program do, besides write a SQL query to do it? ;-)
You can use "databasename.tablename" specifiers in MySQL queries. So
you can select from the central database and insert into the
client-specific database in one statement, as long as both databases
reside on the same server instance.
For example:
CREATE TABLE client1database.tablename
LIKE centraldatabase.tablename;
This creates a table in the "client1database" of identical structure to
the table in the "centraldatabase". But I don't think it creates foreign
key constraints correctly, it just creates them as indexed columns. You
may have to re-declare foreign keys yourself using ALTER TABLE. Or
maybe it does work if you recreate the referenced tables first.
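For instance, re-declaring a constraint might look something like this
(the child/parent table and column names are hypothetical; also note
that MySQL only enforces foreign keys on InnoDB tables):

ALTER TABLE client1database.child_table
ADD CONSTRAINT fk_child_parent
FOREIGN KEY (parent_id)
REFERENCES client1database.parent_table (id);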
Then copy the data from the central database to the destination database:
INSERT INTO client1database.tablename
(col1, col2, col3, etc.)
SELECT col1, col2, col3, etc.
FROM centraldatabase.tablename
WHERE colx = 'client1';
I'm assuming "colx = 'client1'" is some hypothetical expression you can
use to identify data that belongs to the individual client. After you
have moved the client's data and verified that it is present in the
client-specific database, you can delete it from the central database:
DELETE FROM centraldatabase.tablename
WHERE colx = 'client1';
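Before you run that DELETE, a quick sanity check is to compare the row
counts on both sides (same placeholder names as above):

SELECT COUNT(*) FROM centraldatabase.tablename WHERE colx = 'client1';
SELECT COUNT(*) FROM client1database.tablename;

The two counts should match before you delete anything.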
After you have deleted all the clients' data from the central database,
be sure to do an OPTIMIZE TABLE tablename, which will defragment and
reclaim the space used by deleted rows, and rebuild index structures to
be efficient for the new, reduced set of data.
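For example, repeated for each affected table:

OPTIMIZE TABLE centraldatabase.tablename;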
Note that if you use InnoDB and you want to reduce the physical size of
the tablespace file (typically "ibdata1" in the data directory),
deleting data and
OPTIMIZE TABLE won't do that. InnoDB will mark the space as free, and
use it for new data in the future, growing the file again only after the
free space has been filled. It shouldn't hurt anything to have a
tablespace file with lots of free space.
But if you want to shrink the tablespace file immediately, you must do
the following:
1. Back up all InnoDB tables with mysqldump
2. Stop the MySQL Server
3. Physically remove the InnoDB tablespace file
4. Restart the MySQL server
5. Restore the backed-up tables by running the dump files through the
mysql client.
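A rough sketch of those steps from the shell (paths, credentials, and
the service command all vary by installation, and you may also need to
remove the ib_logfile* files so InnoDB can rebuild them):

mysqldump -u root -p --all-databases > full_backup.sql
mysqladmin -u root -p shutdown
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
/etc/init.d/mysql start
mysql -u root -p < full_backup.sql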
If you run a 24/7 operation, taking the MySQL server down might not be
practical. So you'll have to live with a big, partially-empty
tablespace file for a while. That shouldn't hurt anything, though some
people will poke their head into the database and shout, "Hello! Echo!"
to hear their voice come back at them. ;-)
But do make sure to do the OPTIMIZE TABLE on each table, which will make
the tables more efficient.
Regards,
Bill K.
Re: Filtered Import From MySQL to MySQL
on 31.07.2006 22:33:49 by Randy
Hi, Bill, and thanks for the reply.
Bill Karwin wrote:
> Randy wrote:
>> So, here's the problem: what I want to do is a filtered
>> import from the central database to the new client database,
>> filtering out all other data except for the individual client's
>> data.
>
> What would such a program do, besides write a SQL query to do it? ;-)
Yeah, I kinda' figured that :^). OTOH, I was hoping that there was
a GUI tool out there that just lets you click on and select what you
wanted, generating the SQL on the fly, so to speak :^). An in-house
program we use (called NaviCat) is a MySQL admin tool, and it does
offer various switches when dealing with imports/exports; however,
the program skipped class when it came to being able to filter on
import/export. (Either that, or I haven't found it in the manual yet.)
If I had to guess, there is a program out there that does this. OTOH,
it'd prolly cost an arm and a leg. Throughout the years, I've noticed
that the "Okay. Here's the simple GUI tool to do this because you're
frickin' lazy" programs are _expensive_! :^). BTW, "the powers that
be" here like GUI tools, so that's what I'm trying to find, thus the
original question.
> You can use "databasename.tablename" specifiers in MySQL queries. So
> you can select from the central database and insert into the
> client-specific database in one statement, as long as both databases
> reside on the same server instance.
That's the way things are set up, Bill. Kewlness.
> For example:
>
> CREATE TABLE client1database.tablename
> LIKE centraldatabase.tablename;
>
> This creates a table in the "client1database" of identical structure to
> the table in the "centraldatabase". But I don't think it creates foreign
> key constraints correctly, it just creates them as indexed columns. You
> may have to re-declare foreign keys yourself using ALTER TABLE. Or
> maybe it does work if you recreate the referenced tables first.
Okay. I'll give this a shot and see what gives.
> Then copy the data from the central database to the destination database:
>
> INSERT INTO client1database.tablename
> (col1, col2, col3, etc.)
> SELECT col1, col2, col3, etc.
> FROM centraldatabase.tablename
> WHERE colx = 'client1';
Well, the actual name of that particular key would be company_id :^).
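So in our case the copy would look something like this (everything
except company_id is still a placeholder):

INSERT INTO client1database.tablename
 (col1, col2, col3)
SELECT col1, col2, col3
FROM centraldatabase.tablename
WHERE company_id = 'client1';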
> I'm assuming "colx = 'client1'" is some hypothetical expression you can
> use to identify data that belongs to the individual client. After you
> have moved the client's data and verified that it is present in the
> client-specific database, you can delete it from the central database:
>
> DELETE FROM centraldatabase.tablename
> WHERE colx = 'client1';
Okay, but I'll make sure I do a backup before executing that particular
line of code. Centraldatabase is _ALL_ of our customers' data,
unfortunately. It is backed up, tho! :^)
> After you have deleted all the clients' data from the central database,
> be sure to do an OPTIMIZE TABLE tablename, which will defragment and
> reclaim the space used by deleted rows, and rebuild index structures to
> be efficient for the new, reduced set of data.
Actually, I read the MySQL on-line docs, and I did an OPTIMIZE TABLE
tablename, but the rows are so numerous in all the tables that queries
were still taking a minute or two to complete. And this is when I came
up with the "single database per client and let MySQL handle it"
solution :^).
Actually, this model would seem to be perfect, because:
1) It would be _MUCH_ faster. :^)
2) Easy to replicate off site.
3) Easy to transfer a client database to another server.
4) Makes clients' data more readily available, especially if a client
program decides to corrupt the database, for some unknown, screw-up
reason :^). That is, if a client program corrupts their database,
they only corrupt _their_ data -- not everyone else's data. All other
clients should cruise on as if nothing happened, one would think.
5) Much quicker on import/export via dumpfile, etc.
6) Easier to back up.
7) Much easier to transfer over to a cluster, "five nines" system.
Hey, it's not quite MySQL Cluster, but it's a start :^).
> Note that if you use InnoDB and you want to reduce the physical size of
> the tablespace file (typically "ibdata1" in the data directory),
> deleting data and
> OPTIMIZE TABLE won't do that. InnoDB will mark the space as free, and
> use it for new data in the future, growing the file again only after the
> free space has been filled. It shouldn't hurt anything to have a
> tablespace file with lots of free space.
I'm not familiar with InnoDB, Bill. BTW, I should mention that I'm a
neophyte in many areas of MySQL, although I do have experience with
Oracle, Visual Fox Pro, MS Access, and limited experience with the
PHP/MySQL combo for personal web-page development. OTOH, I do understand
a bit about software engineering, SCADA, embedded control and the like,
and I "REALLY FRICKIN' MISS" assembly progin'. Damn OOP dorks :^) :^).
(Just pickin', folks. I love OOD -- but I do miss assembly!) :^)
> But if you want to shrink the tablespace file immediately, you must do
> the following:
> 1. Back up all InnoDB tables with mysqldump
> 2. Stop the MySQL Server
> 3. Physically remove the InnoDB tablespace file
> 4. Restart the MySQL server
> 5. Restore the backed-up tables by running the dump files through the
> mysql client.
Okay. Thanks for the info. I'll make sure that I save this info for
future reference. Thanks!
> If you run a 24/7 operation, taking the MySQL server down might not be
> practical. So you'll have to live with a big, partially-empty
> tablespace file for a while.
Yep. 24/7/365. This is the main reason I'm talking up MySQL Cluster
here at work. After I read about "five nines" availability, I knew
that our customers will eventually demand this.
> That shouldn't hurt anything, though some people will poke their head
> into the database and shout, "Hello! Echo!"
> to hear their voice come back at them. ;-)
Well, this is the oil biz we're talking about, so it'd be more like
shouting "Hello World!" down the well head, and if you're not lucky,
you might start a blowout from too much acoustic resonance or something.
:^)
> But do make sure to do the OPTIMIZE TABLE on each table, which will make
> the tables more efficient.
Did it the other day. Helped a bit, but not too much. Oh, well.
It's not like I didn't try or something :^).
> Regards,
> Bill K.
Later, Bill, and thanks a ton, dewd!
Randy
--
Randall Jouett
Amateur/Ham Radio: AB5NI
I eat spaghetti code out of a bit bucket while sitting at a hash table!
Once someone offered salt, but I said, "I don't do encryption!"
Re: Filtered Import From MySQL to MySQL
on 01.08.2006 01:20:06 by Bill Karwin
Randy wrote:
> program we use (called NaviCat) is a MySQL admin tool, and it does
> offer various switches when dealing with imports/exports; however,
> the program skipped class when it came to being able to filter on
> import/export. (Either that, or I haven't found it in the manual yet.)
A GUI tool that could handle all potential filtering tasks that any user
might need would necessarily be very complex. Imagine a GUI that could
generate joins, lookups, subqueries, and all sorts of AND/OR expressions
that you might want to put into the filter conditions. This GUI would
have so many menus and buttons that it would be unusable!
Fortunately, there is a simple, expressive language that we can use to
declare the filter conditions. It's even based on an ISO-specified
standard, and is supported by all major RDBMS brands. What more could
you want? ;-)
> Okay. I'll give this a shot and see what gives.
Let us know how it turns out!
> Yep. 24/7/365. This is the main reason I'm talking up MySQL Cluster
> here at work. After I read about "five nines" availability, I knew
> that our customers will eventually demand this.
"Five nines" means different things to different people. There's a huge
difference between "should be available" and "guaranteed to be available".
There's no way to achieve true five nines of availability without an
architecture that includes redundancy of every component, starting with
electrical power and including everything on top of that. You simply
can't offer that sort of guarantee with a single server. You need
multiple servers, multiple co-lo facilities, 24x7 staff, planned
hot-swapping between servers, etc.
But this is tangential to your original question. Best of luck!
Regards,
Bill K.
Re: Filtered Import From MySQL to MySQL
on 01.08.2006 04:58:38 by gordonb.916vg
>Randy wrote:
>> program we use (called NaviCat) is a MySQL admin tool, and it does
>> offer various switches when dealing with imports/exports; however,
>> the program skipped class when it came to being able to filter on
>> import/export. (Either that, or I haven't found it in the manual yet.)
>
>A GUI tool that could handle all potential filtering tasks that any user
>might need would necessarily be very complex. Imagine a GUI that could
>generate joins, lookups, subqueries, and all sorts of AND/OR expressions
>that you might want to put into the filter conditions. This GUI would
>have so many menus and buttons that it would be unusable!
There's also the problem of scalability. Such a GUI would need at
least one "Are you sure?" popup window per record. For many databases
this exceeds the mean time between failure for a mouse, to say nothing
of the user's fingers.
Re: Filtered Import From MySQL to MySQL
on 01.08.2006 05:14:02 by gordonb.xt0h8
>We have a web-based app that's _really_ slowing down because multiple
>clients are writing their own private data into a single, central
>database. I guess the previous programmer did things this way because it
>made things easy. Well, I'm the person that has to put up with
>the long-term headache.
Please explain what you mean by the term "database" here. Do
you want:
(a) Separate data in separate databases on the same host using the
same MySQL instance and sharing the same hardware, or
(b) Separate data on separate servers on potentially different continents
(so you don't need one super-system with dual 1THz CPUs, you can have
several systems with more modest specs and still solve the problem
by throwing more hardware at it)
>Anywho, someone at work wants things sped up, so what I'm looking
>at doing is this for each client:
>
>* Create a new, separate database for each client.
For (a) this should be easy, with the use of mysqladmin. For (b)
it involves installing a bunch of new systems.
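For example (the database name is hypothetical):

mysqladmin -u root -p create client1database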
>* Copy/Import their private data over to this new database.
Just how is all this data mixed together? If it's already separated
by table, I don't see getting much speed improvement out of case (a).
>* Make necessary changes on the front end so each client
> can access this new database.
If you originally set each client to use a particular username/password/db,
and you avoided using database names in queries, and your new setup
doesn't share tables between clients, this should consist mostly
of changing the username/password/db used by each client, and
some privilege-granting.
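Roughly this, say (user name, host, and password are placeholders):

GRANT ALL PRIVILEGES ON client1database.*
TO 'client1user'@'localhost' IDENTIFIED BY 'secret';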
>* Also looking into running MySQL Cluster for failover,
> etc. (This is an oil biz app, and we provide services
>  24/7/365.)
>
>So, here's the problem: what I want to do is a filtered
>import from the central database to the new client database,
mysqldump lets you dump individual tables, and allows you to
specify a WHERE clause for selecting individual records in the
table. Doesn't this sound like what you want? And by the way, things
don't change much even if the source and destination systems are
different.
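For example (database, table, and the key value are placeholders):

mysqldump -u root -p --where="company_id='client1'" \
centraldatabase tablename > client1_tablename.sql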
>filtering out all other data except for the individual client's
>data. I guess I could write some SQL code to do this, but I'm
>figuring that this is going to take a while. What I'm hoping
>for is that there is a program out there that will take care of
>this for me.
Since you didn't tell us how the data is mixed between the applications,
and how much of it is shared, how could we possibly do that?
>BTW, the data will be going from one MySQL database
>to another MySQL database. Any help would be greatly appreciated!
Re: Filtered Import From MySQL to MySQL
on 01.08.2006 17:09:11 by Randy
Hi again, Bill, and thanks for the reply.
Bill Karwin wrote:
>
> A GUI tool that could handle all potential filtering tasks that any user
> might need would necessarily be very complex. Imagine a GUI that could
> generate joins, lookups, subqueries, and all sorts of AND/OR expressions
> that you might want to put into the filter conditions. This GUI would
> have so many menus and buttons that it would be unusable!
LOL! Visual FoxPro has something like this built into the IDE,
and it _is_ hard to use. :^)
> Fortunately, there is a simple, expressive language that we can use to
> declare the filter conditions. It's even based on an ISO-specified
> standard, and is supported by all major RDBMS brands. What more could
> you want? ;-)
LOL! Well, when the boss says "look for a GUI tool first," then that's
what I do :^).
>> Okay. I'll give this a shot and see what gives.
>
> Let us know how it turns out!
Okay.
>> Yep. 24/7/365. This is the main reason I'm talking up MySQL Cluster
>> here at work. After I read about "five nines" availability, I knew
>> that our customers will eventually demand this.
>
> "Five nines" means different things to different people. There's a huge
> difference between "should be available" and "guaranteed to be available".
We're doing near real-time logistics tracking for the oil biz. Some of
the stuff we track is worth big bucks, and oil companies love to know
where their "stuff" is at all times :^).
> There's no way to achieve true five nines of availability without an
> architecture that includes redundancy of every component, starting with
> electrical power and including everything on top of that. You simply
> can't offer that sort of guarantee with a single server. You need
> multiple servers, multiple co-lo facilities, 24x7 staff, planned
> hot-swapping between servers, etc.
Well, I'm not really sure if the data _has_ to be available at all
times, but I can tell you that it would make a lot of people happy
if it was.
> But this is tangential to your original question. Best of luck!
You too, dewd, and mega-ultra kudos to you and yours! :^)
Randall
P.S.
Give this link a shot and you'll see what we're up to, Bill:
http://www.offshorewarriors.com
Re: Filtered Import From MySQL to MySQL
on 01.08.2006 23:12:48 by Randy
Hi, Gordon, and thanks for the reply.
Gordon Burditt wrote:
>
> Please explain what you mean by the term "database" here. Do
> you want:
>
> (a) Separate data in separate databases on the same host using the
> same MySQL instance and sharing the same hardware, or
> (b) Separate data on separate servers on potentially different continents
> (so you don't need one super-system with dual 1THz CPUs, you can have
> several systems with more modest specs and still solve the problem
> by throwing more hardware at it)
"A" would be the answer :^).
>> Anywho, someone at work wants things sped up, so what I'm looking
>> at doing is this for each client:
>>
>> * Create a new, separate database for each client.
>
> For (a) this should be easy, with the use of mysqladmin. For (b)
> it involves installing a bunch of new systems.
Exactly. We're using "NaviCat" for this purpose under Winblows XP. :^)
BTW, we're using Visual FoxPro on the clients (front-end), with a
local database; PHP/MySQL 4.1 on the back-end.
>> * Copy/Import their private data over to this new database.
>
> Just how is all this data mixed together? If it's already separated
> by table, I don't see getting much speed improvement out of case (a).
It's not separated by table, unfortunately, Gordon. Multiple
customers are sharing tables of the same type, with different
customer records interspersed throughout the table. They're the
same record type; just different data for different customers.
As far as case (a) goes:
Well, each database will only hold the amount of data that's related to
that particular company, Gordon. Also, when we do a backup, we'll be
backing up data related to that particular company and that
particular company alone. We won't have to worry about "somebody
else's sensitive data is in there, so now we're going to have
to write some queries to get only Frank's data out of the database."
Stuff like that will be history. We'll do a backup, and mail a DVD
or three off to that particular company, and they'll be happy and
we'll be happy. :^)
Also, when one customer database goes down, only a few clients will be
affected. In the long run, we could have 20-30 clients on each
individual customer database. So, let's say one of our customers is
bitching about speed and whatnot, even though we've set every buffering
technique in my.cnf and optimized the queries. Well, now we have the option of
moving that particular database to its own server, with the added
advantage of being able to use MySQL Cluster (if necessary). Not only
that, but doing all of this would not affect any other customer
database. The "everyone in one database, sharing tables" setup would
be horrendous in a situation where customer #1's client trashes the
database, yet everyone else who's out there gets the shaft, even
though all of their data is still intact.
Now, I'm not saying that this model is any better than any other,
but we just tested my method (with known, production field data),
and one of our customers' data sets was yanked out of the central database,
put into its own database, and a search that used to take 46 seconds
to complete now takes less than half a second. This speaks volumes
to me.
>> * Make necessary changes on the front end so each client
>> can access this new database.
>
> If you originally set each client to use a particular username/password/db,
> and you avoided using database names in queries, and your new setup
> doesn't share tables between clients, this should consist mostly
> of changing the username/password/db used by each client, and
> some privilege-granting.
>
>> * Also looking into running MySQL Cluster for failover,
>> etc. (This is an oil biz app, and we provide services
>>   24/7/365.)
>>
>> So, here's the problem: what I want to do is a filtered
>> import from the central database to the new client database,
>
> mysqldump lets you dump individual tables, and allows you to
> specify a WHERE clause for selecting individual records in the
> table. Doesn't this sound like what you want? Oh, yes, things
> don't change much if the source and destination systems are
> different.
Kewlness. Sounds like what we want. Thanks, Gordon!
>> filtering out all other data except for the individual client's
>> data. I guess I could write some SQL code to do this, but I'm
>> figuring that this is going to take a while. What I'm hoping
>> for is that there is a program out there that will take care of
>> this for me.
>
> Since you didn't tell us how the data is mixed between the applications,
> and how much of it is shared, how could we possibly do that?
Good point. Actually, none of their data is shared, except by
being in a MySQL database that's used for web display. When
the data is being displayed on the web, no other company can
see any other company's data. This is accomplished via SQL
filtering on a key.
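That is, every display query carries the company key in its WHERE
clause, something like this (the table name is hypothetical):

SELECT * FROM shipments WHERE company_id = 'client1';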
Thanks for your time and your response, Gordon!
Randy