Table Sizes

on 16.02.2006 23:47:29 by Bill Platt

I am wondering which is more effective:

setting up several smaller tables and
writing the software to search them,
OR
setting up one big table with all of the related
data inside of it?

I have a MySQL database I am working with
that has 32 million records in it. I am wondering
if I would be better off to somehow break that
data into much smaller pieces so that there are
fewer rows, or whether it is just as effective to
keep everything in one table.

Maybe I should break it down alphabetically
or something.

The reason for my concern is that when
I access the database through phpMyAdmin, every
request takes a long time to load. Even minor
changes seem to be a real pain in the you know
what.

Speed within the site is of vast importance to me.

Thanks for your advice.

Bill


--
MySQL Perl Mailing List
For list archives: http://lists.mysql.com/perl
To unsubscribe: http://lists.mysql.com/perl?unsub=gcdmp-msql-mysql-modules@m.gmane.org

RE: Table Sizes

on 17.02.2006 00:10:59 by david.logan

Hi Bill,

You don't need to write your own software; you could explore the use of
MERGE tables (available at 4.?), but if you can hang on for a little
while, version 5.1 has a new scheme named partitioning that has greater
functionality and may be what you are looking for.
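For the 5.1 partitioning scheme mentioned above, the table is split by the server itself and queries need no changes. A minimal sketch (table and column names are made up for illustration):

```sql
-- MySQL 5.1+ range partitioning: one logical table, stored in
-- year-sized pieces chosen by the server at insert time
CREATE TABLE records (
  id INT NOT NULL,
  created DATE NOT NULL
)
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2004 VALUES LESS THAN (2005),
  PARTITION p2005 VALUES LESS THAN (2006),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```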

For the moment though,
http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html may be
the way to go. This is totally independent from the DBI/DBD perl modules
so shouldn't affect any of your scripting. This is also available at
version 4.?
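A MERGE setup looks roughly like this; the underlying tables must be MyISAM and identically defined (table and column names here are made up for illustration):

```sql
-- Two identically defined MyISAM tables holding slices of the data
CREATE TABLE records_a (
  id INT NOT NULL,
  name CHAR(40) NOT NULL,
  INDEX (name)
) ENGINE=MyISAM;

CREATE TABLE records_b LIKE records_a;

-- A MERGE table presenting both slices as one logical table;
-- new rows go into the last underlying table
CREATE TABLE records_all (
  id INT NOT NULL,
  name CHAR(40) NOT NULL,
  INDEX (name)
) ENGINE=MERGE UNION=(records_a, records_b) INSERT_METHOD=LAST;
```

Queries against records_all then behave like queries against one big table, while records_a and records_b can still be accessed individually.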

Regards


---------------------------------------------------------------
David Logan
ITO Delivery Specialist - Database
Hewlett-Packard Australia Ltd
E-Mail: david.logan@hp.com
Desk: +618 8408 4273
Mobile: 0417 268 665
Postal: 148 Frome Street,
        Adelaide SA 5001
        Australia
---------------------------------------------------------------

-----Original Message-----
From: Bill Platt [mailto:bplatt@thephantomwriters.com]
Sent: Friday, 17 February 2006 9:17 AM
To: perl@lists.mysql.com
Subject: Table Sizes

....
> I have a MySQL database I am working with
> that has 32 million records in it.
....

Re: Table Sizes

on 17.02.2006 00:28:48 by Jan Kratochvil

Hi,

On Thu, 16 Feb 2006 23:47:29 +0100, Bill Platt wrote:
....
> I have a mysql database I am working with that has 32 million records in it.
....
> Maybe I should break it down alphabetically or something.

This goes against the point of using an SQL server - it optimizes this itself.

....
> The reason for my concern about this is when I access the database through
> phpMyadmin,

I do not use phpMyAdmin myself; I find the mysql(1) client more comfortable.
Still: do you really have the SQL indexes properly set up, even for the
accesses made by phpMyAdmin? And is the time consumed by MySQL, or by
phpMyAdmin (httpd/PHP)? You could also restrict your queries to parts of
the data (FIELD LIKE 'f%') without modifying the table structure, couldn't
you? Of course the proper index should be set up, and this approach would
definitely be just a phpMyAdmin workaround.
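A quick way to check whether such a query can use an index at all (table and column names assumed for illustration):

```sql
-- Add an index on the column you filter by, if it is missing
ALTER TABLE records ADD INDEX (name);

-- Ask MySQL how it would execute the query; a non-NULL 'key'
-- column in the output means the index is actually used
EXPLAIN SELECT * FROM records WHERE name LIKE 'f%';
```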

Sorry for not knowing the phpMyAdmin problems myself; you should also be
able to use mysql-query-browser and mysql-administrator as GUI interfaces.


Regards,
Lace


Re: Table Sizes

on 17.02.2006 03:27:57 by Greg Meckes

In general, it makes sense to break the data out logically into related
tables. A "catch-all" table throws away the very concept of a relational
table, so related tables are the obvious choice.

A table/db with 32 million records should be optimized.

For example, I would think that there are a few fields in there that could
be broken out into their own tables by ID, as well as other data.

The savings in speed and size would be very significant. Replacing a field
that might be 20 bytes with a 5-byte ID, times 32,000,000 rows, would be a
huge savings in space for that one field alone.
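The break-out described above is ordinary normalization; a rough sketch, with made-up table and column names:

```sql
-- Before: a ~20-byte string repeated in every one of 32M rows.
-- After: a small lookup table plus a compact ID in the big table.
CREATE TABLE categories (
  category_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  category_name VARCHAR(20) NOT NULL
);

CREATE TABLE records (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  category_id SMALLINT UNSIGNED NOT NULL,  -- 2 bytes instead of ~20
  INDEX (category_id)
);

-- Join back to the lookup table when the full value is needed
SELECT r.id, c.category_name
FROM records r
JOIN categories c USING (category_id);
```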

Greg
--- Bill Platt wrote:
....
> I have a MySQL database I am working with
> that has 32 million records in it.
....



RE: Table Sizes

on 17.02.2006 11:39:28 by Gisbert.Selke

Hi Bill --

> -----Original Message-----
> From: Logan, David (SST - Adelaide) [mailto:David.Logan@hp.com]
> Sent: Friday, 17 February 2006 00:11
> For the moment though,
> http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html may be
> the way to go. This is totally independent from the DBI/DBD perl modules
> so shouldn't affect any of your scripting. This is also available at
> version 4.?
I'll second David's comments: MERGE tables are minimum effort to set up,
they give you the whole ease of use of one big table (if speed is not at
a premium for some individual task), and accessing the individual component
tables gives you maximum speed (but less convenience if you want to access
data that happens to be distributed over the tables).

It would be worthwhile spending some thought on how you partition the data.
In our case we have up to some 200 million records in the individual
tables, and currently eight such tables go into one MERGE table. We can
easily divide these by year (part of the primary key), so if we access just
one year's data, we go to the individual table.

As a matter of fact, for our standard application, which uses Perl DBI to
generate and execute queries, we find it faster for multiple-year queries
to generate a UNION SELECT on multiple individual tables than to access the
MERGE table directly. Since the queries are program-generated anyway,
that's not much of a hassle; but if you build your queries manually, you'd
spend more time typing the UNION than waiting for the engine to access the
MERGE table.
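Such a generated multiple-year query might look like this (table and column names are made up for illustration; UNION ALL skips the duplicate-elimination step, which is safe when the year tables are disjoint):

```sql
-- Hit the two relevant year tables directly, bypassing the MERGE table
SELECT id, amount FROM sales_2004 WHERE amount > 100
UNION ALL
SELECT id, amount FROM sales_2005 WHERE amount > 100;
```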

Of course, all this assumes that you have optimized your indexes in the
first place. But even then, partitioning does make sense, no matter what
the pure theory of SQL may say.

\Gisbert
