50M records each year, help me choose a strategy
on 02.11.2009 10:29:50 by sudhir543-nimavat
I have come across a requirement where I need to store a very large amount of data in a table.
In one of our apps we can have around 50 million records each year. Can anyone guide me in choosing a strategy that can handle this load?

Thanks
SN

Sudhir Nimavat
Senior software engineer
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught
Re: 50M records each year, help me choose a strategy
on 02.11.2009 10:34:36 by Jay Ess
sudhir543-nimavat@yahoo.com wrote:
> I have come across a requirement where I need to store a very large amount of data in a table.
> In one of our apps we can have around 50 million records each year. Can
> anyone guide me in choosing a strategy that can handle this load?
>
50M records is not that bad if you only store a couple of bytes in every
row. So please describe your tables in more detail.
And also describe the expected access on the data.
Fw: 50M records each year, help me choose a strategy
on 02.11.2009 11:49:21 by sudhir_nimavat
It's a very simple table that will store messages.
Each message would have subject, body, to_user_id, from_user_id, is_deleted, date_sent and maybe one or two more columns.
There would be frequent read operations and slow write operations.
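Roughly something like this; the column types, sizes and index names below are just my assumptions, nothing is fixed yet:

CREATE TABLE messages (
  message_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  from_user_id INT UNSIGNED    NOT NULL,
  to_user_id   INT UNSIGNED    NOT NULL,
  subject      VARCHAR(255)    NOT NULL,
  body         TEXT            NOT NULL,
  is_deleted   TINYINT(1)      NOT NULL DEFAULT 0,
  date_sent    DATETIME        NOT NULL,
  PRIMARY KEY (message_id),
  KEY idx_to_user   (to_user_id, date_sent),
  KEY idx_from_user (from_user_id, date_sent)
);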
Thanks
SN

Personally I'm always ready to learn, although I do not always like being taught
Re: Fw: 50M records each year, help me choose a strategy
on 02.11.2009 12:04:51 by Walter Heck
Sudhir: do yourself a favor and split the blobs (= body) off to a
different table. Most of the time bodies are not used, only when the
actual email needs to be shown. That means that you can keep the
frequently used fields together in a table for much quicker access.
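Something along these lines; the table names, types and index names are only an illustration, not a finished design:

CREATE TABLE message_headers (
  message_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  from_user_id INT UNSIGNED    NOT NULL,
  to_user_id   INT UNSIGNED    NOT NULL,
  subject      VARCHAR(255)    NOT NULL,
  is_deleted   TINYINT(1)      NOT NULL DEFAULT 0,
  date_sent    DATETIME        NOT NULL,
  PRIMARY KEY (message_id),
  KEY idx_to_user (to_user_id, date_sent)
);

CREATE TABLE message_bodies (
  message_id BIGINT UNSIGNED NOT NULL,
  body       MEDIUMTEXT      NOT NULL,
  PRIMARY KEY (message_id)
);

-- Inbox listings touch only the narrow table:
SELECT message_id, from_user_id, subject, date_sent
FROM message_headers
WHERE to_user_id = 42 AND is_deleted = 0
ORDER BY date_sent DESC
LIMIT 50;

-- The blob is fetched only when a single message is actually opened:
SELECT body FROM message_bodies WHERE message_id = 123456;

That way the rows of the hot table stay small and far more of them fit in memory.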
cheers,
Walter
On Mon, Nov 2, 2009 at 17:49, || Sudhir Nimavat || wrote:
> It's a very simple table that will store messages.
>
> Each message would have subject, body, to_user_id, from_user_id, is_deleted, date_sent and maybe one or two more columns.
>
> There would be frequent read operations and slow write operations.
RE: 50M records each year, help me choose a strategy
on 02.11.2009 23:38:09 by Daevid Vincent
Sudhir, please remove the || from your name in your email: "|| Sudhir
Nimavat ||"
We have about 1 BILLION rows of data here and we do a few tricks which may
or may not be of interest to you.
For starters, set up replication: write to a master and have at least 2
slaves. This makes backups MUCH easier, as aside from having redundancy
across 3 servers, you can actually take a slave down, back it up, then restore
it as part of the cluster again with no downtime for the whole system/web.
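A minimal sketch of that kind of setup; the server IDs, host name and credentials below are placeholders, and the binlog coordinates come from SHOW MASTER STATUS on your own master:

# master my.cnf (illustrative values)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# each slave's my.cnf
[mysqld]
server-id = 2          # unique per slave
read_only = 1

-- on each slave, point it at the master and start replicating:
CHANGE MASTER TO
  MASTER_HOST     = 'master.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;

-- backup trick: pause one slave, dump it, resume; the rest of the cluster keeps serving:
STOP SLAVE;
-- run mysqldump (or a file-level copy) against this slave here
START SLAVE;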
Another trick we do is this: since we have hundreds of thousands of 'events'
coming in per offload, we store the start and end ID of the events for more
detailed reports etc., but pre-process the data we need into aggregate tables
(including tallies etc.). This requires some table locking so the inserts are
guaranteed sequential, but it saves us a stupid join across thousands of rows
we really don't need. Plus it also gives us a nice PK integer range to use
when we do need it.
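Very roughly, and purely as an illustration (the table and column names here are invented for the example, and @last_seen_id stands in for whatever bookkeeping marks where the previous batch ended):

-- Illustration only: one aggregate row per offload, keeping the first/last
-- event id so the detail rows can still be reached when a deep report needs them.
CREATE TABLE event_offload_summary (
  offload_id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  first_event_id BIGINT UNSIGNED NOT NULL,
  last_event_id  BIGINT UNSIGNED NOT NULL,
  event_count    INT UNSIGNED    NOT NULL,
  offload_date   DATETIME        NOT NULL,
  PRIMARY KEY (offload_id)
);

-- Lock both tables so the batch insert and its summary row stay strictly sequential.
LOCK TABLES events WRITE, event_offload_summary WRITE;
-- ... bulk INSERT INTO events ... ;
INSERT INTO event_offload_summary (first_event_id, last_event_id, event_count, offload_date)
SELECT MIN(id), MAX(id), COUNT(*), NOW()
FROM events
WHERE id > @last_seen_id;
UNLOCK TABLES;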
If you don't REALLY need foreign keys and transactions and all that other
bloat, then use MyISAM tables. They're much faster for this kind of load, and
they also have the nice benefit that an unqualified "SELECT COUNT(*)" is
instant, because the engine stores the exact row count. InnoDB doesn't keep
that count and has to scan an index to determine the true number. Lame, I know.
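For example, reusing the messages table sketched earlier in the thread (illustrative only):

-- Unqualified count: MyISAM answers from stored metadata, InnoDB scans an index.
SELECT COUNT(*) FROM messages;

-- With a WHERE clause both engines do real work, so make sure the filter column is indexed:
SELECT COUNT(*) FROM messages WHERE to_user_id = 42 AND is_deleted = 0;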