BULK DATA HANDLING 0.5TB
on 13.06.2009 07:40:32 by Krishna Chandra Prajapati
Hi guys,
I'm working at a telecom company. I have a table called deliverylog into which
30 million records are inserted per day. The table has grown to 0.5TB, and I
have to keep 60 days of records in it, so 60 days * 30 million = 1800 million
records. Queries against it are taking a long time to return results.

Please suggest which storage engine should be used and how I can get this
done. Is there any other alternative?
Any response is highly appreciated.
Thanks,
Krishna
Re: BULK DATA HANDLING 0.5TB
on 13.06.2009 12:13:27 by Martijn Engler
My first thought was: Archive Storage Engine;
http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html
But then I read "fetch the result", so you're not only inserting data,
but also doing some queries on it? Can you show the queries and your
model?
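If the reads really are mostly big scans, ARCHIVE is worth a quick test. A
rough sketch (delivery_date is a guessed column name; substitute whatever you
actually have) would be to copy one day of data into an ARCHIVE table and
compare size and scan time:

-- ARCHIVE only supports INSERT and SELECT, and in 5.0 it has no indexes,
-- so point lookups become full scans. Test with a single day first.
CREATE TABLE deliverylog_arch ENGINE=ARCHIVE
  SELECT * FROM deliverylog
  WHERE delivery_date = '2009-06-12';

SELECT COUNT(*) FROM deliverylog_arch;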
On Sat, Jun 13, 2009 at 07:40, Krishna Chandra Prajapati wrote:
> [original question quoted in full; snipped]
Re: BULK DATA HANDLING 0.5TB
on 14.06.2009 02:19:57 by Steve
At 11:10 AM +0530 6/13/09, Krishna Chandra Prajapati wrote:
>[original question quoted in full; snipped]
Can you provide us with more details about the current configuration? E.g.,
the MySQL version, the current storage engine, and the result of an EXPLAIN on
the problematic queries.
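For instance, output along these lines would help (the query and column names
below are just invented placeholders to show the shape of what's useful to
post):

SELECT VERSION();

-- engine, row count, data/index size:
SHOW TABLE STATUS LIKE 'deliverylog'\G

-- full column and index definitions:
SHOW CREATE TABLE deliverylog\G

-- one of the slow queries, prefixed with EXPLAIN:
EXPLAIN SELECT *
  FROM deliverylog
 WHERE msisdn = '9876543210'
   AND delivery_date >= '2009-06-01'\G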
Just offhand, unless you need transactions/foreign keys/all the other
niceties of InnoDB, I would suspect MyISAM would be the fastest engine,
but it's hard to say for sure. There's also a lot of room for performance
optimization via the system variables (e.g., increasing the key buffer if
you have adequate RAM). You can eke out more performance by putting
indexes and tables on different drives on different channels.
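Purely as an illustration (the right values depend entirely on how much RAM
the box has and what else it runs), the MyISAM-side knobs I'd look at first
would go in my.cnf along these lines:

[mysqld]
# MyISAM index cache -- the most important MyISAM setting; size it to hold
# as much of the hot index data as RAM allows. Placeholder numbers only.
key_buffer_size         = 2G
# helps large batch loads into MyISAM tables
bulk_insert_buffer_size = 256M
# per-connection buffers; keep these modest, they are allocated per session
sort_buffer_size        = 4M
read_buffer_size        = 1M
read_rnd_buffer_size    = 4M

For the drive split, MyISAM's DATA DIRECTORY and INDEX DIRECTORY table options
are one way to put the .MYD and .MYI files on separate disks.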
Some references:
Book: High Performance MySQL, Second Edition
http://oreilly.com/catalog/9780596101718/
Useful tips from the authors of the above book:
http://www.mysqlperformanceblog.com/
And assuming you are using MySQL 5.0:
Optimization Overview
http://dev.mysql.com/doc/refman/5.0/en/optimize-overview.html
Table OPTIMIZE command
http://dev.mysql.com/doc/refman/5.0/en/optimize-table.html
Using EXPLAIN
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
MySQL system variables
http://dev.mysql.com/doc/refman/5.0/en/mysqld-option-tables.html
steve
Re: BULK DATA HANDLING 0.5TB
on 19.06.2009 13:46:08 by walter harms
steve@edberg-online.com wrote:
> [Steve's reply quoted in full; snipped]
And take a look at partitioning (available with MySQL >= 5.1). By the way, do
not forget to force one-file-per-table; that makes handling a lot easier.
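A sketch of how that could look (all column names are invented here; adjust to
the real schema, and note that in 5.1 the partitioning column must be part of
every unique key, including the primary key):

CREATE TABLE deliverylog (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  delivery_date DATE NOT NULL,
  msisdn        VARCHAR(20) NOT NULL,
  status        VARCHAR(16) NOT NULL,
  PRIMARY KEY (id, delivery_date),
  KEY (delivery_date)
) ENGINE=MyISAM
PARTITION BY RANGE (TO_DAYS(delivery_date)) (
  PARTITION p20090613 VALUES LESS THAN (TO_DAYS('2009-06-14')),
  PARTITION p20090614 VALUES LESS THAN (TO_DAYS('2009-06-15')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- expiring the oldest day becomes a near-instant metadata operation
-- instead of a multi-million-row DELETE:
ALTER TABLE deliverylog DROP PARTITION p20090613;

(The one-file-per-table point applies if you stay on InnoDB: set
innodb_file_per_table in my.cnf. MyISAM, and each MyISAM partition, already
gets its own files on disk.)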
re,
wh