Really big resultsets

On 09.05.2005 22:09:43 by Corey Tisdale

Hi all,

How do you deal with really big result sets in such a way that they
do not deny access to the tables for others? I have a situation where
a result set could be anywhere from a few hundred to a hundred
thousand rows. Everything works fine speed-wise into the tens of
thousands, but once you get close to six figures there is a
noticeable speed hit. It seems as though MySQL (v4.0.24) locks the
tables while the result set is being sent to the client, and I would
much rather have the client just wait a little. I thought at first
that paging with the LIMIT 50000, 200 syntax (to go through 200
records at a time) would work, but it still gives me huge lapses in
availability. What can I do to keep my tables available to all
clients?
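
Concretely, the paging I tried looks like this sketch (the table name
is a stand-in for the real one):

-- One page of 200 rows, skipping the first 50,000 matching rows;
-- "some_table" is a placeholder.
SELECT id
FROM some_table
LIMIT 50000, 200;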

Thanks!
Corey

--
MySQL Windows Mailing List
For list archives: http://lists.mysql.com/win32
To unsubscribe: http://lists.mysql.com/win32?unsub=gcdmw-win32@m.gmane.org

Re: Really big resultsets

On 09.05.2005 23:57:03 by Robert Citek

On May 9, 2005, at 3:09 PM, Corey Tisdale wrote:
> How do you deal with really big result sets in such a way that they
> do not deny access to the tables for others? I have a situation
> where a result set could be anywhere from a few hundred to a
> hundred thousand rows. Everything works fine speed-wise into the
> tens of thousands, but once you get close to six figures there is a
> noticeable speed hit. It seems as though MySQL (v4.0.24) locks the
> tables while the result set is being sent to the client, and I
> would much rather have the client just wait a little. I thought at
> first that paging with the LIMIT 50000, 200 syntax (to go through
> 200 records at a time) would work, but it still gives me huge
> lapses in availability. What can I do to keep my tables available
> to all clients?

How are you connecting to the MySQL database? Directly (e.g. via the
MySQL client), or via ODBC or something else? Can you post a code
snippet of what you are doing, along with the schema of the relevant
tables?

Regards,
- Robert
http://www.cwelug.org/downloads
Help others get OpenSource software. Distribute FLOSS
for Windows, Linux, *BSD, and MacOS X with BitTorrent



--
MySQL Windows Mailing List
For list archives: http://lists.mysql.com/win32
To unsubscribe: http://lists.mysql.com/win32?unsub=gcdmw-win32@m.gmane.org

Re: Really big resultsets

On 10.05.2005 00:22:45 by Corey Tisdale

The problem happens whether I connect from the mysql client or via
the mysql_connect functions in PHP. The simplest structure I can
duplicate it with is this:

CREATE TABLE item (
  id INTEGER NOT NULL AUTO_INCREMENT,
  name VARCHAR(255),
  description TEXT,
  model VARCHAR(150),
  price DOUBLE,
  shipping DOUBLE,
  active ENUM('Yes', 'No') DEFAULT 'Yes',
  PRIMARY KEY (id),
  INDEX (model),
  INDEX (active)
);


I have 450k rows in here, and even from the client, doing a

SELECT id FROM item WHERE active='Yes' LIMIT 200000, 1000

takes 4.35 seconds. The server load is next to nothing, and it is a
RAID 5 box with a gig of RAM and dual Xeons at 2.66GHz. It makes no
sense to me why it takes so long (since I'm not even ordering by
anything). I think if I can understand this problem I can learn to
redesign my DB or my applications to avoid it, but it seems to me
that this should not take so long. In fact, if you do

SELECT id FROM item WHERE id > 200000 AND active='Yes' LIMIT 1000

you get cut back to 0.5 seconds, and both of these times include
network time, so we are talking roughly 4 seconds just to reach the
200,000th row via LIMIT versus WHERE. This would be fine if I were
always ordering by id, but I am not. What can I change?

Thanks!
Corey
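
The gap between the two timings above is the offset itself: LIMIT
200000, 1000 makes MySQL read and discard the first 200,000 matching
rows before returning any, while the id > 200000 form seeks straight
into the index. A keyset-style rewrite generalizes that trick to
other sort orders; the sketch below is illustrative only, assuming a
composite index on (active, model, id), with 'QX-1000' and 137 as
placeholder values carried over from the last row of the previous
page:

-- Keyset paging on model (a sketch, not the thread's actual query):
-- rather than LIMIT 200000, 1000, remember the (model, id) pair of
-- the last row on the previous page and seek past it.
SELECT id, model
FROM item
WHERE active = 'Yes'
  AND (model > 'QX-1000'
       OR (model = 'QX-1000' AND id > 137))
ORDER BY model, id
LIMIT 1000;

Each page then costs about the same no matter how deep it is. A
separate point: MyISAM tables in MySQL 4.0 hold a table-level read
lock while a SELECT runs, so long reads block writers; moving the
table to InnoDB (row-level locking) is another way to keep it
available while big result sets stream out.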



On May 9, 2005, at 4:57 PM, Robert Citek wrote:

>
> On May 9, 2005, at 3:09 PM, Corey Tisdale wrote:
>> [original question snipped]
>
> How are you connecting to the MySQL database? Directly (e.g. via
> the MySQL client), or via ODBC or something else? Can you post a
> code snippet of what you are doing, along with the schema of the
> relevant tables?
>
> Regards,
> - Robert
> http://www.cwelug.org/downloads
> Help others get OpenSource software. Distribute FLOSS
> for Windows, Linux, *BSD, and MacOS X with BitTorrent
>
>
>
> --
> MySQL Windows Mailing List
> For list archives: http://lists.mysql.com/win32
> To unsubscribe: http://lists.mysql.com/win32?unsub=corey@bbqguys.com
>


--
MySQL Windows Mailing List
For list archives: http://lists.mysql.com/win32
To unsubscribe: http://lists.mysql.com/win32?unsub=gcdmw-win32@m.gmane.org

Re: Really big resultsets

On 10.05.2005 16:04:32 by Corey Tisdale

Sure.

[mysqld]
basedir=C:/mysql
datadir=C:/mysql/data

###Memory settings
set-variable = query_cache_size=128M
set-variable = key_buffer=128M
set-variable = max_allowed_packet=1M
set-variable = table_cache=128
set-variable = sort_buffer=2M
set-variable = record_buffer=2M
set-variable = myisam_sort_buffer_size=64M

###Connection settings
set-variable = thread_cache=10
set-variable = thread_concurrency=4
set-variable = wait_timeout=30
set-variable = interactive_timeout=3600
set-variable = max_connections=200
set-variable = max_connect_errors=999999999

###Full text index settings
set-variable = ft_min_word_len=3


[WinMySQLadmin]
Server=C:/mysql/bin/mysqld-nt.exe
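
A quick way to confirm the server actually picked these up is to
check the running values from the client (key_buffer in the option
file maps to the key_buffer_size variable):

SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'query_cache_size';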


Corey


On May 9, 2005, at 5:39 PM, Raúl D. Pittí Palma wrote:
> Can you post your my.cnf file?
>
>
> Corey Tisdale wrote:
>> [earlier messages quoted in full; snipped]
>
> --
> --------------------
> Raúl D. Pittí Palma
> Associate
> Global Engineering and Technologies
> mobile (507)-616-0194
> office (507)-264-2362
> Republic of Panama
> www.globaltecsa.com


--
MySQL Windows Mailing List
For list archives: http://lists.mysql.com/win32
To unsubscribe: http://lists.mysql.com/win32?unsub=gcdmw-win32@m.gmane.org