Slooooow query in MySQL.
on 19.07.2007 23:19:19 by Rob Adams
I have a query that I run using MySQL that returns about 60,000-plus rows.
It's so large that I've just been testing it with a limit 0, 10000 (ten
thousand) on the query. That used to take about 10 minutes to run,
including processing time in PHP, which spits out XML from the query. I
decided to chunk the query down into 1,000-row increments and tried that.
The script processed 10,000 rows in 23 seconds! I was amazed! But
unfortunately it takes quite a bit longer than 6*23 seconds to process the
60,000 rows that way (1,000 at a time). It takes almost 8 minutes. I can't
figure out why it takes so long, or how to make it faster. The data for
60,000 rows is about 120MB, so I would prefer not to use a temporary table.
Any other suggestions? This is probably more a db issue than a php issue,
but I thought I'd try here first.
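[Editor's sketch] One plausible explanation for the 8 minutes, assuming the server cannot seek straight to a LIMIT offset: each `limit offset, 1000` query walks past all the skipped rows again, so the total rows touched grow roughly quadratically with the number of chunks. A small Python simulation of that cost model (the numbers are illustrative, not measurements):

```python
# Why repeated "LIMIT offset, 1000" queries slow down as the offset grows:
# without an index-assisted seek to the offset, each chunked query must walk
# past `offset` rows before returning the next 1,000.

def rows_walked(total_rows, chunk):
    """Rows the server touches paging through `total_rows` in `chunk`-row LIMITs."""
    walked = 0
    for offset in range(0, total_rows, chunk):
        # Each query scans past the offset, then reads the chunk itself.
        walked += offset + min(chunk, total_rows - offset)
    return walked

print(rows_walked(60_000, 60_000))  # one big query: 60,000 rows touched
print(rows_walked(60_000, 1_000))   # 60 chunked queries: 1,830,000 rows touched
```

This matches the observed behaviour: 60 chunks do ~30x the row-scanning work of one pass, so the chunked run cannot finish in 6*23 seconds.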
--
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
Re: Slooooow query in MySQL.
on 20.07.2007 00:45:37 by Kevin Murphy
Seeing the query would help.
Are you using sub-queries? I believe that those can make the time go
up exponentially.
--
Kevin Murphy
Webmaster: Information and Marketing Services
Western Nevada College
www.wnc.edu
775-445-3326
P.S. Please note that my e-mail and website address have changed from
wncc.edu to wnc.edu.
On Jul 19, 2007, at 2:19 PM, Rob Adams wrote:
> [original message snipped]
Re: Slooooow query in MySQL.
on 20.07.2007 01:12:03 by dmagick
Rob Adams wrote:
> [original message snipped]
Sounds like missing indexes or something.
Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html
--
Postgresql & php tutorials
http://www.designmagick.com/
Re: Slooooow query in MySQL.
on 20.07.2007 12:05:19 by Oskar
Rob Adams wrote:
> [original message snipped]
60k rows is not that much; I have tables with 500k rows and queries run
smoothly.
Anyway, we cannot help you unless you post:
1. "show create table"
2. the result of "explain query"
3. the query itself
OKi98
Re: Slooooow query in MySQL.
on 20.07.2007 12:11:41 by Aleksandar Vojnovic
60k records shouldn't be a problem. Show us the query you're making and
the table structure.
OKi98 wrote:
> [original message and OKi98's reply snipped]
Re: Slooooow query in MySQL.
on 20.07.2007 13:26:55 by Stut
Chris wrote:
> Rob Adams wrote:
>> [original message snipped]
>
> Sounds like missing indexes or something.
>
> Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html
If that were the case I wouldn't expect limiting the number of rows
returned to make a difference, since the actual query is the same.
Chances are it's purely a data transfer delay. Do a test with the same
query but only grab one of the fields - something relatively small like an
integer field - and see if that's significantly quicker. I'm betting it
will be.
If that is the problem, you need to make sure you're only fetching the
fields you need. You may also want to look into changing the cursor type
you're using, although I'm not sure whether that's possible with MySQL,
never mind how to do it.
-Stut
--
http://stut.net/
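[Editor's sketch] Stut's data-transfer theory can be sanity-checked with rough arithmetic from Rob's own numbers (the per-row figure is derived, not measured):

```python
# 120 MB across 60,000 rows is about 2 KB per row, so shipping and building
# the full result set could plausibly dominate the 8 minutes. An integer-only
# probe of the same query would move roughly three orders of magnitude less
# data per row, which is what makes Stut's test a useful diagnostic.

total_bytes = 120 * 1024 * 1024   # Rob's reported result-set size
rows = 60_000                     # Rob's reported row count
bytes_per_row = total_bytes / rows
print(round(bytes_per_row))       # ~2097 bytes per row
```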
Re: Slooooow query in MySQL.
on 23.07.2007 05:01:30 by dmagick
Stut wrote:
> Chris wrote:
>> Rob Adams wrote:
>>> [original message snipped]
>>
>> Sounds like missing indexes or something.
>>
>> Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html
>
> If that were the case I wouldn't expect limiting the number of rows
> returned to make a difference since the actual query is the same.
Actually it can. I don't think MySQL does this, but PostgreSQL does take
the limit/offset clauses into account when generating a plan.
http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT
Not really relevant to the problem though :P
--
Postgresql & php tutorials
http://www.designmagick.com/
Re: Slooooow query in MySQL.
on 23.07.2007 11:40:12 by Stut
Chris wrote:
> Stut wrote:
>> Chris wrote:
>>> Rob Adams wrote:
>>>> [original message snipped]
>>>
>>> Sounds like missing indexes or something.
>>>
>>> Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html
>>
>> If that were the case I wouldn't expect limiting the number of rows
>> returned to make a difference since the actual query is the same.
>
> Actually it can. I don't think mysql does this but postgresql does take
> the limit/offset clauses into account when generating a plan.
>
> http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT
>
> Not really relevant to the problem though :P
How many queries do you run with an order? But you're right: if there is
no ORDER BY clause, adding a limit probably will make a difference. Still,
there must be an ORDER BY when you use LIMIT, to ensure the SQL engine
doesn't give you the same rows in response to more than one of the queries.
-Stut
--
http://stut.net/
Re: Slooooow query in MySQL.
on 23.07.2007 11:46:01 by Stut
Stut wrote:
> [earlier messages snipped]
> How many queries do you run with an order? But you're right, if there is
> no order by clause adding a limit probably will make a difference, but
> there must be an order by when you use limit to ensure the SQL engine
> doesn't give you the same rows in response to more than one of the queries.
Oops, that was meant to say "How many queries do you run *without* an
order?"
-Stut
--
http://stut.net/
Re: Slooooow query in MySQL.
on 23.07.2007 17:20:56 by Rob Adams
select h.addr, h.city, h.county, h.state, h.zip, 'yes' as show_prop,
    h.askingprice, '' as year_built, h.rooms, h.baths,
    '' as apt, '' as lot, h.sqft, h.listdate, '' as date_sold, h.comments, h.mlsnum,
    r.agency, concat(r.fname, ' ', r.lname) as rname,
    r.phone as rphone, '' as remail, '' as status, '' as prop_type,
    ts.TSCNfile as picture,
    h.homeid as homeid, 'yes' as has_virt
from ProductStatus ps, home h, realtor r, ProductBin pb
left join TourScene ts
    on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
where ps.PSTSstatus = 'posted'
    and pb.PBINid = PSTS_POid
    and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid
limit {l1}, {l2}
Here is the query. I didn't know it needed an ORDER BY clause for the
LIMIT to work properly. I'll probably order by h.listdate.
-- Rob
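[Editor's sketch] Once the query is ordered by h.listdate, the growing-offset LIMIT can be replaced by "keyset" (seek) pagination: each chunk resumes strictly after the last row seen, so an index on (listdate, id) lets the server seek to the resume point instead of re-walking the skipped rows. Column names are taken from Rob's query; using h.id as the tie-breaker is an assumption. A small helper that builds the per-chunk tail of the SQL:

```python
def chunk_tail(last_listdate=None, last_id=None, chunk=1000):
    """Build the extra predicate plus ORDER BY/LIMIT tail for the next chunk.

    The expanded (a > x OR (a = x AND b > y)) form is used rather than a
    row-value comparison, since older MySQL versions optimize it better.
    """
    seek = ""
    if last_listdate is not None:
        # Resume strictly after the last (listdate, id) pair of the previous chunk.
        seek = ("and (h.listdate > '{0}' or (h.listdate = '{0}' and h.id > {1})) "
                .format(last_listdate, last_id))
    return seek + "order by h.listdate, h.id limit {0}".format(chunk)

print(chunk_tail())                     # first chunk: no seek predicate
print(chunk_tail('2007-06-01', 12345))  # later chunk resumes after that row
```

The PHP side would remember the last row's listdate and id from each chunk and feed them into the next query in place of the {l1}, {l2} placeholders.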
"Stut" wrote in message
news:46A4777C.9040000@gmail.com...
> [quoted messages snipped]
Re: Slooooow query in MySQL.
on 24.07.2007 01:48:22 by dmagick
Stut wrote:
> Stut wrote:
>> [earlier messages snipped]
>> How many queries do you run with an order? But you're right, if there
>> is no order by clause adding a limit probably will make a difference,
>> but there must be an order by when you use limit to ensure the SQL
>> engine doesn't give you the same rows in response to more than one of
>> the queries.
>
> Oops, that was meant to say "How many queries do you run *without* an
> order?"
Almost never - but my point was actually this sentence:
The query planner takes LIMIT into account when generating a query plan,
so you are very likely to get different plans (yielding different row
orders) depending on what you use for LIMIT and OFFSET.
--
Postgresql & php tutorials
http://www.designmagick.com/
Re: Slooooow query in MySQL.
on 24.07.2007 01:54:51 by dmagick
Rob Adams wrote:
> [query snipped]
> Here is the query. I didn't know that it needed to have an ORDER clause
> in it for the limit to work properly. I'll probably order by h.listdate
If you don't have an ORDER BY clause then you're going to get
inconsistent results. The database will never guarantee returning
results in a set order unless you tell it to by specifying an order by
clause.
To speed up your query, make sure you have indexes on:
TourScene(TSCNtourId, TSCN_MEDIAid)
ProductBin(PBINid, PBIN_HALOid)
home(id, listdate)
realtor(realtorid)
If you can't get it fast, then post the EXPLAIN output.
--
Postgresql & php tutorials
http://www.designmagick.com/
Re: Slooooow query in MySQL.
on 24.07.2007 08:26:08 by Aleksandar Vojnovic
In addition to Chris's suggestions, you should also alter the homeid
column (set its default to NULL and update the existing rows, which
shouldn't be a problem) so you don't have to do a double check on the
same column. I would also suggest making the TSCN_MEDIAid column an
int, not a varchar.
Aleksander
Chris wrote:
> [quoted messages snipped]