Allowing multiple, simultaneous, non-blocking queries.
on 26.03.2010 12:45:11 by Richard Quadling
Hi.
As I understand things, one of the main issues in the "When will PHP
grow up" thread was the ability to issue multiple queries in parallel
via some sort of threading mechanism.
Due to the complete overhaul required of the core and extensions to
support userland threading, the general consensus was a big fat "No!".
As I understand things, it is possible, in userland, to use multiple,
non-blocking sockets for file I/O (something I don't seem to be able
to achieve on Windows http://bugs.php.net/bug.php?id=47918).
Can this process be "leveraged" to allow for non-blocking queries?
Being able to throw out multiple non-blocking queries would allow for
the "queries in parallel" issue.
My understanding is that at the base level, all queries are running on
a socket in some way, so isn't this facility nearly already there in
some way?
Regards,
Richard.
--
-----
Richard Quadling
"Standing on the shoulders of some very clever giants!"
EE : http://www.experts-exchange.com/M_248814.html
EE4Free : http://www.experts-exchange.com/becomeAnExpert.jsp
Zend Certified Engineer : http://zend.com/zce.php?c=ZEND002498&r=213474731
ZOPA : http://uk.zopa.com/member/RQuadling
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
Re: Allowing multiple, simultaneous, non-blocking queries.
on 26.03.2010 13:05:19 by Peter Lind
Hi Richard
At the end of discussion, the best bet for something that "approaches"
a threaded version of multiple queries would be something like:
1. open connection to database
2. issue the query using an asynchronous call (mysql and postgresql support
this; I haven't checked the others)
3. pick up result when ready
To get the threaded-ness, just open one connection per query you want to
run asynchronously and pick each result up when you're ready for it - i.e.
iterate over steps 1-2, then do step 3 when things are ready.
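Peter's three steps can be sketched with mysqlnd's asynchronous API (PHP 5.3+ with mysqli built against mysqlnd); the host, credentials and queries below are placeholders:

```php
<?php
// Fire several queries on separate connections without blocking,
// then poll until each result is ready. Connection details and the
// queries themselves are made up for illustration.
$queries = array(
    "SELECT SLEEP(1), 'first'",
    "SELECT SLEEP(1), 'second'",
);

$pending = array();
foreach ($queries as $sql) {
    // Step 1: one connection per in-flight query.
    $link = mysqli_connect('127.0.0.1', 'user', 'pass', 'test');
    // Step 2: issue the query asynchronously; this returns immediately.
    mysqli_query($link, $sql, MYSQLI_ASYNC);
    $pending[] = $link;
}

// Step 3: pick up each result as it becomes ready.
while (!empty($pending)) {
    $read = $error = $reject = $pending;
    if (mysqli_poll($read, $error, $reject, 1) < 1) {
        continue; // nothing ready yet
    }
    foreach ($read as $link) {
        if ($result = mysqli_reap_async_query($link)) {
            print_r($result->fetch_row());
            mysqli_free_result($result);
        }
        unset($pending[array_search($link, $pending, true)]);
        mysqli_close($link);
    }
}
?>
```

Because the server runs the two SLEEP(1) queries side by side, the whole loop should finish in roughly one second rather than two.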
Regards
Peter
On 26 March 2010 12:45, Richard Quadling wrote:
> [snip]
--
WWW: http://plphp.dk / http://plind.dk
LinkedIn: http://www.linkedin.com/in/plind
Flickr: http://www.flickr.com/photos/fake51
BeWelcome: Fake51
Couchsurfing: Fake51
Re: Allowing multiple, simultaneous, non-blocking queries.
on 28.03.2010 18:41:48 by Nathan Rixham
Richard Quadling wrote:
> [snip]
> My understanding is that at the base level, all queries are running on
> a socket in some way, so isn't this facility nearly already there in
> some way?
Yes.
"Threading" is only realistically needed when you have to get data from
multiple sources; you may as well fetch it all in parallel rather than
sequentially, to limit the amount of time your application / script is
sitting idle and not doing any processing.
In the CLI you can leverage forking of the process to cover this.
When working in the http layer / through a web server you can leverage
http itself by giving each query its own url and sending out every
request in a single http session; allowing the web server to do the
heavy lifting and multi-threading; then you get all responses back in
the order you requested.
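As a sketch of that http-layer approach: curl's "multi" interface lets a script fire all the fragment requests at once and collect the bodies when they're done (the URLs here are invented):

```php
<?php
// Issue several http requests in parallel with curl_multi and read
// the responses back afterwards. The URLs are placeholders.
$urls = array(
    'http://example.com/get-comments?post=123',
    'http://example.com/get-related?post=123',
);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers; the web server handles them concurrently.
$running = null;
do {
    curl_multi_exec($mh, $running);
    if ($running > 0) {
        curl_multi_select($mh); // wait for activity instead of spinning
    }
} while ($running > 0);

foreach ($handles as $ch) {
    $body = curl_multi_getcontent($ch);
    // ... process $body ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>
```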
In both environments you can use non-blocking sockets for your
communications with other services and third parties; whilst you can only
process the returned data sequentially, at least all the foreign
services are doing their work at the same time. This cuts down both the
user-perceived runtime and the "real" time (since your own php code can
ultimately only run so fast).
A short example would be to consider using the non-blocking mysql query
function against multiple connections; this way mysql is doing the heavy
lifting in parallel and you are processing results sequentially.
In all scenarios /all/ of the contributing aspects have to be considered
though; the number of open connections, how much extra weight that puts
on the server (having a knock on effect on other processes), what
happens when one of the "threads" fails and so forth.
Normally there are many different ways to handle the same problem,
though; such as views at the rdbms level, publishing / caching output,
or considering whether you are still in the right language - sometimes
factoring the bits which require multi-threading into different languages
and services lends itself to a nicer solution.
And finally, more often than not, the same problem can be addressed by
taking the final output and working out how to produce it in reverse:
many queries can be turned into one, data can be normalised higher up
the chain, sorting can occur in php rather than in the rdbms, and many
more solutions. There are always many ways to skin the cat :)
Regards!
Re: Allowing multiple, simultaneous, non-blocking queries.
on 28.03.2010 19:40:28 by Per Jessen
Richard Quadling wrote:
> [snip]
>
> Due to the complete overhaul required of the core and extensions to
> support userland threading, the general consensus was a big fat "No!".
Maybe a "Thanks, but no thanks".
> [snip]
AFAICT (i.e. without having tried it), mysqlnd enables you to do
asynchronous queries on mysql - the documentation is a little lacking.
Personally speaking, that would be my first avenue of attack.
--
Per Jessen, Zürich (8.2°C)
Re: Re: Allowing multiple, simultaneous, non-blocking queries.
on 28.03.2010 20:13:52 by Adam Richardson
> "Threading" is only realistically needed when you have to get data from
> multiple sources; you may as well get it all in parallel rather than
> sequentially to limit the amount of time your application / script is
> sitting stale and not doing any processing.
>
> In the CLI you can leverage forking to the process to cover this.
>
> When working in the http layer / through a web server you can leverage
> http itself by giving each query its own url and sending out every
> request in a single http session; allowing the web server to do the
> heavy lifting and multi-threading; then you get all responses back in
> the order you requested.
Regarding leveraging http to achieve multi-threading-like capabilities, I've
tried this using my own framework (each individual dynamic region of a page
is automatically available as a REST-ful call to the same page to facilitate
ajax capabilities), and I tried using curl to process each of the
regions in parallel to see if the pseudo-threading would be an advantage.
In my tests, the overhead of the additional http requests killed any
advantage that might have been gained by generating the dynamic regions in
parallel. Do you know of any examples where this actually improved
performance? If so, I'd like to see them so I could experiment more with
the ideas.
Thanks,
Adam
--
Nephtali: PHP web framework that functions beautifully
http://nephtaliproject.com
Re: Re: Allowing multiple, simultaneous, non-blocking queries.
on 28.03.2010 20:45:11 by Nathan Rixham
Adam Richardson wrote:
>> [snip]
>
> Regarding leveraging http to achieve multi-threading-like capabilities, I've
> tried this using my own framework (each individual dynamic region of a page
> is automatically available as a REST-ful call to the same page to facilitate
> ajax capabilities, and I tried using curl to parallel process each of the
> regions to see if the pseudo threading would be an advantage.)
>
> In my tests, the overhead of the additional http requests killed any
> advantage that might have been gained by generating the dynamic regions in a
> parallel fashion. Do you know of any examples where this actually improved
> performance? If so, I'd like to see them so I could experiment more with
> the ideas.
Hi Adam,
Good question, and you picked up on something I neglected to mention.
With HTTP/1.1 came a little-used addition - pipelining - which allows you
to send multiple requests through a single connection; that means you can
load up multiple requests and receive all the responses, in sequence, in a
single "call".
Thus rather than the usual chain of:
open connection
send request
receive response
close connection
repeat
you can actually do:
open connection
send requests 1-10
receive responses 1-10
close connection
The caveat is one connection per server; but it's also interesting to
note that due to the "host" header you can call different "sites" on the
same physical machine.
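A bare-bones sketch of that pipelined exchange over a raw socket (host and paths invented; real code would parse Content-Length / chunked responses properly rather than reading to EOF):

```php
<?php
// Send two pipelined HTTP/1.1 requests down one connection, then read
// the responses back in order. The last request asks the server to
// close the connection so the read loop terminates.
$host  = 'example.com';
$paths = array('/article/123/comments', '/article/124/comments');

$fp = fsockopen($host, 80, $errno, $errstr, 5);
if (!$fp) {
    die("connect failed: $errstr\n");
}

$last = count($paths) - 1;
foreach ($paths as $i => $path) {
    $conn = ($i === $last) ? 'close' : 'keep-alive';
    fwrite($fp, "GET $path HTTP/1.1\r\n"
              . "Host: $host\r\n"
              . "Connection: $conn\r\n\r\n");
}

// Responses arrive in the same order the requests were sent.
while (!feof($fp)) {
    echo fgets($fp);
}
fclose($fp);
?>
```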
I do have "an old class" which covers this; I'll send it to you
off-list so you can have a play.
In the context of this; it is also well worth noting some additional
bonuses.
By factoring each data-providing source (which could even be a single
sql query) into a script of its own, with its own URI, you can
implement static caching of results via the web server on a
case-by-case basis.
A simple example I often used to use would be as follows:
uri: http://example.com/get-comments?post=123
source:
<?php
// if the static comments query results cache needs updating
// or doesn't exist then generate it
if(
    file_exists('/query-results/update-comments-123')
    || !file_exists('/query-results/comments-123')
) {
    if( $results = $db->query($query) ) {
        // only save the results if they are good
        file_put_contents(
            '/query-results/comments-123',
            json_encode($results)
        );
        // clear the "needs updating" flag now the cache is fresh
        @unlink('/query-results/update-comments-123');
    }
}
echo file_get_contents('/query-results/comments-123');
exit();
?>
I say "used to" because I've since adopted a more restful & lighter way
of doing things;
uri: http://example.com/article/123/comments
and my webserver simply returns the static file using os file cache and
its own cache to keep it nice and speedy.
On the generation side: every time a comment is posted, the script which
saves the comment simply regenerates said file containing the static
query results.
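The regeneration step itself is trivial; a stripped-down sketch (paths and data invented, with the query results faked as a plain array):

```php
<?php
// A comment has just been saved; re-run the comments query (faked
// here) and rewrite the static results file, so the web server can
// serve it later without touching php or the database.
$cacheFile = sys_get_temp_dir() . '/comments-123.json';

$results = array(
    array('author' => 'rq', 'body' => 'first!'),
    array('author' => 'nr', 'body' => 'second'),
);
file_put_contents($cacheFile, json_encode($results));

// From here on, reads are a plain static-file hit.
echo file_get_contents($cacheFile);
?>
```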
For anybody wondering why.. I'll let ab do the talking:
Server Software: Apache/2.2
Server Hostname: 10.12.153.70
Server Port: 80
Document Path: /users/NMR/70583/forum_post
Document Length: 10828 bytes
Concurrency Level: 250
Time taken for tests: 1.432020 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 110924352 bytes
HTML transferred: 108323312 bytes
Requests per second: 6983.14 [#/sec] (mean)
Time per request: 35.800 [ms] (mean)
Time per request: 0.143 [ms] (mean, across all concurrent requests)
Transfer rate: 75644.20 [Kbytes/sec] received
Yes, that's 6983 requests per second completed on a bog-standard LAMP
box: one dual-core cpu and 2gb of ram.
Reason enough?
Regards!
Re: Re: Allowing multiple, simultaneous, non-blocking queries.
on 29.03.2010 03:08:29 by Phpster
On Mar 28, 2010, at 2:45 PM, Nathan Rixham wrote:
> [snip]
I am interested in how you are handling security in this process. How
are you managing sessions with the restful interface? This is the one
thing that really interests me with the whole restful approach.
Bastien
Sent from my iPod
Re: Re: Allowing multiple, simultaneous, non-blocking queries.
on 29.03.2010 03:35:22 by Nathan Rixham
Phpster wrote:
> I am interested in how you are handling security in this process. How
> are you managing sessions with the restful interface? This is the one
> thing that really interests me with the whole restful approach.
one doesn't do sessions with rest :)
http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
95% of the time the URIs don't need any security or "session" type
stuff, as it's all public data (think about it: if it's on a page, it's
naturally public).
With regards to security: personally I use client-side ssl certificates and
call through https (and further, foaf+ssl); however, any old
basic/digest/whatever authentication will do.
The major point of rest is to expose everything needed via GET on URIs
(hypermedia as the engine of application state); URIs are not GETable at
a later date if they require session data. Hence you pass or prompt
for any needed credentials, and further abstract the security into the
transport layer (or tunnel, in the case of https).
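For the simple basic-auth case, the credentials ride along on each request rather than living in any session state; a minimal sketch (URL and credentials invented):

```php
<?php
// Pass HTTP Basic credentials per request via a stream context; no
// cookie or server-side session is involved, so the URI stays GETable.
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => 'Authorization: Basic ' . base64_encode('user:secret'),
    ),
));
$body = file_get_contents('https://example.com/article/123/comments', false, $context);
?>
```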
regards!
Re: Re: Allowing multiple, simultaneous, non-blocking queries.
on 29.03.2010 04:34:35 by Adam Richardson
Hi Nathan,
> By factoring each data providing source (which could even be a single sql
> query) in to scripts of their own, with their own URIs - it allows you to
> implement static caching of results via the web server on a case by case
> basis.
My web framework automatically builds in REST-ful calls that return the
markup for a dynamic page region. For instance, on the homepage for my
framework (http://nephtaliproject.com), you can view the resulting markup
for the "Sites using Nephtali" by visiting the URL
http://nephtaliproject.com/index.php?nmode=htmlfrag&npipe=sites
Same thing for the blog entries:
http://nephtaliproject.com/index.php?nmode=htmlfrag&npipe=announcements
This makes ajax work really easy, as you can see in the code generator pages
I've built for the framework (http://nephtaliproject.com/nedit/).
Additionally, this has a performance advantage over using javascript to
retrieve one particular section of a page, as my mechanism short-circuits
and only processes the dynamic region requested.
I'd played with processing the dynamic regions in my pages in parallel (and
caching) in the past, using a mechanism that didn't share one connection
across the http requests in the manner you described. I utilized the built-in
REST-ful calls for each dynamic region, as having the REST calls built right
in makes this quite easy. But, alas, the performance wasn't that great.
So, yes, I'd like to see the class that implements multiple http calls with
one connection, if you'd be so kind ;) Maybe I'll fiddle with the parallel
processing again.
Thanks,
Adam
--
Nephtali: PHP web framework that functions beautifully
http://nephtaliproject.com