db2 - persistent connection
on 02.09.2006 08:21:56 by turkeydelight
I have several (100-200) small exits coming from an app that update DB2
tables. This works fine, but the DBAs don't like the overhead of that many
open connections. This isn't CGI/Apache; these are all separate, terminating
processes. From what I gather, there is no way to pass the dbh between
processes? Is some sort of middle daemon (I guess non-Perl) that keeps one
persistent connection to DB2, with Perl feeding the daemon, the only option?
I would think others have run into this wall before?
Thanks!
--
--------------------------------------------------------
Why is this technology an anathema to me?
Re: db2 - persistent connection
on 05.09.2006 06:06:00 by dmcbride
On Saturday 02 September 2006 00:21, Jack Faley ( The Tao of Jack ) wrote:
> I have several (100-200) small exits coming from an app that update
> DB2 tables. This works fine, but the DBAs don't like the overhead of that
> many open connections. This isn't CGI/Apache; these are all separate,
> terminating processes. From what I gather, there is no way to pass the dbh
> between processes?
No.
> Is some sort of middle daemon (I guess non-Perl) that keeps one persistent
> connection to DB2, with Perl feeding the daemon, the only option?
That would be called ... "DB2" ;-)
> I would think others have run into this wall before?
Option 1: stop doing this in a bunch of small exits - put them all in a single
exit. Probably not a real option.
Option 2: can you put this in stored procedures instead?
Option 3: DBD::Proxy may help here. Or at least the concept - set up a
POE-based pseudo-server which receives messages from apps, and funnels them
via a single connection to your server, then passes the results back. That
sounds like not only a lot of work to write, but also a lot of work for the
computer. Oh, and all those connections just moved from going directly to
the server to directly to the proxy/concentrator - I'm not really seeing a
savings there.
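For reference, a minimal sketch of what the client side of that approach could
look like with DBD::Proxy (the host, port, credentials, table and DB2 database
name are made-up placeholders; a dbiproxy server, e.g. "dbiproxy
--localport=3334", would have to be running separately and hold the real DB2
connection):

    #!/usr/bin/perl
    # Each short-lived exit talks to the proxy instead of to DB2 directly;
    # the long-lived dbiproxy process owns the actual DB2 connection.
    use strict;
    use warnings;
    use DBI;

    # Placeholder host/port/DSN/credentials for illustration only.
    my $dsn = 'dbi:Proxy:hostname=proxyhost;port=3334;dsn=dbi:DB2:SAMPLE';
    my $dbh = DBI->connect($dsn, 'db2user', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    # One small unit of work, then exit.
    $dbh->do('UPDATE some_table SET status = ? WHERE id = ?',
             undef, 'done', 42);
    $dbh->disconnect;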
To be honest, I suspect that any option (other than a complete re-architecture
of how you approach the business problem you're dealing with in these exits)
will actually be a larger load on the system than what you're currently
working with.
If option 1 works (which I doubt from what little info was in your original
question), I think it's probably the only solution that would satisfy your
DBAs. But then again, I'm not seeing their problem, nor really what is
causing it, so I'm just taking a wild guess ;-)
Re: db2 - persistent connection
on 05.09.2006 06:28:31 by turkeydelight
On 9/4/06, Darin McBride wrote:
>
> On Saturday 02 September 2006 00:21, Jack Faley ( The Tao of Jack ) wrote:
> > I have several (100-200) small exits coming from an app that update
> > DB2 tables. This works fine, but the DBAs don't like the overhead of
> > that many open connections. This isn't CGI/Apache; these are all
> > separate, terminating processes. From what I gather, there is no way to
> > pass the dbh between processes?
>
> No.
>
> > Is some sort of middle daemon (I guess non-Perl) that keeps one
> > persistent connection to DB2, with Perl feeding the daemon, the only
> > option?
>
> That would be called ... "DB2" ;-)
>
> > I would think others have run into this wall before?
>
> Option 1: stop doing this in a bunch of small exits - put them all in a
> single exit. Probably not a real option.
>
> Option 2: can you put this in stored procedures instead?
>
> Option 3: DBD::Proxy may help here. Or at least the concept - set up a
> POE-based pseudo-server which receives messages from apps, and funnels
> them via a single connection to your server, then passes the results
> back. That sounds like not only a lot of work to write, but also a lot
> of work for the computer. Oh, and all those connections just moved from
> going directly to the server to directly to the proxy/concentrator - I'm
> not really seeing a savings there.
>
> To be honest, I suspect that any option (other than a complete
> re-architecture of how you approach the business problem you're dealing
> with in these exits) will actually be a larger load on the system than
> what you're currently working with.
>
> If option 1 works (which I doubt from what little info was in your
> original question), I think it's probably the only solution that would
> satisfy your DBAs. But then again, I'm not seeing their problem, nor
> really what is causing it, so I'm just taking a wild guess ;-)
Thank you for your reply. If there was any way to mitigate the small exits I
would do it, but I'm modifying an app that was never meant to be extensible.
So you are correct: I'm moving the load to a daemon in the middle with a
persistent connection rather than talking directly to DB2 :-) . I haven't
benchmarked it. I'm no DBA, but I have a feeling this will actually be slower
overall.
I am holding out hope that DBD::Proxy with connect_cached might do the trick,
but the docs seem to indicate it's not fully implemented. No harm in trying,
though. Even if it doesn't, I'll test whether the DBAs can tell that all those
connections are still there behind a "middleware" layer that is supposed to
pool them but actually doesn't do anything.
How would Stored Procedures help? Just for the performance after connecting?
Thanks!
--
--------------------------------------------------------
Why is this technology an anathema to me?
Re: db2 - persistent connection
on 05.09.2006 16:08:54 by dmcbride
On Monday 04 September 2006 22:28, Jack Faley ( The Tao of Jack ) wrote:
> Thank you for your reply. If there was any way to mitigate the small
> exits I would do it, but I'm modifying an app that was never meant to be
> extensible. So you are correct: I'm moving the load to a daemon in the
> middle with a persistent connection rather than talking directly to DB2
> :-) . I haven't benchmarked it. I'm no DBA, but I have a feeling this
> will actually be slower overall.
I don't see it being a DBA question - it's a machine resource question. If
you have n apps connecting to DB2, you need n sockets. But if you have n
apps connecting to a proxy which itself connects to DB2, you have n+1
sockets. If that proxy is on another machine, then you may save some
resource. But if the proxy is on the same machine as the server, there's no
way this will be as fast as direct connection, though it may not be
noticeably slower, either, depending on the hardware. E.g., on a single CPU
that is fully loaded it will be much slower, but on an 8-way system that is
only 90% loaded the difference may not even be noticeable.
> I am holding out hope that DBD::Proxy with connect_cached might do the
> trick, but the docs seem to indicate it's not fully implemented. No harm
> in trying, though. Even if it doesn't, I'll test whether the DBAs can tell
> that all those connections are still there behind a "middleware" layer
> that is supposed to pool them but actually doesn't do anything.
Each client will need to make a direct, individual, non-cacheable connection
to the proxy. The proxy can make a cached connection to the server (as long
as the previous connection isn't timed out, this will be pretty much a
no-op). Note also that in this scenario, all exits must use the same
authentication - if you're trying to get some stuff done as sysadm, and other
stuff done as someone unprivileged, you'll need a different proxy for each
user.
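A rough sketch of the proxy-side handle reuse described above, assuming
placeholder DSN and credentials; DBI's connect_cached hands back the same
underlying handle as long as the cached connection is still alive, so repeated
client requests inside the one long-lived proxy process stay cheap:

    # Inside the long-lived proxy/concentrator process (DSN, credentials
    # and the request-handling interface are illustrative assumptions).
    use strict;
    use warnings;
    use DBI;

    sub db2_handle {
        # Returns a cached handle; reconnects only if the old one died.
        return DBI->connect_cached(
            'dbi:DB2:SAMPLE', 'appuser', 'secret',
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

    # Called once per incoming client request:
    sub handle_request {
        my ($sql, @bind) = @_;
        my $dbh = db2_handle();
        return $dbh->do($sql, undef, @bind);
    }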
> How would Stored Procedures help? Just for the performance after connecting?
Stored procedures start out already connected on the server side. Mind you,
DB2 doesn't allow Perl-based stored procedures, so you'd also have to rewrite
them. But I'm not even sure that the logic you're doing will fit in a stored
procedure - I brought it up more as a poke to make you think about whether
this stuff is better as a stored procedure or not.
Re: db2 - persistent connection
on 06.09.2006 03:00:28 by turkeydelight
On 9/5/06, Darin McBride wrote:
>
> On Tuesday 05 September 2006 10:14, you wrote:
> > On 9/5/06, Darin McBride wrote:
> > > On Monday 04 September 2006 22:28, Jack Faley ( The Tao of Jack )
> > > wrote:
> > > I don't see it being a DBA question - it's a machine resource
> > > question. If you have n apps connecting to DB2, you need n sockets.
> > > But if you have n apps connecting to a proxy which itself connects
> > > to DB2, you have n+1 sockets. If that proxy is on another machine,
> > > then you may save some resource. But if the proxy is on the same
> > > machine as the server, there's no way this will be as fast as a
> > > direct connection, though it may not be noticeably slower, either,
> > > depending on the hardware. E.g., on a single CPU that is fully
> > > loaded it will be much slower, but on an 8-way system that is only
> > > 90% loaded the difference may not even be noticeable.
> >
> > The DBAs are PURELY interested in the number of open connections
> > made, as they generate DB2 overhead. I'm no DBA, but they tell me all
> > sorts of DB2 files and things go on every time DB2 opens a new
> > connection. Honestly, I don't see it as being a problem at all, since
> > these will occur periodically, but I must defer to the DBAs.
>
> About the only thing I can think of that you're really saving is the
> authentication phase - and that sounds really dangerous to me. It's like
> opening a security hole.
Yes, as I indicated before, I'm going to find out just how much overhead is
involved in many short open-connection/disconnect cycles versus funneling
through the proxy.
I've been browsing modules, because the proxy module documentation makes me
slightly concerned that some portions of connect_cached aren't fully
implemented (not DBI itself). I ran across Pixie and Tangram. Are you familiar
with them, or have an opinion on these?
> > > Each client will need to make a direct, individual, non-cacheable
> > > connection to the proxy. The proxy can make a cached connection to
> > > the server (as long as the previous connection isn't timed out, this
> > > will be pretty much a no-op). Note also that in this scenario, all
> > > exits must use the same authentication - if you're trying to get
> > > some stuff done as sysadm, and other stuff done as someone
> > > unprivileged, you'll need a different proxy for each user.
> >
> > All the connections will use the same user, so this will not be a
> > problem.
>
> Unless someone hacks a way to get into the proxy - you'll need
> authentication from the proxy as well. And will it be encrypted?
>
> > > > How would Stored Procedures help? Just for the performance after
> > > > connecting?
> > >
> > > Stored procedures start out already connected on the server side.
> > > Mind you, DB2 doesn't allow Perl-based stored procedures, so you'd
> > > also have to rewrite them. But I'm not even sure that the logic
> > > you're doing will fit in a stored procedure - I brought it up more
> > > as a poke to make you think about whether this stuff is better as a
> > > stored procedure or not.
> >
> > Thanks on this. The stored procedures won't really be a benefit to me
> > at this point, and again the DBAs just don't want a lot of "open
> > connection overhead". I'm going to look into this after I get
> > finished, just to see how much REALLY is being saved.
> >
> > Your statement above seems definitive about the proxy making a
> > cacheable connection, which brightened my morning.
>
> Yes - because the proxy can sit there as a concentrator - it lives in its
> own
> process space, and stays running even when nothing is using it.
> (Technically, you could design it to die after 5 minutes or something of
> non-use, and then have the interface module auto-start it when it's
> needed,
> to reduce resource requirements when not active, but I would personally
> try
> to avoid that if possible - way too much work for way too little return.)
> Because it's a single long-lived process, it can use a cached
> connection. As
> you've no doubt learned, connections cannot be passed from process to
> process - so this is only of use when you're still in the same process.
>
> In fact, it seems that DBD::Proxy explicitly gives this as a potential use
> for
> itself in CGI environment (lots of short-lived processes for CGI, not so
> much
> for mod_perl).
>
Ugh, the proxy authentication prior to the DB2 authentication presents another
thing to pursue. There's a lot to be said for embedded databases...
Thanks for all your information!
(Sorry, I just realized a few messages went to you and not to the list!)
Re: db2 - persistent connection
on 06.09.2006 12:43:44 by hjp
On 2006-09-05 08:08:54 -0600, Darin McBride wrote:
> On Monday 04 September 2006 22:28, Jack Faley ( The Tao of Jack ) wrote:
> > Thank you for your reply. If there was any way to mitigate the small
> > exits I would do it, but I'm modifying an app that was never meant to
> > be extensible. So you are correct: I'm moving the load to a daemon in
> > the middle with a persistent connection rather than talking directly
> > to DB2 :-). I haven't benchmarked it. I'm no DBA, but I have a feeling
> > this will actually be slower overall.
>
> I don't see it being a DBA question - it's a machine resource question.
> If you have n apps connecting to DB2, you need n sockets. But if you
> have n apps connecting to a proxy which itself connects to DB2, you have
> n+1 sockets.
>
If I understood the OP correctly, his DBA isn't concerned about
simultaneously open DB connections, but about many short-lived
connections. If you have a program (for example a CGI script) which
makes only a single query and then exits, then for 1000 invocations of
the program you have:
1000 x opening a DB connection, including authentication, creating the
       initial transaction, etc.
1000 x select
1000 x closing the DB connection.
If there is a middleware server which maintains a pool of DB
connections, that changes to:
   n x opening a connection, including authentication, creating the
       initial transaction, etc.
1000 x opening a connection to the middleware server
1000 x select
1000 x closing the connection to the middleware server
   n x closing the connection.
where n is the number of connections that need to be opened, which tends
toward 0 as the connection pool reaches a steady state.
Whether this actually reduces the load depends on the relative cost of
the DB connection overhead, the query, and the middleware server overhead.
For example, on one of our Oracle servers, a connect takes 27ms, a
"select sysdate from dual" 1.5ms, and a disconnect 0.8ms. I don't know
how much of these 27.8ms for connect/disconnect is actually time spent
by the server, but it is an upper bound on what we can save. So the
question is: Is saving *at most* 27.8ms per invocation of the program
worth the complication? If your program is invoked a million times per
day, probably yes (even if it's only 14ms and the middleware server adds
5ms of overhead, you still save 9ms x 1,000,000 = 9000 seconds per day).
If it is invoked only 1000 times a day, probably not.
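A small sketch of how such per-phase numbers can be measured from Perl with
Time::HiRes (the DB2 DSN, credentials and the dummy query are placeholder
assumptions; the same idea applies to any DBD driver):

    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    my ($dsn, $user, $pass) = ('dbi:DB2:SAMPLE', 'appuser', 'secret');

    my $t0  = [gettimeofday];
    my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
    my $t1  = [gettimeofday];
    $dbh->selectrow_array(
        'SELECT CURRENT TIMESTAMP FROM sysibm.sysdummy1');
    my $t2  = [gettimeofday];
    $dbh->disconnect;
    my $t3  = [gettimeofday];

    # Print the connect / query / disconnect split in milliseconds.
    printf "connect: %.1f ms, select: %.1f ms, disconnect: %.1f ms\n",
        1000 * tv_interval($t0, $t1),
        1000 * tv_interval($t1, $t2),
        1000 * tv_interval($t2, $t3);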
> Note also that in this scenario, all exits must use the same
> authentication - if you're trying to get some stuff done as sysadm, and
> other stuff done as someone unprivileged, you'll need a different proxy
> for each user.
The authentication to the proxy or middleware server can be a lot
simpler. For example, you can use a unix socket and set the permissions
so that only authorized users can open it.
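A minimal sketch of that idea, assuming a hypothetical socket path and a
stub request loop; the mode of the socket file is what restricts who may
talk to the middleware, so no extra password exchange is needed:

    use strict;
    use warnings;
    use Socket;               # for SOCK_STREAM
    use IO::Socket::UNIX;

    my $path = '/var/run/db2proxy.sock';    # placeholder path
    unlink $path;

    my $server = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM,
        Local  => $path,
        Listen => 5,
    ) or die "cannot create socket: $!";

    # Only the owner and group of the socket file may connect.
    chmod 0660, $path or die "chmod: $!";

    while (my $client = $server->accept) {
        my $request = <$client>;
        # ... forward $request over the one cached DB connection ...
        print {$client} "ok\n";
        close $client;
    }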
hp
--
_ | Peter J. Holzer | If I wanted to be "academically correct",
|_|_) | Sysadmin WSR | I'd be programming in Java.
| | | hjp@wsr.ac.at | I don't, and I'm not.
__/ | http://www.hjp.at/ | -- Jesse Erlbaum on dbi-users