preserving request body across redirects

On 27.12.2008 23:55:42 by Mark Hedges

Hi, I'm trying to figure out how to preserve the request
body across an OpenID login redirect cycle.

This handler is running as a PerlHeaderParserHandler.

Say the application's session times out, but the user posts
something by clicking submit.

They are redirected to the OpenID server but it says they
are still logged in and returns a positive response.

The user's experience should be seamless, it should go back
to the uri they requested first and post whatever they
posted.

Before they redirect to the OpenID server, I can grab their
post params or even $r->read() the raw request body and save
that in their session. (Potential session problems with
file uploads, I know.)

It's clear how to restore the original GET args when they
come back, $r->args($saved_args).
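
Something like this, I assume (the session hash and key names here
are purely illustrative):

# before redirecting to the OpenID server:
$session->{saved_args} = $r->args();

# and when they come back on the return_to request:
$r->args($session->{saved_args}) if defined $session->{saved_args};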

But when they come back from the OpenID server, how do I put
the saved request body or post params into the new request?

If I can't do this for the raw request body (i.e. a
non-param xml PUT request?) and can only do it by saving
POST params, do I have to create a new
APR::Request::Param::Table object? (with the current
request's pool?) If so, how do I put it into the current
request for further handlers to access?

Is this possible somehow?

Thanks.

Mark

Re: preserving request body across redirects

On 28.12.2008 09:08:37 by Mark Hedges

On Sat, 27 Dec 2008, Mark Hedges wrote:
> Hi, I'm trying to figure out how to preserve the request
> body across an OpenID login redirect cycle.
> ...
> But when they come back from the OpenID server, how do I put
> the saved request body or post params into the new request?

Aha, I hadn't quite grasped that request input filters run after
all the pre-response handlers, to filter anything read by the
PerlResponseHandler. So I assume my PerlHeaderParserHandler
can install a PerlInputFilterHandler which will replace the
request bucket brigades with new ones that contain the
contents of the preserved request body.
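
Something like this, maybe (an untested sketch; the package name
and pnotes key are made up):

package My::RestoreBodyFilter;
use strict;
use warnings;
use base qw(Apache2::Filter);
use Apache2::RequestUtil ();
use APR::Brigade ();
use APR::Bucket ();
use Apache2::Const -compile => qw(OK);

sub handler : FilterRequestHandler {
    my ($f, $bb, $mode, $block, $readbytes) = @_;
    unless ($f->ctx) {
        # first pass: hand back the preserved body instead of
        # reading anything from the (bodiless) redirected GET
        my $saved = $f->r->pnotes('saved_request_body') || '';
        $bb->insert_tail(APR::Bucket->new($bb->bucket_alloc, $saved));
        $bb->insert_tail(APR::Bucket::eos_create($bb->bucket_alloc));
        $f->ctx(1);
    }
    return Apache2::Const::OK;
}
1;

installed from the PerlHeaderParserHandler with something like
$r->add_input_filter(\&My::RestoreBodyFilter::handler).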

Does it matter that my pre-response handler created an
instance of Apache2::Request to deal with the OpenID params?

I will try this. Thanks for tips etc. if you have time.

On another note, is there any way to do a proper Authen
handler with an in-line login form, instead of popping up a
klunky browser authentication prompt?

Mark

Re: preserving request body across redirects

On 28.12.2008 09:27:51 by Fred Moyer

On Sun, Dec 28, 2008 at 12:08 AM, Mark Hedges wrote:
>
> On Sat, 27 Dec 2008, Mark Hedges wrote:
>> Hi, I'm trying to figure out how to preserve the request
>> body across an OpenID login redirect cycle.
>> ...
>> But when they come back from the OpenID server, how do I put
>> the saved request body or post params into the new request?

To persist data across HTTP requests you either need to use cookies or
stash the data in $r->connection->pnotes (less reliable, only use if
you understand your application connection handling details).

> On another note, is there any way to do a proper Authen
> handler with an in-line login form, instead of popping up a
> klunky browser authentication prompt?

I always use Apache2::AuthCookie (or my own subclass). Authentication
really comes down to code that sets $r->user; there are various layers
such as cookies to bridge the gaps, but Apache directives such as
Require user refer to $r->user. And look at the
PerlAuthenHandler, PerlAuthzHandler, and PerlFixupHandler phases.
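
The skeleton is tiny; something like this, where check_session()
stands in for whatever cookie or session validation you do:

package My::Authen;
use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::Const -compile => qw(OK HTTP_UNAUTHORIZED);

sub handler {
    my $r = shift;
    my $user = check_session($r);    # stand-in helper
    return Apache2::Const::HTTP_UNAUTHORIZED unless defined $user;
    $r->user($user);    # this is what 'Require user foo' tests against
    return Apache2::Const::OK;
}
1;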

Is this part of Apache2::Controller? An Apache2-based framework with
OpenID support sounds appealing.

Re: preserving request body across redirects

On 28.12.2008 14:46:22 by aw

Fred Moyer wrote:
> On Sun, Dec 28, 2008 at 12:08 AM, Mark Hedges wrote:
>> On Sat, 27 Dec 2008, Mark Hedges wrote:
>>> Hi, I'm trying to figure out how to preserve the request
>>> body across an OpenID login redirect cycle.
>>> ...
>>> But when they come back from the OpenID server, how do I put
>>> the saved request body or post params into the new request?
>
In Apache2::AuthCookie, the author uses a trick: converting your POST to a
GET; see the sub convert_to_get().
In fact, this is a bit of a misnomer, because what is really happening
is a conversion from having the arguments (parameters?) in the body of
the request to having them in the saved URL (enctype url-encoded
instead of multipart/form-data).
This method will work up to a certain request parameter size, I guess
(probably not for a file-upload POST, and also not if the parameters
need any specific character encoding).
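
From memory, the guts of it amount to roughly this (a paraphrase,
not the module's actual code):

# move the body params onto the query string, then pretend
# the request was a GET all along
use Apache2::Request ();
use Apache2::Const -compile => qw(M_GET);
my $req  = Apache2::Request->new($r);
my @args = map { $_ . '=' . $req->param($_) } $req->param;  # no escaping; sketch only
$r->args(join '&', @args);
$r->method('GET');
$r->method_number(Apache2::Const::M_GET);
$r->headers_in->unset('Content-Length');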

I don't think that using $r->connection->pnotes is a very safe bet in
this case, because with Keep-Alive you never know when the maximum
number of requests for this connection will be exhausted, or do you?

Regarding the avoidance of the browser's own login popup dialog:
This dialog pops up when the server sends back a 401 response
(unauthorised) with a header indicating that it requires Basic or
Digest authentication.
Thus, if you arrange for your authentication method to send back a login
page instead (with a 200 OK code), the browser will show your login
form, and not its own dialog.
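
In mod_perl terms, something like this (a sketch; have_valid_session()
and login_form() stand in for your own code):

use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK DONE);

sub handler {
    my $r = shift;
    return Apache2::Const::OK if have_valid_session($r);    # stand-in
    $r->content_type('text/html');
    $r->print(login_form($r->unparsed_uri));                # stand-in
    return Apache2::Const::DONE;    # 200 + form sent: no 401, no popup
}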

To save the original request in the meantime, there are several
possibilities:
1) as a hidden input field in the login form that you send back. When
the user fills in his id and password, and submits the form back to the
server, the hidden parameter will be sent back also, and can be used to
do a redirect in case of a successful authentication.

<input type="hidden" name="original_uri"
 value="/protected/what_he_first_wanted.html">

....

2) as a pnote to the connection, with the same caveat as before.
3) as the content of a cookie, set at the same time as sending the login
form (probably the easiest way).
4) server-side, in some kind of session-linked structure. The point
there is whether, before the user is authenticated, you have or have
not yet created a session structure.
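
For option 3, with libapreq2 for example (a sketch; the cookie name
is illustrative):

use Apache2::Cookie ();
my $cookie = Apache2::Cookie->new($r,
    -name  => 'orig_uri',
    -value => $r->unparsed_uri,
    -path  => '/',
);
$cookie->bake($r);    # Set-Cookie goes out along with the login form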

Re: preserving request body across redirects

On 28.12.2008 23:01:35 by Mark Hedges


On Sun, 28 Dec 2008, André Warnier wrote:
> > > > But when they come back from the OpenID server, how
> > > > do I put the saved request body or post params into
> > > > the new request?
> >
> In Apache2::AuthCookie, the author uses a trick: converting
> your POST to a GET; see the sub convert_to_get().

No, that's not what I want. What if it's a PUT, or
something else? I want it to replace the body of the
authenticated request with the body of the request that was
saved from when they clicked but were not authenticated.

> I don't think that using $r->connection->pnotes is a very
> safe bet in this case, because with Keep-Alive you never
> know when the maximum number of requests for this
> connection will be exhausted, or do you?

Yeah, I don't want people to have to depend on Keep-Alive,
especially if it is running on a cluster or behind a load
balancer or something.

> Regarding the avoidance of the browser's own login popup
> dialog : This dialog pops up when the server sends back a
> 401 response (unauthorised), with a header indicating that
> it needs a Basic or Digest authentication. Thus, if you
> arrange for your authentication method to send back a
> login page instead (with a 200 OK code), the browser will
> show your login form, and not its own dialog.

It didn't seem to work correctly when I ran it as a
PerlAuthenHandler. I'll try that again though.

> To save in the meantime the original request, there are
> several possibilities

Yeah, the thing is that I want the mechanism to be agnostic
to the type of request. It doesn't have to depend on any
parseable parameters from a POST. I want it to replicate
the request body no matter what. For example, if the raw
request body is an XML entity being set with a PUT request,
the client should redirect through the auth server and
should not have to re-send the original request. "Magic."

So it's looking like this:

- user logs in, does stuff, goes to get coffee
- user comes back, clicks 'submit' on whatever form
- session is idle:
- reads and stores raw request body from bb's
- saves the method number
- redirects to openid server
- openid server redirects to GET return url
- original method number restored
- installs input filter to replace bb's with
contents of preserved request body
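
The "reads and stores" step would be something like this, I figure
(illustrative session keys, Apache2::RequestIO for read()):

use Apache2::RequestIO ();    # for $r->read
my $body = '';
if (my $len = $r->headers_in->{'Content-Length'}) {
    $r->read($body, $len);
}
$session->{saved_body}   = $body;
$session->{saved_method} = $r->method_number;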

The question is, since the handler doing the preservation
and installing the input filter also instantiates an
Apache2::Request object, when it gets to the response phase
controller, will the response handler's Apache2::Request
instance read the replaced data or the cached data?

Sessions under the framework are required for the
authentication handler to work at all, so I just go ahead and
attach one anyway, prior to authentication. I guess this
might not be a good idea, or might expose it to DoS? Hrmm. I
don't really want to factor that out. I'll think about it.

Maybe the request body preservation should only happen by
default if they already had a session authenticated but only
timed out.

Mark

Re: preserving request body across redirects

On 29.12.2008 10:42:44 by aw

Mark Hedges wrote:
>
[...]
Hi.

Not to discourage you, but to point out some additional aspects..

There is a Belgian proverb that says essentially "one should not try to
be more catholic than the pope".
What I mean by that is that there are many twists to web authentication,
and it is really hard to try and cover all of them at once. Your
intentions are laudable, but you may end up biting off more than you
can chew. There are some good reasons why the myriad mod_perl
authentication modules only each cover part of the AAA scene.

The underlying truth is that HTTP was designed as a "fire and forget"
protocol, where each request/response cycle is unique and essentially
disconnected from any previous or subsequent request/response.
This means that, whatever you do, you first find yourself fighting
against the fundamental limitations of the protocol. In a way, it is as
if everything you do would be a "patch" applied over an uncooperative
and slippery basic surface. Worse yet, you have to apply your own AAA
patch over the existing patches (such as Keep-Alive, chunked encoding,
NTLM authentication, Cookies, byte ranges, DAV, cgi-bin, proxying..),
and make sure it works with all of them.

For example, if you want to be really agnostic, then you have to start
by saving the whole request each time, no matter if the user is
authenticated/authorised or not, because you do not know this yet. It
is only when the request has started to be parsed that you know which
of your <Location> or <Directory> sections apply, and thus which
authentication/authorization applies.
There may be clever ways to do this, but I suspect that it might in any
case be very inefficient, and not really workable for a production
environment (the ScriptLog directive of Apache is an example).

You might want to consider, for instance, whether you could instead
structure your application so that, prior to accessing a protected URL,
it is mandatory to do a first "authentication GET" to some document.

Another avenue may be to apply an old African proverb, which states that
to eat an elephant, one should do it bit by bit.
In other words, specialising your authentication subs by HTTP method,
and applying different schemes to HEAD, GET, POST, PUT, (MKCOL, OPTIONS..).

As an example of such a twist, imagine the following scenario: a user POSTs
(or PUTs) a very large file to the server. To my knowledge, Apache will
read the POST/PUT entirely and store it somewhere, prior to even calling
your authentication module. Then you read the POST/PUT, save it
somewhere again, and send them a login page to start your authentication
cycle.
Now imagine the user just does not know a valid login, and closes his
browser in despair. How do you clean up?

As another example, consider the following:
The DAV module/protocol allows one to create "web folders" in one's
Windows Explorer, corresponding on the server side to directories, in
which one can drag-and-drop files directly from one's desktop.
Of course you might want to authenticate/authorise the user doing this.
The Windows Explorer "web folder" implementation however does not
support cookies set by the webserver, so it is hard to do anything based
on cookies, be it only a cookie holding a session-id. It also only
supports Basic (and maybe Digest) authentication (no custom login pages
thus), and it does not support OpenID.
The only way I found was to do Basic authentication, and store the
authentication (or session) data in the Connection structure (like
$r->connection->notes). In that case it fits, because Windows-style
authentication is connection-oriented anyway.
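
Roughly like this (a sketch, assuming the usual Apache2::Const imports;
notes holds strings only, and verify_basic_auth() is a stand-in):

use Apache2::Connection ();
use APR::Table ();
my $c = $r->connection;
if (my $cached = $c->notes->get('cached_user')) {
    $r->user($cached);    # already checked on this keep-alive connection
    return Apache2::Const::OK;
}
my $user = verify_basic_auth($r)    # stand-in for the real check
    or return Apache2::Const::HTTP_UNAUTHORIZED;
$c->notes->set('cached_user', $user);
$r->user($user);
return Apache2::Const::OK;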

And as long as we are talking Windows (and since at least 90% of
corporations still use IE and Windows authentication internally), what
if one of your customers insists that to authenticate external users,
OpenID is OK, but for internal users they would really like you to use
the integrated Windows authentication (NTLM), since the user is already
authenticated in the Domain anyway.
NTLM authentication would play havoc with your scheme, because it
already itself relies on a multi-step exchange between browser and server.

Re: preserving request body across redirects

On 29.12.2008 15:16:38 by Adam Prime

Mark Hedges wrote:
>
> The question is, since the handler doing the preservation
> and installing the input filter also instantiates an
> Apache2::Request object, when it gets to the response phase
> controller, will the response handler's Apache2::Request
> instance read the replaced data or the cached data?
>

From what I understand, as soon as you try to use the body of a request
using Apache2::Request, it reads it all in, which might play havoc with
the idea of using a filter to modify a request / inject a body to replay
an old POST request.

However, without that complication it may be possible. Have you looked
at the filter documentation on perl.apache.org? Particularly the
examples in this section:

http://perl.apache.org/docs/2.0/user/handlers/filters.html#Input_Filters

My suggestion would be to try it and see what happens. Filters are one
aspect of mp2 that it seems like a lot of people haven't really used. I
personally have used OutputFilters for a bunch of things, but I haven't
used InputFilters at all. I'm also not familiar enough with how OpenID
works to really be able to comment on what you're trying to do.

Adam

Re: preserving request body across redirects

On 29.12.2008 19:59:26 by Mark Hedges



On Mon, 29 Dec 2008, David Ihnen wrote:
> > Say the application's session times out, but the user
> > posts something by clicking submit.
> >
> > They are redirected to the OpenID server but it says
> > they are still logged in and returns a positive
> > response.
>
> But you can't know that they are authenticated without
> redirecting them, because you dropped the session data
> that held that information.
>
> I think the obvious answer here is to keep the session
> data around as long as the authentication is valid for.
> Or keep this piece of state in a cookie so that it doesn't
> vanish when your server decides to drop session state.

I'm not sure I understand what you mean here David. They
don't get redirected for every request to the server. The
session data keeps a timestamp. They stay logged in (and do
not need to redirect thru the OpenID server) as long as the
timestamp is current. The whole point is to keep the
session current with an abstract tracking mechanism (the
default mechanism in the framework is a session ID cookie.)

But if they time out, and click submit, but the independent
OpenID server says they're still logged in, or they re-auth
automatically with the server (e.g. VeriSign Seatbelt
Firefox plugin), then it should act as if they never timed
out, i.e. when they come back from the redirect cycle, it
should continue to submit whatever they clicked.

On Mon, 29 Dec 2008, Adam Prime wrote:
> From what I understand, as soon as you try to use the body
> of a request using Apache2::Request, it reads it all in.
> Which might play havoc with the idea of using a Filter to
> modify a request / inject a body to replay an old post
> request.

Ah - yes I may have to go to using only the methods of the
Apache2::RequestRec family to get at the 'openid_url' param.
Then an input filter could rewrite the buckets before the
Response phase controller instantiates the Apache2::Request.
I'll try it and see.

On Mon, 29 Dec 2008, André Warnier wrote:
> Not to discourage you, but to point out some additional
> aspects..

Thanks André, these are very good points. I had considered a
size limit option where you could say that the request body
would not be preserved if it was over 2k or something. But
even that leaves the system open to DoS attacks unless the
application developer runs some very active cleanup script.

So I think the best plan is to preserve the request body
only if they already have an authenticated session and the
session timed out.

Good to know about other methods and not being able to use
cookies with Windows etc. But the session tracking
mechanism in Apache2::Controller is abstracted, so a
non-cookie mechanism could be used instead, like an output
filter that rewrites links with a session id query argument,
or something. Probably still would not work with DAV, but
it's the general idea.

> And as long as we are talking Windows (and since at least
> 90% of corporations still use IE and Windows
> authentication internally), what if one of your customers
> insists that to authenticate external users, OpenID is OK,
> but for internal users they would really like you to use
> the integrated Windows authentication (NTLM), since the
> user is already authenticated in the Domain anyway. NTLM
> authentication would play havoc with your scheme, because
> it already itself relies on a multi-step exchange between
> browser and server.

In this case I might create an internal OpenID server that
interfaced to our Windows NTLM. :-) Or, I'd stack an NTLM
auth handler above the OpenID handler, so if NTLM succeeds,
then the session timestamp is updated, and the OpenID
handler says OK without checking anything.

The point of Apache2::Controller is to abstract different
facets into stackable, subclassable object-oriented handlers
and then dispatch into a nice MVC-style controller
subroutine, but without all the overhead of something like
Catalyst. I dislike Catalyst's plugin architecture. Why do
I need a plugin layer above everything else on CPAN? I just
want to use something and do it and get it done. Why do I
need a model abstraction layer? I just want to use a
DBIx::Class schema or maybe sometimes I want to feed
straight out of DBI - the point is I don't want the
framework to get in the way.

Mark

Re: preserving request body across redirects

On 29.12.2008 20:34:25 by David Ihnen


Mark Hedges wrote:
> On Mon, 29 Dec 2008, David Ihnen wrote:
>
>>> Say the application's session times out, but the user
>>> posts something by clicking submit.
>>>
>>> They are redirected to the OpenID server but it says
>>> they are still logged in and returns a positive
>>> response.
>>>
>> But you can't know that they are authenticated without
>> redirecting them, because you dropped the session data
>> that held that information.
>>
>> I think the obvious answer here is to keep the session
>> data around as long as the authentication is valid for.
>> Or keep this piece of state in a cookie so that it doesn't
>> vanish when your server decides to drop session state.
>>
>
> But if they time out, and click submit, but the independent
> OpenID server says they're still logged in, or they re-auth
> automatically with the server (e.g. VeriSign Seatbelt
> Firefox plugin), then it should act as if they never timed
> out, i.e. when they come back from the redirect cycle, it
> should continue to submit whatever they clicked.
>
Maybe you could use something a bit more client-side, like an iframe
target, which CAN be resubmitted because the form still exists in state
on the page despite what's going on in the iframe.

- submit form target iframe
- iframe redirects for auth
- iframe gets auth
- success page activates parent re-submit
- submit form target iframe
- iframe success activates main page success state

Though that is, of course, specific to the application being programmed,
utilizing client-side javascript active stuff rather than particular web
server programming to transparently handle it on the server side using
basic html2.0 type structure.

I have to agree with others that a whole proxying layer to allow it
seems... excessive.

timtowtdi I guess.

David



Re: preserving request body across redirects

On 30.12.2008 01:40:04 by Mark Hedges

On Mon, 29 Dec 2008, David Ihnen wrote:
>
> Though that is, of course, specific to the application
> being programmed, utilizing client-side javascript active
> stuff rather than particular web server programming to
> transparently handle it on the server side using basic
> html2.0 type structure.
>
> I have to agree with others that a whole proxying layer to
> allow it seems... excessive.

I don't know who's recommending a proxying layer or who's
agreeing with you... are you aware of how OpenID works or
what it is?

Did you understand that the Apache2::Controller framework is
intended to be a general application framework, and should
not require you to implement pages in a certain way?

Mark

Re: preserving request body across redirects

On 30.12.2008 02:10:46 by David Ihnen


Mark Hedges wrote:
> On Mon, 29 Dec 2008, David Ihnen wrote:
>
>> Though that is, of course, specific to the application
>> being programmed, utilizing client-side javascript active
>> stuff rather than particular web server programming to
>> transparently handle it on the server side using basic
>> html2.0 type structure.
>>
>> I have to agree with others that a whole proxying layer to
>> allow it seems... excessive.
>>
>
> I don't know who's recommending a proxying layer or who's
> agreeing with you... are you aware of how OpenID works or
> what it is?
>
I think that any system that stores a post for later retrieval on a
separate request is, indeed, proxying that request for later use in a
different situation. Thus, to program your system to hold onto a form
put/post amounts to inserting a proxy-delay-for-authentication for that
action - even if semantically you want to say it's not a proxy because
it's not a separate application/codebase. I admit to introducing the
term proxy here without explaining why I consider the concept to be proxy.

Yes, I am aware of how OpenID works. And it works in-band unless the
application explicitly sidelines it - there is no inherent side-band
communication that the client and server will use - otherwise, you
wouldn't EVER do a main state redirect. The moment you have to redirect
to that openid server page, you have sidelined the entire stream of
browser-server communication - and as you have found in the problem
you're trying to solve - the state inherent therein, including the
content of the original request. Is the utilization of the stored form
data going to be through a different connection/request entirely after
authentication verification? Would require some tests to see if the
client behaves that way or not. I suspect it's not defined to be one way
or the other, but I may be wrong.
> Did you understand that the Apache2::Controller framework is
> intended to be a general application framework, and should
> not require you to implement pages in a certain way?
>
Not explicitly, but implicitly I would expect that, yes.

Is the controller framework going to require me to make my web system
operations depend on some sort of semi-persistent pan-server correlated
session state system? Would that not be requiring me to implement my web
application in a particular way? Okay, that may indeed be the role of a
framework though I'd no doubt chafe at the limitations myself. If I
have to write my web application a certain way, is it so unusual to have
my pages need to interact with that application a certain way? They're
almost inevitably closely coupled.

This is a fairly sticky issue - if you have run out of local
authentication token, it's impolite to drop data they were submitting.
But on the other hand, there's no particularly good way of *not*
dropping it - you can't really handle information if they're not
authenticated for it. And out of pure defensive network traffic
handling, we do the absolute minimum for people who aren't authenticated
- so they can't consume our system resources, be that
posts-to-sessions-that-don't-exist or what. I can see programming the
client side of the web application to handle this kind of
token-loss-recovery automatically - the client has the form state and
being able to recover the session state is valuable, and entirely
independent from the framework utilized. But I'm not convinced that the
web server should be jumping through hoops/proxies to make it happen.
(not that you have to convince me, I'm trying to present a perspective
that may be novel and generally be helpful in improving your software,
and we may just disagree on the role of the software involved)

David





Re: preserving request body across redirects

On 30.12.2008 18:35:11 by Mark Hedges



Thanks, I really do appreciate your comments.

On Mon, 29 Dec 2008, David Ihnen wrote:
> Yes, I am aware of how OpenID works. And it works in-band
> unless the application explicitly sidelines it - there is
> no inherent side-band communication that the client and
> server will use - otherwise, you wouldn't EVER do a main
> state redirect.

It does? That would be great. How? Why does the consumer
object return a check url? Why does it have the return_to
parameter? From Net::OpenID::Consumer:

# now your app has to send them at their identity server's endpoint
# to get redirected to either a positive assertion that they own
# that identity, or where they need to go to login/setup trust/etc.

my $check_url = $claimed_identity->check_url(
    return_to  => "http://example.com/openid-check.app?yourarg=val",
    trust_root => "http://example.com/",
);

# so you send the user off there, and then they come back to
# openid-check.app, then you see what the identity server said;

Is that module supposed to work some other way than
with redirects?

I thought the point was that they log into the OpenID server
and bounce back to my app. That way they never have to
trust my app with their password or other credentials.

> The moment you have to redirect to that openid server
> page, you have sidelined the entire stream of
> browser-server communication - and as you have found in
> the problem you're trying to solve - the state inherent
> therein, including the content of the original request.
> Is the utilization of the stored form data going to be
> through a different connection/request entirely after
> authentication verification? Would require some tests to
> see if the client behaves that way or not. I suspect it's
> not defined to be one way or the other, but I may be
> wrong.

Not following you there.

> Is the controller framework going to require me to make my
> web system operations depend on some sort of semi-persistent
> pan-server correlated session state system? Would that
> not be requiring me to implement my web application in a
> particular way? Okay, that may indeed be the role of a
> framework though I'd no doubt chafe at the limitations
> myself. If I have to write my web application a certain
> way, is it so unusual to have my pages need to interact
> with that application a certain way? They're almost
> inevitably closely coupled.

That's a good point. But no, it doesn't depend on the
session, you don't have to have a session attached to use
the controller framework. You do have to have a session
attached to use the OpenID layer.

> This is a fairly sticky issue - if you have run out of
> local authentication token, it's impolite to drop data they
> were submitting. But on the other hand, there's no
> particularly good way of *not* dropping it - you can't
> really handle information if they're not authenticated for
> it. And out of pure defensive network traffic handling,
> we do the absolute minimum for people who aren't
> authenticated - so they can't consume our system
> resources, be that posts-to-sessions-that-don't-exist or
> what.

That's true, that's why I think it will not try to preserve
the request body unless they already were authenticated once
and just timed out. I think that's useful.

> I can see programming the client side of the web
> application to handle this kind of token-loss-recovery
> automatically - the client has the form state and being
> able to recover the session state is valuable, and
> entirely independent from the framework utilized. But I'm
> not convinced that the web server should be jumping
> through hoops/proxies to make it happen. (not that you
> have to convince me, I'm trying to present a perspective
> that may be novel and generally be helpful in improving
> your software, and we may just disagree on the role of the
> software involved)

That's probably what DAV clients expect to do, and probably
what an AJAX client would do too. After thinking about it,
it's not clear that my conception of this module would be
useful to an AJAX application or really any other automated
type of code interface -- it would not make sense for an XML
PUT to get redispatched to a GET request for an HTML login
form. I think that in those cases you would have to
configure it with absolute URLs so that redirects to
login/register/openidserver are used, instead of internal
redispatching to login/register. An asynchronous component
would then have to watch for redirects and deal with it.

For those types of cases, it would make more sense to use a
real Authen handler that returned DECLINED if they were not
logged in, something in the style of Apache2::AuthenOpenID.
Incidentally that uses redirects too; I don't see how you
get around "side band" communication with OpenID.

Hrmm, looking at danjou's module I'm not sure if I'm doing
the token checking correctly... but maybe that is
effectively done by keeping the session id current. Hrmm,
if I passed the token as a separate cookie would that be an
extra layer of security to "prove" they owned the session
id? Not sure about this stuff.

Mark

Re: preserving request body across redirects

On 30.12.2008 20:22:13 by David Ihnen


Mark Hedges wrote:
> Thanks I really do appreciate your comments.
>
> On Mon, 29 Dec 2008, David Ihnen wrote:
>
>> Yes, I am aware of how OpenID works. And it works in-band
>> unless the application explicitly sidelines it - there is
>> no inherent side-band communication that the client and
>> server will use - otherwise, you wouldn't EVER do a main
>> state redirect.
>>
>
> It does?
It does work in-band, yes. The main session flow is going to be redirected.
> That would be great. How?
.... to do a sideline authentication? Once the auth state is
established, however, the current pages will work fine - you don't have
to do them in a linear way. Like I said, you can program the client
application to handle the interruption transparently and resubmit the
form that it maintained the status of. Or to pop up a new window to do
the auth on, or any number of variants. I'm sure you understand once
you nav away from a page through a whole-page submit, its state is (well
can be) gone for good. You're in a linear sequence of redirects at that
point and if for some reason you're not authenticated, you're not going
to regain the state.
> Why does the consumer
> object return a check url?
So that the client can use the url to check it. You know this.
> Why does it have the return_to parameter?
So that you can reinsert yourself into the flow of your web
application. Heck, many applications just land you back at the home
page - the return_to being pretty much statically configured. What
happens when you time out in the middle of a session just isn't that
critical to most specifications.
> From Net::OpenID::Consumer:
>
> # now your app has to send them at their identity server's endpoint
> # to get redirected to either a positive assertion that they own
> # that identity, or where they need to go to login/setup trust/etc.
>
> my $check_url = $claimed_identity->check_url(
> return_to => "http://example.com/openid-check.app?yourarg=val",
> trust_root => "http://example.com/",
> );
>
> # so you send the user off there, and then they come back to
> # openid-check.app, then you see what the identity server said;
>
> Is that module supposed to work some other way than
> with redirects?
>
Couldn't say for sure as I haven't closely inspected the module in
question. I don't think that this programming precludes utilization as
a side-band authenticator, though it will dictate the particular
form/method/sequence that your client-side application takes to deal
with this particular matter. That is, if the client maintained the
state of the form that it submitted while you were dealing with the
reauthentication in an iframe, your 'bounce back' url can send
instruction to the browser to resubmit the form, this time with the
session intact.
> I thought the point was that they log into the OpenID server
> and bounce back to my app. That way they never have to
> trust my app with their password or other credentials.
>
Yes! That is the point of OpenID. And most auth systems work this way,
in my experience (at least the ones that involve the likes of authen
handlers rather than application-level programming alone). The fact that
the server signature part is not done locally is merely a detail. The
basic redirect to page -> submit authentication token is how this stuff
generally works.
>> The moment you have to redirect to that openid server
>> page, you have sidelined the entire stream of
>> browser-server communication - and as you have found in
>> the problem you're trying to solve - the state inherent
>> therein, including the content of the original request.
>> Is the utilization of the stored form data going to be
>> through a different connection/request entirely after
>> authentication verification? Would require some tests to
>> see if the client behaves that way or not. I suspect its
>> not defined to be one way or the other, but I may be
>> wrong.
>>
>
> Not following you there.
>
When you redirect a request with a body you lose the complete state of
the original request, as you no longer have that request body. It seems
you want to save this on the server, but that's problematic:

client -> POST <formhandler> -> load balancer -> server 556 dublin ->
    saves <requestbody> -> redirect OPENID <success bounceback url>

client -> OPENID -> verify -> redirect <success bounceback url>

in the meantime server 556 dublin suffered a network connector air gap
issue. A trouble ticket has been created. These things happen.

client -> GET <success bounceback url> -> load balancer -> server 22
    london -> looks up saved requestbody !!!

This is the problem point. Your framework would be depending on the
request body save retrieve functionality to be operational on all
servers that might serve the bounceback url request. Regardless of
whether they're even in the same physical proximity or data realm. They
must somehow share a backstored saved state, or depend on the server
that saved the state being available when it needs it.
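
If you do go server-side, that save/retrieve has to hit a store every
box shares - e.g. with Cache::Memcached (a sketch; server names made up):

use Cache::Memcached ();
my $memd = Cache::Memcached->new({
    servers => [ 'mc1.example.com:11211', 'mc2.example.com:11211' ],
});
$memd->set("saved_body:$session_id", $body, 600);    # keep for 10 minutes

# later, on whichever server fields the bounceback:
my $saved = $memd->get("saved_body:$session_id");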

>> Is the controller framework going to require me to make my
>> web system operations depend on some sort of semi-persistent
>> pan-server correlated session state system? Would that
>> not be requiring me to implement my web application in a
>> particular way? Okay, that may indeed be the role of a
>> framework though I'd no doubt chafe at the limitations
>> myself. If I have to write my web application a certain
>> way, is it so unusual to have my pages need to interact
>> with that application a certain way? They're almost
>> inevitably closely coupled.
>>
>
> That's a good point. But no, it doesn't depend on the
> session, you don't have to have a session attached to use
> the controller framework. You do have to have a session
> attached to use the OpenID layer.
>
I don't quite understand. A session of what sort, exactly? A
back-stored, persistent-across-servers session? A secure ticket-cookie
that tells my application that this client is known-and-authenticated
already?

To my understanding, once the OpenID server has the client post the
signed result that originates from the OpenID provider, they're
authenticated, and beyond tracking that 'this user is an authenticated
user' in *SOME* way (keys in forms, cookies, url path fragments,
what-have-you) there is no need to maintain any concept of a session
beyond that.

Is this something about the framework that requires a backstore? Is
that going to be scalable?
>> This is a fairly sticky issue - if you have run out of
>> local authentication token, it's impolite to drop data they
>> were submitting. But on the other hand, there's no
>> particularly good way of *not* dropping it - you can't
>> really handle information if they're not authenticated for
>> it. And out of pure defensive network traffic handling,
>> we do the absolute minimum for people who aren't
>> authenticated - so they can't consume our system
>> resources, be that posts-to-sessions-that-don't-exist or
>> what.
>>
>
> That's true, that's why I think it will not try to preserve
> the request body unless they already were authenticated once
> and just timed out. I think that's useful.
>
>
Here's a thought. What if you fully handled the POST as it came in - a
short-time reprieve from having to do the redirect - if you already know
they WERE authenticated, just accept their slightly expired ID, handle
the form submit appropriately, and then redirect when you're done. Have
the bounceback go to the proper result page. It amounts to a tri-state
session: 'good', 're-auth', and 'defunct'.
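
Something like (window lengths made up):

my $age   = time() - $session->{last_seen};
my $state = $age <= 15 * 60      ? 'good'
          : $age <= 24 * 60 * 60 ? 're-auth'    # take the request, then redirect
          :                        'defunct';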

I was seriously thinking you were in a situation where you honestly
could not tell - maybe you set the cookie with an expiration date, and
it's gone, it's not being sent, you have no idea who this request is
coming from. It's different if you know. If you do know they are (were)
authenticated, not receiving the request is just being stubborn and
inflexible, isn't it?
>> I can see programming the client side of the web
>> application to handle this kind of token-loss-recovery
>> automatically - the client has the form state and being
>> able to recover the session state is valuable, and
>> entirely independent from the framework utilized. But I'm
>> not convinced that the web server should be jumping
>> through hoops/proxies to make it happen. (not that you
>> have to convince me, I'm trying to present a perspective
>> that may be novel and generally be helpful in improving
>> your software, and we may just disagree on the role of the
>> software involved)
>>
>
> That's probably what DAV clients expect to do, and probably
> what an AJAX client would do too. After thinking about it,
> it's not clear that my conception of this module would be
> useful to an AJAX application or really any other automated
> type of code interface -- it would not make sense for an XML
> PUT to get redispatched to a GET request for an HTML login
> form.
Heh, you have a point there. I'd be more interested in getting an error
response telling me that something had to be done than in getting
willy-nilly redirects that violate the communication protocol
established. If I'm doing some kind of RPC, a redirect to HTML is
definitely unexpected.
> I think that in those cases you would have to
> configure it with absolute URL's so that redirects to
> login/register/openidserver are used, instead of internal
> redispatching to login/register. An asynchronous component
> would then have to watch for redirects and deal with it.
>
Hm. But arguably in the middle of a session you don't have this
problem. The session is active.
> For those types of cases, it would make more sense to use a
> real Authen handler that returned DECLINED if they were not
> logged in, something in the style of Apache2::AuthenOpenID.
>
Sounds reasonable to me. I like the flexibility of using the hooks of
Apache.
> Incidentally that uses redirects too, I don't see how you
> get around "side band" communication with OpenID.
>
You still get to decide what you do when you're not authenticated. Just
because you have an Authz hook defined does not mean you actually
redirect - it's entirely up to you what you do with your handler. Maybe
it just returns an error in the appropriate protocol instead of
redirecting to somewhere else. Though the particular implementation of
Apache2::AuthenOpenID may differ from my concept of flexibility in this
regard. Subclass it? *shrug*.
> Hrmm, looking at danjou's module I'm not sure if I'm doing
> the token checking correctly... but maybe that is
> effectively done by keeping the session id current.
I think you may be on the right track there.
> Hrmm,
> if I passed the token as a separate cookie would that be an
> extra layer of security to "prove" they owned the session
> id? Not sure about this stuff.
>
Prove they own the session ID?

It took me awhile to figure out what you are suggesting.

I assume this arises because you have a session key that, rather than
having the inherent session data in it, is a sequence number that could
be mangled by an end user to try and step into an alternative session
they don't own.

Easy to fix that. Brief recipe. Make the cookie value with a simple
signature. Validate it. It's just shared-secret validation, but it
makes it almost impossible for the end users to mangle your cookies.
And you can change the secret if it's ever compromised. Forgive me if I
typo, this is off the cuff.

use Digest::MD5 qw/md5_hex/;    # note: the exported function is md5_hex

# shared secret; change it if it's ever compromised
sub secret { 'a;slho4hlzdjknv;lxza adih' }

sub signature {
    my $sessionid = shift;
    return md5_hex($sessionid . secret());
}

sub cookie_value {
    my $sessionid = shift;
    return join ':', $sessionid, signature($sessionid);
}

sub get_session_from_cookie_value {
    my $value = shift;
    my ($session_id, $signature) = split /:/, $value;
    return $session_id if $signature eq signature($session_id);
    return 0;    # mangled or forged cookie
}
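
Round trip, with a made-up session id:

my $value = cookie_value('12345');    # "12345:" . md5_hex('12345' . secret)
my $sid   = get_session_from_cookie_value($value);    # "12345", or 0 if mangled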

So as long as you make it reasonably difficult to mangle their cookies,
they have your token. You've got to accept that they are who you
authenticated that token for.

You can certainly require re-authentication periodically to make it that
much more difficult for any particular token to be abused - it's only good
for so long, recording and replaying traffic won't give you tokens that
are valid later. (the cookie validator above could also contain a time
used to detect that state, I know mine do) But regardless of desire to
force re-authentication in a window, this does not force you to reject
the request out of hand - particularly if it passed a valid but expired
cookie, you see?

I once programmed my session cookie system to validate the signed (so
you couldn't mangle it) cookie contents against request metadata - IP
source address, user agent string, etc. It was unworkable - turns out
that user agent changes (when accessing media files particularly) and
people are on ip pools, where diff requests can come from diff ips
within the same session. Yes, it made it almost impossible for people
to snarf tokens and use them illegitimately, but it also made normal
operation frustratingly unreliable. (Instead I ended up tracking the
changes and watched for really odd things for abuse, like one token or
user being used across dozens of ips)

David


--------------010804020903080708060502
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit







Mark Hedges wrote:
cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">

Thanks I really do appreciate your comments.

On Mon, 29 Dec 2008, David Ihnen wrote:


Yes, I am aware of how OpenID works.  And it works in-band
unless the application explicitly sidelines it - there is
no inherent side-band communication that the client and
server will use - otherwise, you wouldn't EVER do a main
state redirect. 



It does?


It does work in-band, yes.  The main session flow is going to be
redirected.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
That would be great.  How? 


.... to do a sideline authentication?   Since once the auth state is
established however, the current pages will work fine - you don't have
to do them in a linear way.  Like I said, you can program the client
application to handle the interruption transparently and resubmit the
form that it maintained the status of.  Or to pop up a new window to do
the auth on, or any number of variants.  I'm sure you understand once
you nav away from a page through a whole-page submit, its state is
(well can be) gone for good.  You're in a linear sequence of redirects
at that point and if for some reason you're not authenticated, you're
not going to regain the state.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
 Why does the consumer
object return a check url?


So that the client can use the url to check it.  You know this.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
Why does it have the return_to parameter? 


So that you can reinsert yourself into the flow of your web
application.  Heck, many applications just land you back at the home
page - the return_to being pretty much statically configured.  What
happens when you time out in the middle of a session just isn't that
critical to most specifications.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
 From Net::OpenID::Consumer:

# now your app has to send them at their identity server's endpoint
# to get redirected to either a positive assertion that they own
# that identity, or where they need to go to login/setup trust/etc.

my $check_url = $claimed_identity->check_url(
return_to => ,
trust_root => ,
);

# so you send the user off there, and then they come back to
# openid-check.app, then you see what the identity server said;

Is that module supposed to work some other way than
with redirects?


Couldn't say for sure as I haven't closely inspected the module in
question.  I don't think that this programming precludes utilization as
a side-band authenticator, though it will dictate the particular
form/method/sequence that your client-side application takes to deal
with this particular matter.  That is, if the client maintained the
state of the form that it submitted while you were dealing with the
reauthentication in an iframe, your 'bounce back' url can send
instruction to the browser to resubmit the form, this time with the
session intact.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
I thought the point was that they log into the OpenID server
and bounce back to my app. That way they never have to
trust my app with their password or other credentials.


Yes!  That is the point of OpenID.  And most auth systems work this
way, in my experience (at least the ones that involve the like of
authen handlers rather than application level programming alone) The
fact that the server signature part is not done locally is merely a
detail.  The basic redirect to page -> submit authentication token
is how this stuff generally works.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">

The moment you have to redirect to that openid server
page, you have sidelined the entire stream of
browser-server communication - and as you have found in
the problem you're trying to solve - the state inherent
therein, including the content of the original request. 
Is the utilization of the stored form data going to be
through a different connection/request entirely after
authentication verification?  Would require some tests to
see if the client behaves that way or not.  I suspect its
not defined to be one way or the other, but I may be
wrong.



Not following you there.


When you redirect a request with a body you lose the complete state of
the original request, as you no longer have that request body.  It
seems you want to save this on the server, but thats problematic



client -> POST <formhandler> -> load balancer -> server
556 dublin -> saves<requestbody> -> redirect OPENID
<success bounceback url>



client->OPENID -> verify -> redirect <success bounceback
url>



in the meantime server 556 dublin suffered a network connector air gap
issue.  A trouble ticket has been created.  These things happen.



client -> GET <success bounceback url> -> load balancer
-> server 22 london -> looks up saved requestbody !!!



This is the problem point.  Your framework would be depending on the
request body save retrieve functionality to be operational on all
servers that might serve the bounceback url request.  Regardless of
whether they're even in the same physical proximity or data realm. 
They must somehow share a backstored saved state, or depend on the
server that saved the state being available when it needs it.



cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">

Is the controller framework going to require me to depend
my web system operations on some sort of semi-persistent
pan-server correlated session state system?  Would that
not be requiring me to implement my web application in a
particular way?  Okay, that may indeed be the role of a
framework though I'd no doubt chafe at the limitations
myself.  If I have to write my web application a certain
way, is it so unusual to have my pages need to interact
with that application a certain way?  They're almost
inevitably closely coupled.



That's a good point. But no, it doesn't depend on the
session, you don't have to have a session attached to use
the controller framework. You do have to have a session
attached to use the OpenID layer.


I don't quite understand.  A session of what sort, exactly?  A
back-stored, persistent-across-servers session?  A secure ticket-cookie
that tells my application that this client is known-and-authenticated
already?

To my understanding, once the client posts the signed result that
originates from the OpenID provider, they're authenticated, and beyond
tracking that 'this user is an authenticated user' in *SOME* way (keys
in forms, cookies, url path fragments, what-have-you) there is no need
to maintain any concept of a session beyond that.

Is this something about the framework that requires a backstore?  Is
that going to be scalable?

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">

This is a fairly sticky issue - if you have run out of
local authentication token, its impolite to drop data they
were submitting.  But on the other hand, there's no
particularly good way of *not* dropping it - you can't
really handle information if they're not authenticated for
it.  And out of pure defensive network traffic handling,
we do the absolute minimum for people who aren't
authenticated - so they can't consume our system
resources, be that posts-to-sessions-that-don't-exist or
what.



That's true, that's why I think it will not try to preserve
the request body unless they already were authenticated once
and just timed out. I think that's useful.



Here's a thought: what if you fully handled the post as it came in - a
short-time reprieve from having to do the redirect?  If you already
know they WERE authenticated, just accept their slightly expired ID,
handle the form submit appropriately, and then redirect when you're
done.  Have the bounceback go to the proper result page.  It amounts to
a tri-state session: 'good', 're-auth', and 'defunct'.

I was seriously thinking you were in a situation where you honestly
could not tell - maybe you set the cookie with an expiration date, and
it's gone, it's not being sent, you have no idea who this request is
coming from.  It's different if you know.  If you do know they are
(were) authenticated, not accepting the request is just being stubborn
and inflexible, isn't it?

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">

I can see programming the client side of the web
application to handle this kind of token-loss-recovery
automatically - the client has the form state and being
able to recover the session state is valuable, and
entirely independent from the framework utilized.  But I'm
not convinced that the web server should be jumping
through hoops/proxies to make it happen.  (not that you
have to convince me, I'm trying to present a perspective
that may be novel and generally be helpful in improving
your software, and we may just disagree on the role of the
software involved)



That's probably what DAV clients expect to do, and probably
what an AJAX client would do too. After thinking about it,
it's not clear that my conception of this module would be
useful to an AJAX application or really any other automated
type of code interface -- it would not make sense for an XML
PUT to get redispatched to a GET request for an HTML login
form.


Heh, you have a point there.  I'd be more interested in getting an
error response telling me that something had to be done than in getting
willy-nilly redirects that violate the established communication
protocol.  If I'm doing some kind of RPC, a redirect to html is
definitely unexpected.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
 I think that in those cases you would have to
configure it with absolute URL's so that redirects to
login/register/openidserver are used, instead of internal
redispatching to login/register. An asynchronous component
would then have to watch for redirects and deal with it.


Hm.  But arguably in the middle of a session you don't have this
problem.  The session is active.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
For those types of cases, it would make more sense to use a
real Authen handler than returned DECLINED if they were not
logged in, something in the style of Apache2::AuthenOpenID.


Sounds reasonable to me.  I like the flexibility of using the hooks of
Apache.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
Incidentally that uses redirects too, I don't see how you
get around "side band" communication with OpenID.


You still get to decide what you do when you're not authenticated.
Just because you have an Authz hook defined does not mean you actually
redirect - it's entirely up to you what you do with your handler.  Maybe
it just returns an error in the appropriate protocol instead of
redirecting to somewhere else.  Though the particular implementation of
Apache2::AuthenOpenID may differ from my concept of flexibility in this
regard.  Subclass it?  *shrug*.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
Hrmm, looking at danjou's module I'm not sure if I'm doing
the token checking correctly... but maybe that is
effectively done by keeping the session id current.


I think you may be on the right track there.

cite="mid:alpine.DEB.1.10.0812300858210.6952@li16-163.member s.linode.com"
type="cite">
 Hrmm,
if I passed the token as a separate cookie would that be an
extra layer of security to "prove" they owned the session
id? Not sure about this stuff.


Prove they own the session ID?



It took me awhile to figure out what you are suggesting.

I assume this arises because you have a session key that, rather than
carrying the inherent session data in it, is a sequence number that
could be mangled by an end user to try and step into an alternative
session they don't own.



Easy to fix that.  Brief recipe: make the cookie value with a simple
signature, and validate it.  It's just shared-secret validation, but it
makes it almost impossible for the end users to mangle your cookies.
And you can change the secret if it's ever compromised.  Forgive me if
I typo, this is off the cuff.



use Digest::MD5 qw(md5_hex);

sub cookie_value {
    my $sessionid = shift;
    return join ':', $sessionid, signature($sessionid);
}

sub secret { 'a;slho4hlzdjknv;lxza adih' }

sub signature {
    my $sessionid = shift;
    return md5_hex($sessionid . secret());
}

sub get_session_from_cookie_value {
    my $value = shift;
    my ($session_id, $signature) = split /:/, $value;
    return $session_id if $signature eq signature($session_id);
    return 0;
}



So as long as you make it reasonably difficult to mangle their cookies,
they have your token.  You've got to accept that they are who you
authenticated that token for.



You can certainly require re-authentication periodically to make it
that much more difficult for any particular token to be abused - it's
only good for so long, so recording and replaying traffic won't give
you tokens that are valid later.  (the cookie validator above could
also contain a time used to detect that state, I know mine do)  But
regardless of the desire to force re-authentication in a window, this
does not force you to reject the request out of hand - particularly if
it passed a valid but expired cookie, you see?
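
Building on the recipe above (reusing its secret()), a hypothetical
timestamped variant that can tell 'valid but expired' apart from
'mangled':

use Digest::MD5 qw(md5_hex);

sub timed_cookie_value {
    my $sessionid = shift;
    my $issued    = time();
    my $payload   = "$sessionid:$issued";
    return join ':', $payload, md5_hex($payload . secret());
}

# returns ($session_id, 'good'|'re-auth') or (undef, 'mangled')
sub check_timed_cookie {
    my ($value, $max_age) = @_;
    my ($sessionid, $issued, $sig) = split /:/, $value;
    return (undef, 'mangled')
        unless defined $sig
            and $sig eq md5_hex("$sessionid:$issued" . secret());
    return ($sessionid, 're-auth') if time() - $issued > $max_age;
    return ($sessionid, 'good');
}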



I once programmed my session cookie system to validate the signed (so
you couldn't mangle it) cookie contents against request metadata - IP
source address, user agent string, etc.  It was unworkable - it turns
out that the user agent changes (when accessing media files
particularly) and people are on ip pools, where different requests can
come from different ips within the same session.  Yes, it made it
almost impossible for people to snarf tokens and use them
illegitimately, but it also made normal operation frustratingly
unreliable.  (Instead I ended up tracking the changes and watched for
really odd things that indicate abuse, like one token or user being
used across dozens of ips)



David







Re: preserving request body across redirects

am 30.12.2008 20:49:21 von Mark Hedges


On Tue, 30 Dec 2008, David Ihnen wrote:
>
> in the meantime server 556 dublin suffered a network
> connector air gap issue.  A trouble ticket has been
> created.  These things happen.
>
> client -> GET <success bounceback url> -> load balancer ->
> server 22 london -> looks up saved requestbody !!!
>
> This is the problem point.  Your framework would be
> depending on the request body save/retrieve functionality
> being operational on all servers that might serve the
> bounceback url request, regardless of whether they're
> even in the same physical proximity or data realm.  They
> must somehow share a backstored saved state, or depend on
> the server that saved the state being available when it
> needs it.

Yes. Normally any server application that depends on
persistent session data would have to depend on consistently
sharing that session data. This authentication method would
require the application to implement persistent shared
session data by hooking up Apache2::Controller::Session
somehow. This uses some variant of Apache::Session.

The man page for "Session" says "tie" is too slow for web
apps? Do I help anything by tying first and giving you a
clone of the hash, then only saving the clone back to the
tied hash after the response phase is done?
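
Concretely, the clone-then-write-back idea would be something like this
rough sketch (assuming Apache::Session::File, a $session_id from the
cookie, and illustrative paths):

use Apache::Session::File ();
use Storable qw(dclone);

my %tied;
tie %tied, 'Apache::Session::File', $session_id, {
    Directory     => '/tmp/sessions',
    LockDirectory => '/tmp/sessionlocks',
};

my $session = dclone(\%tied);   # app works on a plain, fast copy

# ... pre-response and response phases read/write $session ...

%tied = %$session;              # one write-back after the response
untie %tied;                    # (note: last-writer-wins semantics)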

I'm doing this for apps that would require users to register
and associate an openid_url with an account username on the
local app. That requires some sort of server storage
anyway. If any OpenID server will do, and unique users
aren't required to register, other modules are available.

> Here's a thought: what if you fully handled the post as
> it came in - a short-time reprieve from having to do the
> redirect?  If you already know they WERE authenticated,
> just accept their slightly expired ID, handle the form
> submit appropriately, and then redirect when you're done.
> Have the bounceback go to the proper result page.  It
> amounts to a tri-state session: 'good', 're-auth', and
> 'defunct'.

Sort of defeats the purpose of session timeout, or how I
think most people would think about that... but you could
get this behavior by setting an infinite timeout.

> I was seriously thinking you were in a situation where you
> honestly could not tell - maybe you set the cookie with an
> expiration date, and it's gone, it's not being sent, you
> have no idea who this request is coming from.  It's
> different if you know.  If you do know they are (were)
> authenticated, not accepting the request is just being
> stubborn and inflexible, isn't it?

Well yes, but that's the point of saving it while they
re-authenticate ... session timeouts are usually used to
make sure people can't come up to an idle computer and fill
out some form with bogus stuff.

> > PUT to get redispatched to a GET request for an HTML
> > login form.
>
> Heh, you have a point there.  I'd be more interested in
> getting an error response telling me that something had to
> be done than in getting willy-nilly redirects that violate
> the established communication protocol.  If I'm doing some
> kind of RPC, a redirect to html is definitely unexpected.

I guess Apache2::AuthenOpenID could be subclassed to spit
out something other than html in set_custom_response().

Well, the same is true for my implementation, since it
internally redispatches relative uri's or redirects absolute
url's... in that kind of application it can send you to
controller subroutines for 'login', 'register' etc. that
spit out something meaningful to the client.

So I don't think this is a big issue now that I think about
it. Apache2::Controller::Auth::OpenID doesn't require you to
spit out a web form. If you wrote an AJAX application and
used A2C to implement the back-end, then your front end
would get the browser code source from an un-authenticated
url, and would control submission of credentials in whatever
way you wanted - you have control over front and back ends
so you can do it whichever way you want.

> > Hrmm, if I passed the token as a separate cookie would
> > that be an extra layer of security to "prove" they owned
> > the session id? Not sure about this stuff.
>
> Prove they own the session ID?
>
> It took me awhile to figure out what you are suggesting.
>
> I assume this arises because you have a session key that rather than
> having the inherent session data in it, is a sequence number that
> could be mangled by an end user to try and step into an alternative
> session they don't own.
>
> Easy to fix that.  Brief recipe.  Make cookie value with a
> simple signature.  ... So as long as you make it
> reasonably difficult to mangle their cookies, they have
> your token.  You've got to accept that they are who you
> authenticated that token for.

Awesome, thanks for explaining that, thanks for the tips.
I will try to implement that checksum recipe in
Apache2::Controller::Session regardless of whether they're
using OpenID.

Mark


Re: preserving request body across redirects

am 30.12.2008 21:47:21 von David Ihnen


Mark Hedges wrote:
> On Tue, 30 Dec 2008, David Ihnen wrote:
>
>> in the meantime server 556 dublin suffered a network
>> connector air gap issue. A trouble ticket has been
>> created. These things happen.
>>
>> client -> GET -> load balancer ->
>> server 22 london -> looks up saved requestbody !!!
>>
>> This is the problem point. Your framework would be
>> depending on the request body save/retrieve functionality
>> being operational on all servers that might serve the
>> bounceback url request, regardless of whether they're
>> even in the same physical proximity or data realm. They
>> must somehow share a backstored saved state, or depend on
>> the server that saved the state being available when it
>> needs it.
>>
>
> Yes. Normally any server application that depends on
> persistent session data would have to depend on consistently
> sharing that session data.
Yes.  But TIMTOWTDI on how that information is distributed.  In my
opinion any *framework* must not depend on the *application* having
established a persistent backstore of shared session data, so that it
can persist put/posts.  You're *significantly* constraining the
parameters of the implementations utilizing the framework by requiring
this, which I consider to be exactly what frameworks shouldn't do.  We
may disagree.  :)
> The man page for "Session" says "tie" is too slow for web
> apps?
*blank look* Depends on the overhead of your tie I guess. What are you
tying?
> Do I help anything by tying first and giving you a
> clone of the hash, then only saving the clone back to the
> tied hash after the response phase is done?
>
Help as opposed to what? Earlier availability of data, I might
speculate. But not sure what you're talking about exactly.
> I'm doing this for apps that would require users to register
> and associate an openid_url with an account username on the
> local app. That requires some sort of server storage
> anyway. If any OpenID server will do, and unique users
> aren't required to register, other modules are available.
>
It requires server storage to know what usernames are available in this
situation *yes*. But that is merely at authentication time - not during
the lifetime of the session. An auth service is not a session service.

It does not require server session storage inherently to associate a
session cookie with a user - the cookie has plenty of storage to do that
itself.
>> Here's a thought: what if you fully handled the post as
>> it came in - a short-time reprieve from having to do the
>> redirect?  If you already know they WERE authenticated,
>> just accept their slightly expired ID, handle the form
>> submit appropriately, and then redirect when you're done.
>> Have the bounceback go to the proper result page.  It
>> amounts to a tri-state session: 'good', 're-auth', and
>> 'defunct'.
>>
>
> Sort of defeats the purpose of session timeout, or how I
> think most people would think about that... but you could
> get this behavior by setting an infinite timeout.
>
Your session times out when you say it times out.  And it changes
states when you say it changes states.  You can have two time periods -
one being the re-auth request time period, another being the true
expiration time period.  The purpose of a session timeout is to stop
large-time-delta recycling of session data.  The purpose of the re-auth
time period is to nudge your flow into getting a new authentication
token without the interruption of an actual logout.  Neither of these
is an infinite state, and neither would be replicated by an infinite
timeout.
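
To make the two windows concrete, a tiny sketch of that state check
(constants and names are hypothetical):

use constant REAUTH_AFTER  => 60 * 30;       # window 1: nudge re-auth
use constant DEFUNCT_AFTER => 60 * 60 * 12;  # window 2: true expiration

sub session_state {
    my $last_auth_time = shift;
    my $age = time() - $last_auth_time;
    return 'defunct' if $age > DEFUNCT_AFTER;
    return 're-auth' if $age > REAUTH_AFTER;
    return 'good';
}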
>> I was seriously thinking you were in a situation where you
>> honestly could not tell - maybe you set the cookie with an
>> expiration date, and it's gone, it's not being sent, you
>> have no idea who this request is coming from.  It's
>> different if you know.  If you do know they are (were)
>> authenticated, not accepting the request is just being
>> stubborn and inflexible, isn't it?
>>
>
> Well yes, but that's the point of saving it while they
> re-authenticate ... session timeouts are usually used to
> make sure people can't come up to an idle computer and fill
> out some form with bogus stuff.
>
How big a problem is that, exactly? In my estimation, inconsequential.
And is the result of the submit un-revokable? The _application_ can do
something trivial like display a confirmation page when the session
state on submit was in the re-auth state. Not an issue to me
framework-wise because that is application layer logic - the framework
should inform the application of the state, but the framework must not
force the application to have a backstore session state so that it
doesn't randomly drop form submits. (particularly one that the
framework scribbles on!)

And besides, if their auth-was-cached as I recall in the original
supposition, the idle computer would still behave - and reauth - just
like a properly attended one, making this entire point moot.

If it wasn't, the application still doesn't have to actually fully
accept the post, just accept it to the point of asking for authenticated
confirmation. That way the application isn't interrupted and
out-thought by the framework.
>>> Hrmm, if I passed the token as a separate cookie would
>>> that be an extra layer of security to "prove" they owned
>>> the session id? Not sure about this stuff.
>>>
>> Prove they own the session ID?
>>
>> It took me awhile to figure out what you are suggesting.
>>
>> I assume this arises because you have a session key that rather than having the
>> inherent session data in it, is a sequence number that could be mangled by an end
>> user to try and step into an alternative session they don't own.
>>
>> Easy to fix that. Brief recipe. Make cookie value with a
>> simple signature. ... So as long as you make it
>> reasonably difficult to mangle their cookies, they have
>> your token.  You've got to accept that they are who you
>> authenticated that token for.
>>
>
> Awesome, thanks for explaining that, thanks for the tips.
> I will try to implement that checksum recipe in
> Apache2::Controller::Session regardless of whether they're
> using OpenID.
>
Woot!  This is a good thing.  You might also consider using Data::UUID
to create guids instead of an arbitrary number.  They're easier to keep
unique and harder for a user to mangle.
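
For instance (a minimal sketch of Data::UUID's interface):

use Data::UUID;

my $ug = Data::UUID->new;
my $session_id = $ug->create_str;  # e.g. "4162F712-1DD2-11B2-B17E-C09EFE1DC403"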

(er, as disclosure, that basic sign-a-token-with-md5 idea was learned
from the perl cookbook)

I have programmed an entire (admittedly rather static) website that
utilizes this pattern for its session/user state - you can hit any
server in the farm and the session state is in the request, verifiable
as intact by the checksum.  No backstore hits/ties are necessary beyond
logging in.

But when it comes to session states, you can set a whole stack of
cookies to track various states, or mangle up your url paths and
rewrite them in handlers, or whatever suits your situation to keep
state across requests.  A backstore comes with weight and complication;
I seriously believe that a framework should not depend on it.

David



Re: preserving request body across redirects

am 31.12.2008 21:44:28 von Mark Hedges



On Wed, 31 Dec 2008, Foo JH wrote:
> Mark Hedges wrote:
> > http://search.cpan.org/~markle/Apache2-Controller-1.000.001/
> Very interesting. I have a controller which functions in a
> slightly different way, but it's good to see alternative
> approaches. I think yours makes life easier.

Thanks Foo! I've seen the idea implemented a number of
ways, open source and not. Part of my point was proof of
concept that a lot of the abstractions were useless hoops to
jump through -- a lot of times people use references to
other structures when they should subclass... these
references function only to re-map arguments to other
modules, which is ridiculous.

On Tue, 30 Dec 2008, David Ihnen wrote:
> Yes.  But TIMTOWTDI on how that information is
> distributed.  In my opinion any *framework* must not
> depend on the *application* having established a
> persistent backstore of shared session data, so that it
> can persist put/posts.  You're *significantly*
> constraining the parameters of the implementations
> utilizing the framework by requiring this, which I
> consider to be exactly what frameworks shouldn't do.  We
> may disagree.  :)

Oh no, the framework doesn't depend on using a session at
all. Just this particular auth module depends on plugging
in the session beforehand.

> It requires server storage to know what usernames are
> available in this situation *yes*.  But that is merely at
> authentication time - not during the lifetime of the
> session.  An auth service is not a session service.
>
> It does not require server session storage inherently to
> associate a session cookie with a user - the cookie has
> plenty of storage to do that itself.

Hrmm. I'll think about that... maybe the only thing
necessary is that a database handle is available from
Apache2::Controller::DBI::Connector.

> Your session times out when you say it times out.  And it
> changes states when you say it changes states.  You can
> have two time periods - one being the re-auth request time
> period, another being the true expiration time period.  The
> purpose of a session timeout is to stop large-time-delta
> recycling of session data.  The purpose of the re-auth time
> period is to nudge your flow into getting a new
> authentication token without the interruption of an actual
> logout.  Neither of these is an infinite state, and neither
> would be replicated by an infinite timeout.

That could work. Sort of want to avoid a proliferation of
directives... but I guess that's a potential control that
would be less complicated and wouldn't require a persistent
session store on the server. And I can set a flag in notes
that this happened, for the app to deal with if it wants
to, as you said.

Data::UUID's are a good idea for the auth tracking cookie.
For the backend-session plugin right now I'm just using the
_session_id from Apache::Session as the cookie value.

Re: preserving request body across redirects

am 31.12.2008 22:25:32 von David Ihnen


Mark Hedges wrote:
> - a lot of times people use references to
> other structures when they should subclass... these
> references function only to re-map arguments to other
> modules, which is ridiculous.
>
Careful on the should. It can seem extra and possibly confusing but
isn't always. Delegation is a valid pattern that is cleaner than
inheriting at times, particularly when you're mixing in a few different
modules at the same time to do something. If you're merely extending an
existing class, then yes, inheritance is good. Multi-directional
multi-inheritance can get really messy... (as I dug myself out of
recently in development) if you haven't read, at least scan...

http://www.perldesignpatterns.com/?MixIns
http://www.perldesignpatterns.com/?DelegationConcept
http://www.perldesignpatterns.com/?CompositePattern

And forgive me if I'm too talky on the subject. :)
> On Tue, 30 Dec 2008, David Ihnen wrote:
>
>> Yes.  But TIMTOWTDI on how that information is
>> distributed. In my opinion any *framework* must not
>> depend on the *application* having established a
>> persistent backstore of shared session data, so that it
>> can persist put/posts. You're *significantly*
>> constraining the parameters of the implementations
>> utilizing the framework by requiring this, which I
>> consider to be exactly what frameworks shouldn't do. We
>> may disagree. :)
>>
>
> Oh no, the framework doesn't depend on using a session at
> all. Just this particular auth module depends on plugging
> in the session beforehand.
>
So I have to have a persistent backstoring session in order to use the
auth module?
>> It requires server storage to know what usernames are
>> available in this situation *yes*. But that is merely at
>> authentication time - not during the lifetime of the
>> session. An auth service is not a session service.
>>
>> It does not require server session storage inherently to
>> associate a session cookie with a user - the cookie has
>> plenty of storage to do that itself.
>>
>
> Hrmm. I'll think about that... maybe the only thing
> necessary is that a database handle is available from
> Apache2::Controller::DBI::Connector.
>

I'd think 'implementation subclass must implement a method that returns
validity of username' would be more flexible and concise. You could use
apache environment variables and other request state information to
pass extra data to that function based on what <file> or <location> or
<directory> it's in, as a for instance.

That way you wouldn't attempt to corral the user into using a particular
method of acquiring user data. I mean, think of the directive
proliferation that impends from trying to configure all the DBI stuff.
Sure, perhaps your connector subclass defines the connection string and
all that hooey (hopefully THAT stuff isn't in the directive
proliferation...) - but how do you query this database once you have
the connection? Do you define table, column, where conditions, joins?
An arbitrary SQL string (language in language, ack!)? What if I did
things way differently... and had an external service providing the
data that doesn't even require a query as such, but of course I could
layer through DBI if I wanted/had to (SELECT openid_identifier FROM
USERS WHERE username = ? translating into a json-rpc call to my user
store service {id:1, method:'get_users', parameters:[?]}) - something
that really does do the same job as a query, conceptually, but does not
rely on the assumption that we have a database-backed store available
at all?

Example code could show how a dbi connection could be used concisely,
but is that part of the requirements of utilizing the framework?
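
For what it's worth, such a 'method that returns validity of username'
might look like this on the DBI side - a sketch with hypothetical
package and method names; a json-rpc-backed subclass would override the
same method without the framework knowing or caring:

package MyApp::UserStore::DBI;

use strict;
use warnings;

sub new { my ($class, $dbh) = @_; return bless { dbh => $dbh }, $class }

# returns the openid_identifier for a username, or undef if unknown
sub openid_identifier_for {
    my ($self, $username) = @_;
    my ($identifier) = $self->{dbh}->selectrow_array(
        'SELECT openid_identifier FROM users WHERE username = ?',
        undef, $username,
    );
    return $identifier;
}

1;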

David



Re: preserving request body across redirects

am 31.12.2008 22:34:28 von Mark Hedges



> > Oh no, the framework doesn't depend on using a session
> > at all. Just this particular auth module depends on
> > plugging in the session beforehand.
>
> So I have to have a persistent backstoring session in
> order to use the auth module?

Well, for now, yes - but as you recommended, maybe it's not necessary.

> I'd think 'implementation subclass must implement a method
> that returns validity of username' would be more flexible
> and concise.  You could use apache environment variables
> and other request state information to pass extra data to
> that function based on what <file> or <location> or
> <directory> it's in, as a for instance.

Well, it would get $r. Yes, I thought it would be a good
idea to rework it so getting the username is its own method,
that way you could subclass and get it from LDAP or
whatever. Actually in that case, it's not clear that even a
DBI handle is necessary for the top-level implementation,
although if you use the connector handler to get one in your
subclass you could do joins or whatever you need to do.
Then I can also drop the directives for the table name and
field names.
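
Roughly the shape that rework might take - names here are hypothetical,
just to show the subclass hook, not the actual A2C interface:

package My::OpenID::Auth;

# base class: default way of finding the username; gets $r, as above
sub get_username {
    my ($self, $r) = @_;
    return $r->user;   # placeholder default
}

package My::OpenID::Auth::LDAP;
use base qw(My::OpenID::Auth);

sub get_username {
    my ($self, $r) = @_;
    return;   # ... LDAP lookup elided ...
}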

Thanks for the advice!

There may be something wrong with your mail program, or with
alpine's way of reading your mail. It's hard to sort out
where I wrote and where you wrote because everything has
only one level of '> ' quotes.

Mark

a2c controller method names

am 01.01.2009 15:23:26 von Mark Hedges


> > - a lot of times people use references to other
> > structures when they should subclass... these references
> > function only to re-map arguments to other modules,
> > which is ridiculous.
>
> Careful on the should.  It can seem extra and possibly
> confusing but isn't always.  Delegation is a valid pattern
> that is cleaner than inheriting at times, particularly
> when you're mixing in a few different modules at the same
> time to do something.  If you're merely extending an
> existing class, then yes, inheritance is good.
> Multi-directional multi-inheritance can get really
> messy... (as I dug myself out of recently in development)
> if you haven't read, at least scan...

Regarding your comment about inheritance vs. references -
something I hadn't thought much about. A) I need to prefix
all my internal method names with 'a2c_' to stay out of
the controller namespace. B) You can't have any controller
subroutines with the same names as anything in the
Apache2::Request* family, which (slightly) limits your
choice of allowable URL's. But I think it's worth it so
that '$self' is the Apache object. What do people think?

I wonder how easy it would be to add controller subroutine
attributes for which ones are allowed or not, ala Catalyst,
instead of the controller having to provide an
'allowed_methods' method.

Mark

Re: a2c controller method names

am 01.01.2009 22:22:03 von Mark Hedges

On Thu, 1 Jan 2009, Mark Hedges wrote:
>
> Regarding your comment about inheritance vs. references -
> something I hadn't thought much about. A) I need to prefix
> all my internal method names with 'a2c_' to stay out of
> the controller namespace. B) You can't have any controller
> subroutines with the same names as anything in the
> Apache2::Request* family, which (slightly) limits your
> choice of allowable URL's. But I think it's worth it so
> that '$self' is the Apache object. What do people think?

Talking to myself again. I think I can make it work either
way. Apache2::Controller won't use Apache2::Request as a
base, but it will still instantiate the Apache2::Request
object and put it in $self->{r}. Then if you want to use
Apache2::Request as a base in your controller module to
access those methods via $self, you can. Otherwise, don't.

Mark

Re: a2c controller method names

am 02.01.2009 17:47:42 von David Ihnen

Mark Hedges wrote:
> On Thu, 1 Jan 2009, Mark Hedges wrote:
>
>> Regarding your comment about inheritance vs. references -
>> something I hadn't thought much about. A) I need to prefix
>> all my internal method names with 'a2c_' to stay out of
>> the controller namespace. B) You can't have any controller
>> subroutines with the same names as anything in the
>> Apache2::Request* family, which (slightly) limits your
>> choice of allowable URL's. But I think it's worth it so
>> that '$self' is the Apache object. What do people think?
>>
>
> Talking to myself again. I think I can make it work either
> way. Apache2::Controller won't use Apache2::Request as a
> base, but it will still instantiate the Apache2::Request
> object and put it in $self->{r}. Then if you want to use
> Apache2::Request as a base in your controller module to
> access those methods via $self, you can. Otherwise, don't.
>

For a compromise between them you could also do the 'fake delegate',
where your AUTOLOAD subroutine checks whether
$self->{r}->can(($AUTOLOAD =~ /::(.+?)$/)[0]) returns a CODE ref and
delegates the call to that routine (sketched below).

The downside is that you're overlapping namespaces, as you mentioned
before, which has its own complications.
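
For reference, a minimal sketch of that AUTOLOAD (off the cuff, with
the usual AUTOLOAD caveats):

our $AUTOLOAD;

sub AUTOLOAD {
    my $self = shift;
    my ($method) = $AUTOLOAD =~ /::(.+?)$/;
    return if $method eq 'DESTROY';      # never delegate destruction
    if (my $code = $self->{r}->can($method)) {
        return $self->{r}->$code(@_);    # hand off to the request object
    }
    require Carp;
    Carp::croak("no method '$method' via " . ref $self);
}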

I think you're right, it's better to be explicit about choosing the
request object when you want to do a request object method. That works
fine unless you're replicating some sort of interface where the methods
you need directly callable are in two different subclasses, inheritance
doesn't do what you want, and explicit delegation (wrapper methods) is
too much of a PITA.

I would make myself a little method

sub r { shift->{'r'} };

just so I could write

$self->r->...

instead of having to do the hash dereference. Of course, you can
program that sort of thing in AUTOLOAD too. ;)

David

Re: a2c controller method names

am 02.01.2009 17:59:43 von David Ihnen

Mark Hedges wrote:
> I wonder how easy it would be to add controller subroutine
> attributes for which ones are allowed or not, ala Catalyst,
> instead of the controller having to provide an
> 'allowed_methods' method.
>
Sounds like something JSON::RPC::Server::CGI does with :private and
:public modifiers on the subroutine definitions - I think they're
called 'attributes' - might be worth a look to see how they did that.

A thought on this matter particularly.

Though JSON::RPC::Server::CGI was just fine about calling my methods
with its dispatch, it could not manage to instantiate my object first.
In fact, it always called them in static context with itself as the
first parameter, which I found quite limiting.

This simply didn't work for my object structure and the way I wanted to
OOP the program. (I got around it by making a static class that
implemented an allowed_subroutines function that instantiated and
effectively manually delegated calls to the appropriate subroutines,
blowing the nice :public list right into irrelevancy)

It would have been nicer as a dispatching framework if it either A.
would instantiate my object (through a defined interface like
->new($server)) first, THEN call the method, or B. defined a
hook-handle/decline interface not that different from apache's, so that
I could custom-define how to check for availability (rather than the
:public list or ->can), instantiate if relevant, and call subroutines
in my particular object.

I guess I'm saying consider the interface flexibility as you design the
framework - there may be interest in doing something in the realm of
setup before calling the method.

David

Re: a2c controller method names

am 02.01.2009 18:10:10 von Mark Hedges

On Fri, 2 Jan 2009, David Ihnen wrote:

> Mark Hedges wrote:
> > I wonder how easy it would be to add controller
> > subroutine attributes for which ones are allowed or not,
> > ala Catalyst, instead of the controller having to
> > provide an 'allowed_methods' method.
> >
>
> It would have been nicer as a dispatching framework if it
> either A. would instantiate my object (through a defined
> interface like ->new($server)) first THEN call the method
> or B. defined a hook-handle/decline interface not that
> different from apache so that I can custom define how you
> check for availability (rather than the :public list or
> ->can), instantiate if relevant, and call subroutines in
> my particular object.
>
> I guess I'm saying consider the interface flexibility as
> you design the framework - there may be interest in doing
> something in the realm of setup before calling the method.

I see. I guess that was something like my original thought
of making you provide allowed_methods() which returns an
array. It's not much harder to type than :public
everywhere, gives you a central list ("which methods are
allowed?" is easier to answer from looking at the top of the
package), gives you flexibility/dynamism/inheritance, and
the results get cached by Apache2::Controller::Funk in a
lookup hash.
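
As a sketch, a controller under that convention might look like this
(hypothetical package and page names, following the allowed_methods()
convention just described; $self->{r} per the earlier messages):

package MyApp::C::Foo;
use strict;
use warnings;
use base qw(Apache2::Controller);

# the central list: only these subroutines are dispatchable from URLs
sub allowed_methods { qw( default view edit ) }

sub default {
    my ($self) = @_;
    $self->{r}->print("index page\n");
    return;
}

1;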

So I will probably stick with requiring allowed_methods() in
the controller package instead of inventing an elaborate
mechanism to make this "easier." Thanks for your input.

Mark

Re: a2c controller method names

am 02.01.2009 18:23:29 von Mark Hedges

> > On Thu, 1 Jan 2009, Mark Hedges wrote: Talking to myself
> > again. I think I can make it work either way.
> > Apache2::Controller won't use Apache2::Request as a
> > base, but it will still instantiate the Apache2::Request
> > object and put it in $self->{r}. Then if you want to
> > use Apache2::Request as a base in your controller module
> > to access those methods via $self, you can. Otherwise,
> > don't.
>
> For a compromise between them you could also do the 'fake
> delegate' where your AUTOLOAD subroutine checks if
> $self->{r}->can(($AUTOLOAD =~ /::(.+?)$/)[0]) returns a
> CODE ref and delegates the call to that routine.

I don't want to use AUTOLOAD. It adds latency and it
confuses the hell out of me when the method namespace gets
complicated. You're free to use AUTOLOAD in controller
packages though.

> I think you're right, it's better to be explicit about
> choosing the request object when you want to do a request
> object method. That works fine unless you're replicating
> some sort of interface where the methods you need directly
> callable are in two different subclasses, inheritance
> doesn't do what you want, and explicit delegation (wrapper
> methods) is too much of a PITA.

Yeah, this is why I usually spell things out instead of
using SUPER::. In this case I've discovered it is in fact
important to get the inheritance order right. As long as
Apache2::Request is the last base it comes out okay: new()
then creates the A2C handler object instead of trying to
make an Apache2::Request object.
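
That is, roughly (a sketch, not the shipped code; the package name is
hypothetical):

package MyApp::C::Bar;

# order matters: Apache2::Request goes last, so new() resolves to
# the A2C constructor rather than Apache2::Request's
use base qw(Apache2::Controller Apache2::Request);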

> I would make myself a little method
>
> sub r { shift->{'r'} };
>
> just so I could write
>
> $self->r->...
>
> instead of having to do the hash dereference. Of course,
> you can program that sort of thing in AUTOLOAD too. ;)

You could do that in some app co-base that you write to
extend your controllers with convenience methods, if you
choose not to use Apache2::Request as a base to suck its
methods into $self. But if you don't choose to do that,
dereferencing $self->{r} is faster than calling a method
that then does the same thing, so I probably won't provide
the convenience method.

Mark