how to close a stalled file descriptor?

On 29.10.2007 10:52:56 by eponymousalias

I'm having trouble closing a file descriptor on a stalled named pipe.
To unblock myself if the write takes too long because the pipe is full
and there is no reader on the other end of the pipe, I put in place
a signal handler. But if the signal handler is invoked and I regain
control, on any subsequent attempt to close the file descriptor,
it stalls again! How do I force the file descriptor to close so I
don't have an infinite buildup of file descriptors in my process,
if I want to continue operating? Even a simple "die" or "exit"
won't work because it tries to close the file descriptor and hangs!
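
A stripped-down sketch of the pattern in question (the FIFO path,
timeout, and payload are placeholders, not the actual code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $fifo    = '/tmp/my.fifo';          # placeholder path
    my $timeout = 10;                      # seconds before giving up
    my $payload = "x" x (1024 * 1024);     # larger than the pipe buffer

    # default PerlIO layers, i.e. buffered
    open(my $pipe, '>', $fifo) or die "cannot open $fifo: $!";

    my $ok = eval {
        local $SIG{ALRM} = sub { die "write timed out\n" };
        alarm($timeout);
        print {$pipe} $payload;            # blocks once the pipe is full
        alarm(0);
        1;
    };
    alarm(0);                              # make sure the alarm is off

    unless ($ok) {
        warn "gave up on the write: $@";
        close($pipe);                      # and this is where it hangs again
    }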

Re: how to close a stalled file descriptor?

On 30.10.2007 00:03:26 by Ben Morrow

Quoth Glenn:
> I'm having trouble closing a file descriptor on a stalled named pipe.
> To unblock myself if the write takes too long because the pipe is full
> and there is no reader on the other end of the pipe, I put in place
> a signal handler.

Which OS are you using? Under most Unices, writing to a full pipe with
no readers will first raise SIGPIPE, and if that signal is handled or
ignored, the write will fail with EPIPE.
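
For illustration, a small self-contained sketch with an anonymous pipe
(not your named-pipe setup, but it shows the EPIPE behaviour):

    use strict;
    use warnings;
    use Errno qw(EPIPE);

    my ($rd, $wr);
    pipe($rd, $wr) or die "pipe: $!";
    close($rd);                        # no reader left on the other end

    $SIG{PIPE} = 'IGNORE';             # handled/ignored => the write returns
    my $n = syswrite($wr, "x" x 1024);
    if (!defined $n && $! == EPIPE) {
        print "write failed with EPIPE, as expected\n";
    }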

> But if the signal handler is invoked and I regain
> control, on any subsequent attempt to close the file descriptor,
> it stalls again!

When you say 'file descriptor' do you mean 'Perl file handle'? Which
perl version are you using? Are you using PerlIO? It's possible that you
are closing a Perl FH, which is attempting to flush its buffers and
handling the error badly. If this is the case, then you may be able to
ignore (and lose) the buffer by pushing a :unix layer before you close.
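
Something along these lines (an untested sketch; $fh stands for the
wedged handle, and whether the pending data is really discarded depends
on the layers already on it):

    # push a raw :unix layer on top of whatever is already there
    binmode($fh, ':unix') or warn "binmode: $!";

    # for debugging, inspect the handle's output layer stack
    print STDERR join(' ', PerlIO::get_layers($fh, output => 1)), "\n";

    close($fh);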

Ben

Re: how to close a stalled file descriptor?

On 30.10.2007 13:35:05 by eponymousalias

On Oct 29, 3:03 pm, Ben Morrow wrote:
> Quoth Glenn:
>
> > I'm having trouble closing a file descriptor on a stalled named pipe.
> > To unblock myself if the write takes too long because the pipe is full
> > and there is no reader on the other end of the pipe, I put in place
> > a signal handler.
>
> Which OS are you using? Under most Unices, writing to a full pipe with
> no readers will first raise SIGPIPE, and if that signal is handled or
> ignored, the write will fail with EPIPE.

I'm testing this on Red Hat Linux 4. But bear in mind that named pipes
differ from anonymous pipes in one important respect: a named pipe can
be written to before any reader has attached to it. In that case, no
SIGPIPE or EPIPE will arise; those error conditions only occur if an
attached reader closes the pipe. Without such a reader in the first
place there's no such feedback, and all I have is my own SIGALRM
timeout to tell me that the data is not being read.
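
For what it's worth, a non-blocking open at least tells me up front
whether any reader has the FIFO open at all; it fails with ENXIO if
not. A sketch (the path is a placeholder):

    use strict;
    use warnings;
    use Fcntl qw(O_WRONLY O_NONBLOCK);
    use Errno qw(ENXIO);

    my $fifo = '/tmp/my.fifo';    # placeholder path

    if (sysopen(my $fh, $fifo, O_WRONLY | O_NONBLOCK)) {
        # at least one reader currently has the FIFO open
        close($fh);
    }
    elsif ($! == ENXIO) {
        warn "no reader attached to $fifo yet\n";
    }
    else {
        die "sysopen $fifo: $!";
    }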

> > But if the signal handler is invoked and I regain
> > control, on any subsequent attempt to close the file descriptor,
> > it stalls again!
>
> When you say 'file descriptor' do you mean 'Perl file handle'? Which
> perl version are you using? Are you using PerlIO? It's possible that you
> are closing a Perl FH, which is attempting to flush its buffers and
> handling the error badly. If this is the case, then you may be able to
> ignore (and lose) the buffer by pushing a :unix layer before you close.

Yes, I mean 'Perl file handle'. I'm running v5.8.8. I didn't know
about PerlIO, so I'm not explicitly using it, but "perldoc open" says
it's now the default IO system.

Yes, it seems like the Perl FH, in trying to flush its buffers, is
continuing the write operation (a large write to a small pipe) and
therefore hanging (there's no error condition to be handled).

I hadn't known about these I/O disciplines. Opening the pipe in the
first place with an explicit :unix discipline (and thereby not stacking
whatever other buffering would normally be induced on the I/O stream)
has cured the problem. Thanks!!
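
A stripped-down sketch of the kind of thing that works now (path,
timeout, and payload are again placeholders, not the actual code):

    use strict;
    use warnings;

    my $fifo    = '/tmp/my.fifo';          # placeholder path
    my $timeout = 10;
    my $payload = "x" x (1024 * 1024);     # larger than the pipe buffer

    # open with the raw :unix discipline, so nothing is buffered in PerlIO
    open(my $pipe, '>:unix', $fifo) or die "cannot open $fifo: $!";

    my $ok = eval {
        local $SIG{ALRM} = sub { die "write timed out\n" };
        alarm($timeout);
        print {$pipe} $payload;            # goes straight to write(2)
        alarm(0);
        1;
    };
    alarm(0);

    warn "gave up on the write: $@" unless $ok;

    # with no PerlIO buffer left to flush, this no longer hangs
    close($pipe) or warn "close: $!";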