Ensuring parent/child processes
on 06.09.2007 18:14:15 by jrpfinch
Below is a script that represents a much longer/complicated script
that takes up a large amount of memory. I understand that the
approved way of forking involves the parent process _just_ managing
the forking and having all the actual program-related stuff done by
the children.
Memory constraints mean I would like one parent and one child both to
do 'program-related stuff'. I would like to ensure that if either the
child or the parent die unexpectedly, then the other process dies
too.
I have read perlipc and would like to know what the approved way to do
this is. I am thinking of two solutions: 1. have a die signal handler
kill the other process; 2. use big eval blocks to trap any unexpected
errors and kill the other process.
Which is best? Is there a better way?
Cheers
Jon
#!/opt/perl5.8.8/bin/perl
#--------------------------------------------------------------------------------
# Interpreter settings
#--------------------------------------------------------------------------------
use warnings;
use strict;
$SIG{CHLD}='IGNORE';
my $gPid;
print "Parent process pid=$$\n";
unless (defined ($gPid = fork))
{
die "cannot fork: $!";
}
#--------------------------------------------------------------------------------
# Child process loops for about 55 seconds
# How to ensure this stops if the parent process dies unexpectedly?
#--------------------------------------------------------------------------------
unless ($gPid)
{
print "Inside unless pid=$$, gpid=$gPid\n";
for (0..10)
{
sleep 5;
}
print "About to exit inside unless\n";
# i am the child
exit;
}
#--------------------------------------------------------------------------------
# Parent process loops for about 22 seconds and then dies unexpectedly
#--------------------------------------------------------------------------------
print "After unless gpid=$gPid\n";
for (0..10)
{
sleep 2;
} # i am the parent
print "About to die after unless";
die "dying after unless";
print "About to waitpid after unless\n";
waitpid($gPid, 0);
exit;
__END__
Re: Ensuring parent/child processes
on 06.09.2007 20:02:23 by xhoster
jrpfinch wrote:
> Below is a script that represents a much longer/complicated script
> that takes up a large amount of memory. I understand that the
> approved way of forking involves the parent process _just_ managing
> the forking and having all the actual program-related stuff done by
> the children.
I don't think Perl users usually buy into having a single approved way of
doing things. Or at least I certainly don't. How you use fork depends on
what you are using fork to accomplish.
> Memory constraints mean I would like one parent and one child both to
> do 'program-related stuff'.
There is enough here to pique my interest but not enough to know what to
suggest. Perhaps you can take advantage of Copy-on-write, where parent
and child automatically share the same memory as long as neither writes to
it. Anyway, this kind of sounds like letting the tail wag the dog. Are
you sure it doesn't make more sense to diddle around with the memory
situation rather than with the forking situation? If you can have one
bloated parent and one bloated child, why not one small parent and two
bloated children? The parent could wait on any child, and then kill the
other. Or you could have a slim parent and a bloated child and a bloated
grandchild. If the grandchild dies unexpectedly, the middle one will get a
SIG{CHLD} and can kill itself. If the middle one dies unexpectedly, the
super-parent will get a SIG{CHLD} (or just return from a blocking waitpid)
and can then kill the whole process group.
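A minimal sketch of that slim-parent arrangement, assuming POSIX-style fork/wait semantics; the sleeps stand in for the real "program-related stuff", and child 1 is arranged to finish first just so the demo is deterministic:

```perl
use warnings;
use strict;

my @kids;
for my $n (1, 2) {
    my $pid = fork;
    die "cannot fork: $!" unless defined $pid;
    if ($pid == 0) {      # child: all 'program-related stuff' lives here
        sleep $n;         # stand-in for real work; child 1 finishes first
        exit;
    }
    push @kids, $pid;
}
my $first = wait();                    # blocks until either child exits
my @rest  = grep { $_ != $first } @kids;
kill 'TERM', @rest;                    # take down the survivor
waitpid $_, 0 for @rest;               # and reap it
print "reaped both children\n";
```

The parent here stays tiny: it only forks, waits, and kills, so its own unexpected death is both unlikely and cheap.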
> I would like to ensure that if either the
> child or the parent die unexpectedly, then the other process dies
> too.
>
> I have read perlipc and would like to know what the approved way to do
> this is. I am thinking two solutions - 1. have a die signal handler
> kill the other process 2. use big eval blocks to trap any unexpected
> errors and kill the other process.
That would work if your "unexpected" errors happen only in expected ways.
What if you get blown out of the water by an untrapped or untrappable
signal? Neither signal handlers nor eval blocks can then kill off the
partner. Is that tolerable or not?
If you have to detect even truly unexpected deaths, then one way would
be to open a pipe between parent and child, and never close or write to the
pipe. If the pipe becomes readable (as detected by select or IO::Select)
then the other party must have gone away. Alas this requires polling on
the part of the survivor.
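A small sketch of the pipe trick; the half-second poll interval and the $peer_gone flag are demo choices, not requirements:

```perl
use warnings;
use strict;
use IO::Select;

pipe(my $read, my $write) or die "pipe: $!";
my $pid = fork;
die "cannot fork: $!" unless defined $pid;
if ($pid == 0) {       # child: holds the write end open, never writes to it
    close $read;
    exit;              # child "dies"; the kernel closes its write end
}
close $write;          # parent keeps only the read end
my $sel = IO::Select->new($read);
my $peer_gone = 0;
while (1) {
    if ($sel->can_read(0.5)) {   # readable here can only mean EOF
        $peer_gone = 1;
        print "peer exited\n";
        last;
    }
    # ... survivor's own work between polls ...
}
waitpid $pid, 0;
```

Because neither side ever writes, the read end becoming ready is unambiguous: it is EOF, which the kernel delivers no matter how the peer died.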
Xho
--
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service $9.95/Month 30GB
Re: Ensuring parent/child processes
on 06.09.2007 23:45:52 by Charles DeRykus
On Sep 6, 9:14 am, jrpfinch wrote:
> Below is a script that represents a much longer/complicated script
> that takes up a large amount of memory. I understand that the
> approved way of forking involves the parent process _just_ managing
> the forking and having all the actual program-related stuff done by
> the children.
>
> Memory constraints mean I would like one parent and one child both to
> do 'program-related stuff'. I would like to ensure that if either the
> child or the parent die unexpectedly, then the other process dies
> too.
>
> I have read perlipc and would like to know what the approved way to do
> this is. I am thinking two solutions - 1. have a die signal handler
> kill the other process 2. use big eval blocks to trap any unexpected
> errors and kill the other process.
>
> Which is best? Is there a better way?
>
> Cheers
>
> Jon
>
> #!/opt/perl5.8.8/bin/perl
> #--------------------------------------------------------------------------------
> # Interpreter settings
> #--------------------------------------------------------------------------------
> use warnings;
> use strict;
> $SIG{CHLD}='IGNORE';
> my $gPid;
> print "Parent process pid=$$\n";
> unless (defined ($gPid = fork))
> {
> die "cannot fork: $!";}
>
> #--------------------------------------------------------------------------------
> # Child process loops for 50 seconds
> # How to ensure this stops if the parent process dies unexpectedly?
> #--------------------------------------------------------------------------------
> unless ($gPid)
> {
> print "Inside unless pid=$$, gpid=$gPid\n";
> for (0..10)
> {
> sleep 5;
> }
> print "About to exit inside unless\n";
> # i am the child
> exit;
>
> }
>
> #--------------------------------------------------------------------------------
> # Parent process loops for 20 seconds and then dies unexpectedly
> #--------------------------------------------------------------------------------
> print "After unless gpid=$gPid\n";
> for (0..10)
> {
> sleep 2;} # i am the parent
>
> print "About to die after unless";
> die "dying after unless";
>
> print "About to waitpid after unless\n";
> waitpid($gPid, 0);
> exit;
>
> __END__
Note: I'm not sure why you waitpid with SIGCHLD set to 'IGNORE'...?
But, to get synchronized deaths, the parent/child pair could
cross signal one another via an END block (which simulates an
atexit() call). Exiting this way may short-circuit cleanup or
other processing though so you may need something more elaborate
than just an 'exit' in a USR1 signal handler.
[ Untested ]
use warnings;
use strict;
use sigtrap qw(die normal-signals);
use POSIX qw(_exit);
$SIG{CHLD}='IGNORE';
$SIG{USR1} = sub { POSIX::_exit(1); }; # assumes USR1 unused
my $parent_pid = $$;
my $child_pid = fork;
die "cannot fork: $!" unless defined $child_pid;
# ... program-related work in parent and child ...
END { kill 'USR1', $$ == $parent_pid ? $child_pid : $parent_pid; }
__END__
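A runnable adaptation of that fragment, assuming USR1 is otherwise unused; to keep the demo observable, the USR1 handler here just records the signal, where real code would call POSIX::_exit so the survivor dies immediately without re-running its own END block:

```perl
use warnings;
use strict;
use POSIX qw(_exit);
$| = 1;   # autoflush, since _exit-style shutdowns skip stdio flushing

my $peer_died = 0;
# Real code: sub { POSIX::_exit(1) }. Recording a flag instead lets
# this demo report what happened before the parent goes away.
$SIG{USR1} = sub { $peer_died = 1 };

my $parent_pid = $$;
my $child_pid  = fork;
die "cannot fork: $!" unless defined $child_pid;

if ($child_pid == 0) {
    exit;    # child dies "unexpectedly"; its END block fires below
}
sleep 3;     # parent's work; cut short (or preceded) by the child's USR1
print $peer_died ? "notified of peer exit\n" : "missed the signal\n";

# Whichever process exits signals its partner on the way out.
END { kill 'USR1', $$ == $parent_pid ? $child_pid : $parent_pid
          if defined $child_pid }
```

Note the END block runs on die and on normal exit, but not after an untrappable signal such as KILL, which is the hole Xho pointed out.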
--
Charles DeRykus
Re: Ensuring parent/child processes
on 07.09.2007 10:27:12 by jrpfinch
On 6 Sep, 19:02, xhos...@gmail.com wrote:
[snip]
> If you can have one
> bloated parent and one bloated child, why not one small parent and two
> bloated children? The parent could wait on any child, and then kill the
> other. Or you could have a slim parent and a bloated child and a bloated
> grandchild. If the grandchild dies unexpectedly, the middle one will get a
> SIG{CHLD} and can kill itself. If the middle one dies unexpected, the
> super-parent will get a SIG{CHLD} (or just return from a blocking waitpid)
> and can then kill the whole process group.
>
These sound like good solutions - however, the processes are bloated
because they both use many external modules. Is there any way to have
the external modules only loaded by the children? And have each child
only load the modules they need?
[snip]
> > I have read perlipc and would like to know what the approved way to do
> > this is. I am thinking two solutions - 1. have a die signal handler
> > kill the other process 2. use big eval blocks to trap any unexpected
> > errors and kill the other process.
>
> That would work if your "unexpected" errors happen only in expected ways.
> What if you get blown out of the water by an untrapped or untrappable
> signal? Neither signal handlers nor eval blocks can then kill off the
> partner. Is that tolerable or not?
Not really. I would like to ensure that they both die together in all
possible circumstances (or at least as many as possible).
> If you have to detect even truly unexpected deaths, then one way would
> be to open a pipe between parent and child, and never close or write to the
> pipe. If the pipe becomes readable (as detected by select or IO::Select)
> then the other party must have gone away. Alas this requires polling on
> the part of the survivor.
I will do this if I can't get the slim grandparent/parent solution
working.
Many thanks
Jon
Re: Ensuring parent/child processes
on 07.09.2007 11:58:46 by jrpfinch
On 7 Sep, 09:27, jrpfinch wrote:
> On 6 Sep, 19:02, xhos...@gmail.com wrote:
> [snip]
>
> > If you can have one
> > bloated parent and one bloated child, why not one small parent and two
> > bloated children? The parent could wait on any child, and then kill the
> > other. Or you could have a slim parent and a bloated child and a bloated
> > grandchild. If the grandchild dies unexpectedly, the middle one will get a
> > SIG{CHLD} and can kill itself. If the middle one dies unexpected, the
> > super-parent will get a SIG{CHLD} (or just return from a blocking waitpid)
> > and can then kill the whole process group.
>
> These sound like good solutions - however, the processes are bloated
> because they both use many external modules. Is there any way to have
> the external modules only loaded by the children? And have each child
> only load the modules they need?
>
> [snip]
With regard to this solution - I think I can do it if I use the dynamic
loading method detailed in this post:
http://groups.google.co.uk/group/comp.lang.perl.misc/browse_thread/thread/94ccc975cc704df3/3734dbe392990614?lnk=gst&q=dynamic+load+modules&rnum=11#3734dbe392990614
Dynamically loading modules in the children seems to considerably
reduce the memory used by the parent.
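A sketch of that dynamic-loading method, using Data::Dumper as a stand-in for the heavy modules named in the thread; the parens on the Dumper call are needed because the sub does not exist at compile time:

```perl
use warnings;
use strict;

my $pid = fork;
die "cannot fork: $!" unless defined $pid;
if ($pid == 0) {
    # Child pulls in its heavy modules at runtime, after the fork,
    # so the parent's image never grows to include them.
    my @packages = ('Data::Dumper');
    for my $package (@packages) {
        (my $path = $package) =~ s{::}{/}g;   # require wants a file path
        require "$path.pm";
        $package->import;
    }
    print Dumper([1, 2]);   # parens required: sub unknown at compile time
    exit;
}
waitpid $pid, 0;
# The parent never required the module, so its %INC (and memory) stay slim.
print exists $INC{'Data/Dumper.pm'} ? "parent loaded it\n"
                                    : "parent stayed slim\n";
```

Checking %INC in the parent after the child exits is a cheap way to confirm the load really happened only on the child's side of the fork.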
Re: Ensuring parent/child processes
on 07.09.2007 13:22:32 by jrpfinch
How does this look? The only thing I am worried about is that another
external process happens to inherit the pid of one of the processes I
am about to kill (i.e. it has died already). Is there any way of
making sure the kill -9 is killing a child process and not killing
some other process? (The likelihood is obviously tiny, but you never
know!)
#!/opt/perl5.8.8/bin/perl
#--------------------------------------------------------------------------------
# Interpreter settings
#--------------------------------------------------------------------------------
use warnings;
use strict;
my $childOnePid;
my $childTwoPid;
my $diePid;
print "Parent process pid=$$\n";
unless (defined ($childOnePid = fork))
{
die "cannot fork: $!";
}
#--------------------------------------------------------------------------------
# Child one loads its modules, then loops for about 11 seconds
# How to ensure this stops if the parent process dies unexpectedly?
#--------------------------------------------------------------------------------
unless ($childOnePid)
{
print "Inside childOne pid=$$\n";
my @packages = ("Schedule::Cron",
"SOAP::Lite",
"Log::Log4perl",
"MetaMon::MetaMonConfigLoader",
"MetaMon::PhaseOne",
"MetaMon::GetEnvVar",
"Data::Dumper");
my $package;
eval {
require "PAR.pm";
import PAR q(/opt/perl5.8.8/lib/site_perl/*.par);
for $package (@packages)
{
(my $pkg = $package) =~ s|::|/|g; # require needs a path
require "$pkg.pm";
import $package;
}
};
die $@ if( $@ );
for (0..10)
{
sleep 1;
}
print "About to exit childOne\n";
exit;
}
unless (defined ($childTwoPid = fork))
{
die "cannot fork: $!";
}
#--------------------------------------------------------------------------------
# Child two loads its modules, then loops for about 13 seconds
# How to ensure this stops if the parent process dies unexpectedly?
#--------------------------------------------------------------------------------
unless ($childTwoPid)
{
print "Inside childTwo pid=$$\n";
my @packages = ("Schedule::Cron",
"SOAP::Lite",
"Log::Log4perl",
"Data::Dumper");
my $package;
eval {
for $package (@packages)
{
(my $pkg = $package) =~ s|::|/|g; # require needs a path
require "$pkg.pm";
import $package;
}
};
die $@ if( $@ );
for (0..12)
{
sleep 1;
}
print "About to exit childTwo\n";
exit;
}
sleep 10;
$diePid = wait();
if ($diePid == $childOnePid)
{
print "Killing child two pid\n";
kill 9, $childTwoPid;
}
elsif ($diePid == $childTwoPid)
{
print "Killing child one pid\n";
kill 9, $childOnePid;
}
elsif ($diePid == -1)
{
#both already dead
sleep 0;
}
else
{
print "Should never get here\n";
}
print "finished\n";
__END__
Re: Ensuring parent/child processes
on 07.09.2007 20:34:33 by xhoster
jrpfinch wrote:
> How does this look? The only thing I am worried about is if another
> external process happens to inherit the pid of one of the processes I
> am about to kill (i.e. it has died already). Is there any way of
> making sure the kill -9 is killing a child process and not killing
> some other process (this is obviously tiny likelihood but you never
> know!)
You could have the parent kill itself with a negative signal:
kill -15, $$;
This should kill all of its children and grandchildren (as long as they
haven't called POSIX::setsid) without having to worry about the specific
pids of the children. But this may be OS dependent. It works for me on
linux. Also, I wouldn't use 9 to kill something unless there is a good
reason to. 15 is usually good enough.
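A sketch of the group-signal approach; putting the pair into a fresh process group first (via setpgrp) is an added precaution so the TERM cannot reach whatever shell launched the script, and as noted above the behavior may be OS dependent:

```perl
use warnings;
use strict;

setpgrp(0, 0);             # fresh process group: just us and our children
my $pid = fork;
die "cannot fork: $!" unless defined $pid;
if ($pid == 0) { sleep 60; exit }   # child would otherwise idle a minute
$SIG{TERM} = 'IGNORE';     # parent shields itself from its own group signal
kill -15, $$;              # negative signal: TERM the whole process group
waitpid $pid, 0;           # child dies from the TERM; reap it
print "child gone after group TERM\n";
```

Because the signal targets the group rather than a stored pid, there is no window in which a recycled pid belonging to some unrelated process could be hit.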
> eval {
> require "PAR.pm";
> import PAR q(/opt/perl5.8.8/lib/site_perl/*.par);
> for $package (@packages)
> {
> (my $pkg = $package) =~ s|::|/|g; # require need a path
> require "$pkg.pm";
> import $package;
> }
>
> };
It looks to me (again, on my system) like most of the bulk of PAR.pm is due
to the shared libraries it loads. This data is shared among all processes,
so there really isn't much to be gained by reducing the number of processes
that load PAR simultaneously.
Xho
Re: Ensuring parent/child processes
am 08.09.2007 00:16:28 von Anno Siegel
On 2007-09-07 10:27:12 +0200, jrpfinch said:
> On 6 Sep, 19:02, xhos...@gmail.com wrote:
> [snip]
>> If you can have one
>> bloated parent and one bloated child, why not one small parent and two
>> bloated children? The parent could wait on any child, and then kill the
>> other. Or you could have a slim parent and a bloated child and a bloated
>> grandchild. If the grandchild dies unexpectedly, the middle one will get a
>> SIG{CHLD} and can kill itself. If the middle one dies unexpected, the
>> super-parent will get a SIG{CHLD} (or just return from a blocking waitpid)
>> and can then kill the whole process group.
>>
>
> These sound like good solutions - however, the processes are bloated
> because they both use many external modules. Is there any way to have
> the external modules only loaded by the children? And have each child
> only load the modules they need?
Sure. You can require() the modules after the kids are running, doing
necessary calls to ->import explicitly. You may have to add some parens
to calls of imported functions.
If that doesn't work out (some modules must be use()d, require() won't do),
you can create independent script(s) for the kids and exec them.
[...]
> Not really. I would like to ensure that they both die together in all
> possible circumstances (or at least as many as possible).
That essentially calls for an independent (small, third) supervisor process,
which could be the common parent. After forking, it could simply wait for
any SIGCHLD, kill both kids for good measure and quit.
Anno
Re: Ensuring parent/child processes
on 10.09.2007 11:06:38 by jrpfinch
Thanks very much for everyone's help. My final solution was based on
the script in my previous message.
I replaced the kill 9s with kill -15, $$, and took the require/import
PAR out of the eval block and put it at the top of the script.
Jon