[torqueusers] Re: SIGTERM and pbsdsh [SOLVED]

Tim Freeman tfreeman at mcs.anl.gov
Fri Jan 4 08:27:07 MST 2008

On Tue, 27 Nov 2007 09:52:36 -0600
Tim Freeman <tfreeman at mcs.anl.gov> wrote:

> I am starting the same executable on N nodes using pbsdsh -n.  During a qdel,
> SIGTERM signals do not appear to propagate to each process; from the
> initial looks of it, only a SIGKILL arrives (there's a SIGTERM handler in
> the executable that is not getting invoked).
> The application I'm running greatly benefits from getting to run a cleanup
> routine if cancelled.  Is there an option to pbsdsh or some technique to use
> where I can make this happen? 
> Thanks,
> Tim

I finally ran more tests, and I am happy to report that this is not a problem
with Torque; it was more a case of unexpected behavior (possibly a bug,
depending on the developer's intent).  Sorry to have spent anyone's time
without all of the facts.

The issue was that the logger in the application was dying when it received
SIGTERM and tried to report this event :-\  So, as I said originally, it
looked like a SIGKILL was stopping the application, but that was only because
my view into the program at the time was via the logger.

The reason the processes die is that (only when run under pbsdsh) the stdout
pipe is broken at the moment the signal arrives.
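For illustration (this is a sketch, not the actual logger from the
application), the broken pipe can be reproduced and handled like this in
Python:

```python
import os

recovered = False

# Simulate the reader side of the stdout pipe (pbsdsh's end) going away.
# Python ignores SIGPIPE by default, so the failed write surfaces as a
# BrokenPipeError that a logger can catch; a process that leaves SIGPIPE
# at its default disposition is killed outright at this point instead.
r, w = os.pipe()
os.close(r)                      # the reading end disappears
try:
    os.write(w, b"log line\n")   # what logging to stdout amounts to
except BrokenPipeError:
    recovered = True             # fall back to file-only logging here
finally:
    os.close(w)
```

A logger that catches the failed write (or routes output to a local file in
the first place) survives the signal and can keep reporting.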

I had logging going to stdout, and the logger could not handle the broken
pipe (it does now; it can also log to files only).  I discovered this after
running a bare-bones signal-testing script and retrieving the output files
from the nodes after a test (see below).

So there is no issue to report on either Torque 2.1.8 or 2.2.1, save the
broken pipe that occurs when using pbsdsh.  That broken pipe was unexpected
to me, but it could be part of the design for all I know.

Below is a report, for the record, on the behavior of SIGTERM under pbsdsh.
One thing that may affect some people's applications is that the process on
the same node as pbsdsh receives a SIGTERM first.  Five seconds later all of
the processes (including that one) receive SIGTERM, and then the SIGKILL
comes to all of them.
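Given that sequence, an application that wants its cleanup routine to run on
qdel must finish the cleanup inside the SIGTERM-to-SIGKILL window (about 5
seconds in these runs; in Torque that interval should be configurable via the
kill_delay attribute, if I remember the knob correctly).  A minimal Python
sketch (hypothetical, not from the application described here):

```python
import signal
import sys

cleanup_done = []                # stand-in for the real cleanup work

def cleanup_and_exit(signum, frame):
    # Run the cleanup routine, then exit voluntarily.  Everything here
    # must complete within the SIGTERM-to-SIGKILL window, because
    # SIGKILL cannot be caught or delayed.
    cleanup_done.append(True)
    sys.exit(0)

signal.signal(signal.SIGTERM, cleanup_and_exit)
```

On qdel, the first SIGTERM triggers the handler; exiting before the SIGKILL
arrives is what makes the cleanup stick.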

Test setup:

  - Processes are launched via qsub and "pbsdsh -u" and signals are recorded to
    each local filesystem.  

  - qdel <job handle> is run at some point after the jobs start.

  - After any signal is caught, each process idles again, writing a timestamp
    every half second.  So the timestamps run out when SIGKILL is sent.
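The bare-bones test script itself isn't reproduced here; a minimal Python
sketch of the setup just described (illustrative only, with an in-memory list
standing in for the per-node output files) could look like:

```python
import signal
import time

stamps = []                      # the real script wrote these to a file on
                                 # each node's local filesystem

def record(signum, frame):
    # Note the signal but keep running, so a later SIGTERM (and the point
    # where SIGKILL cuts the timeline off) remains observable.
    stamps.append((time.time(), signal.Signals(signum).name))

signal.signal(signal.SIGTERM, record)

# Idle, stamping every half second; under SIGKILL the stamps simply stop.
for _ in range(3):               # the real script loops until killed
    stamps.append((time.time(), "alive"))
    time.sleep(0.5)
```

Retrieving the recorded timelines from each node afterwards shows exactly
which signals arrived, and when the stamps stop.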

Example with four nodes, one process per node.  

Sample command:

echo "pbsdsh -vu <path-to-executable>" | qsub -j oe -r n -m n -l nodes=4:ppn=1 -l walltime=00:05:00 -o testout

Let node01 == node where pbsdsh process is running, as well as one instance of
the signal catching process.

Let node02,node03,node04 == nodes where just the signal catching process runs.

The process at node01 gets two SIGTERMs; the other three get one, arriving at
the moment the process at node01 gets its second.

The delay between the first and second SIGTERM received by the process at
node01 is 5 seconds.

The delay from then until the process at node01 dies is the same (another 5
seconds).  The same delay applies to the processes on {node02,node03,node04},
which receive only one SIGTERM (when the process at node01 receives its
second).

So the timeline looks something like this: 5 columns, S == 'seconds', and the
other four columns are one per node.

 S   node01      node02      node03      node04
 = =========== =========== =========== ===========
 0 | pbsdsh  |
   | program |                                      <-- SIGTERM
 5 | program | | program | | program | | program |  <-- SIGTERM
10 | program | | program | | program | | program |  <-- SIGKILL

The pbsdsh -v option shows this:

pbsdsh: spawned task 0
pbsdsh: spawned task 1
pbsdsh: spawned task 2
pbsdsh: spawned task 3
pbsdsh: spawn event returned: 0 (4 spawns and 0 obits outstanding)
pbsdsh: sending obit for task 2
pbsdsh: spawn event returned: 1 (3 spawns and 1 obits outstanding)
pbsdsh: sending obit for task 3
pbsdsh: spawn event returned: 3 (2 spawns and 2 obits outstanding)
pbsdsh: sending obit for task 5
pbsdsh: spawn event returned: 2 (1 spawns and 3 obits outstanding)
pbsdsh: sending obit for task 4
