[torqueusers] Torque Ignores PIDS of MPI processes

Joshua Bernstein jbernstein at penguincomputing.com
Mon Jul 23 05:22:01 MDT 2007

Hello All,

     I'm having an issue where TORQUE, when attempting to cancel an 
MPI job, only kills two of the six processes that are running. For 
example, I have a job script as follows:

#PBS -N cpuhog
#PBS -j oe
/usr/bin/mpirun -np 4 ./cpuhog01 256 0:10:00
echo "DONE"

This results in six processes being started by TORQUE's pbs_mom: two of 
them are "bash", and the other four are the "cpuhog01" processes 
started by mpirun. Now when I attempt to kill or cancel this job, TORQUE 
successfully kills the two "bash" processes but leaves the four 
"cpuhog01" processes running.

I'm aware of mpiexec for starting jobs under TORQUE, but I'm wondering 
how exactly mpiexec is able to tell TORQUE about these extra processes, 
and why TORQUE isn't aware of them on its own. I know there is this TM 
interface. Is it documented anywhere? Why isn't TORQUE able to do this 
on its own?
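As I understand it, a TM-aware launcher asks pbs_mom to spawn each task 
directly, so pbs_mom is the parent of every process and can signal them 
all when the job is deleted. TORQUE's own pbsdsh works this way; the job 
script below is just a hypothetical sketch of my cpuhog example launched 
through it (it obviously needs a real TORQUE allocation to run):

```shell
#PBS -N cpuhog-tm
#PBS -j oe
#PBS -l nodes=4

# pbsdsh ships with TORQUE and spawns one copy of the command per
# allocated processor through the TM interface, so pbs_mom itself
# starts each task and can later kill it on qdel.
pbsdsh ./cpuhog01 256 0:10:00
echo "DONE"
```

If that's the whole trick, then mpiexec presumably does the same thing 
programmatically instead of one-command-per-CPU.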

In my mind, a job is really a shell script that gets started by a 
pbs_mom. If I call mpirun from inside that job, which then forks new 
processes, TORQUE should be able to track which PIDs a particular job 
has spawned by looking at the PPID to PID relationship.
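To make that concrete, here is a rough sketch of the kind of PPID walk 
I mean, in plain shell (the `descendants` function name is mine, not 
anything TORQUE provides):

```shell
#!/bin/sh
# Print every descendant PID of $1 by repeatedly consulting the
# PID/PPID pairs that ps reports for all processes on the system.
descendants() {
    for child in $(ps -e -o pid= -o ppid= | awk -v p="$1" '$2 == p { print $1 }'); do
        echo "$child"
        descendants "$child"   # recurse to pick up grandchildren
    done
}
```

A mom could then signal each listed PID. The catch, I suspect, is that 
the walk breaks as soon as an intermediate parent exits or a process 
daemonizes, since orphans get re-parented to init and the chain is lost.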

I'd really like to avoid having to use mpiexec, as our mpirun is already 
optimized for starting jobs on remote nodes without the use of rsh/ssh; 
instead it takes advantage of BProc. Our version of MPICH uses BProc to 
fork processes on the remote nodes, so it doesn't require rsh or ssh.

Any insight as to how TORQUE becomes aware of which PIDs to kill when a 
job is killed would be helpful, as would any other discussion along 
these lines.

It might be worth mentioning this is Linux x86_64/CentOS 4.5. Thanks!

-Joshua Bernstein
Software Engineer
Penguin Computing
