[torqueusers] torque not listening to ppn request specs

DuChene, StevenX A stevenx.a.duchene at intel.com
Tue Oct 25 18:10:13 MDT 2011

Hello all:
I have torque 2.5.7 and maui 3.2.6p21 installed on a couple of small clusters, and I am submitting an MPI job with:

qsub -l nodes=12:mynode:ppn=1 script_noarch.pbs

The script is very simple; it contains a single line invoking mpirun:

mpirun --machinefile $PBS_NODEFILE /home/myuser/mpi_test/mpi_hello_hostname

The actual source to this is also very simple:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, len;
  char hostname[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(hostname, &len);
  printf("Hello world!  I am process number: %d on host %s\n", rank, hostname);
  MPI_Finalize();
  return 0;
}

When I run this with the ppn=1 specification, I would expect one processor per node spread over twelve nodes. But when I look at my output file, I see it is running multiple processes per node instead, so I do not get output from twelve unique nodes as I would expect.
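One way to see what is actually being handed to mpirun is to dump $PBS_NODEFILE from inside the job, since Torque writes one line per allocated slot. A sketch of the kind of file I am seeing (the path and hostnames below are made up for illustration; a real job would read $PBS_NODEFILE directly):

```shell
# Hypothetical snapshot of a $PBS_NODEFILE: one line per allocated slot,
# so with np=4 hosts a 12-slot request can end up packed onto three hosts.
cat > /tmp/sample_nodefile <<'EOF'
enode01
enode01
enode01
enode01
enode02
enode02
enode02
enode02
enode03
enode03
enode03
enode03
EOF

# Count slots per host; if ppn=1 were honored, every host would appear once.
sort /tmp/sample_nodefile | uniq -c
```

With ppn=1 honored the count next to every hostname should be 1 across twelve distinct hosts, rather than 4 across three.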

My nodes file has the following sorts of entries:

enode01 np=4 mynode
enode02 np=4 mynode
enode03 np=4 mynode
enode04 np=4 mynode
enode05 np=4 mynode
enode06 np=4 mynode
enode07 np=4 mynode
enode08 np=4 mynode
enode09 np=4 mynode
enode10 np=4 mynode
enode11 np=4 mynode
enode12 np=4 mynode

I know I can remove the np=4 from each node specification and get one process per node that way, but I was under the impression that ppn=1 (or whatever value) in the qsub request would accomplish the same thing.

Am I misunderstanding or overlooking something?
Steven DuChene
