[torqueusers] strange behaviour of ppn

Govind Songara govind.songara at rhul.ac.uk
Fri Nov 12 09:25:45 MST 2010


Hi,


I am not an expert on Torque configuration, so something may be wrong with
my setup. I am seeing strange behaviour of the ppn variable.
My nodes config is something like
node01 np=4
node02 np=4

snippet of maui config
JOBNODEMATCHPOLICY     EXACTNODE
ENABLEMULTINODEJOBS   TRUE
NODEACCESSPOLICY         SHARED


snippet of queue config
        resources_available.nodect = 65
        resources_assigned.nodect = 5
        resources_default.nodes = 1

sample script
------------------------------------
#PBS -q long
#PBS -l nodes=2:ppn=1

echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS processors
hostname
------------------------------------

Below are my results as a table:

nodes | ppn | no. of processes run (hostname) | no. of processors allocated
------+-----+---------------------------------+----------------------------
  3   |  1  |                1                |              3
  3   |  2  |                1                |              2
  3   |  3  |                1                |              3
  3   |  4  |                1                |              4
In case 1, it gives 3 processors on the same node, which is incorrect; it
should give 1 processor on each of 3 different nodes.
In case 2, it gives only 2 processors on the same node, when it should give
2 processors on each of 3 different nodes (6 processors in total), and the
last two cases behave similarly.
In all the cases the hostname command runs only once, whereas it should run
at least once per allocated processor.
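For comparison, here is what I would expect $PBS_NODEFILE to contain for
nodes=3:ppn=2 if EXACTNODE were honoured. This is only a sketch with made-up
node names, written to a local file so it can be checked the same way the
sample script counts processors:

```shell
# Simulated $PBS_NODEFILE for -l nodes=3:ppn=2, assuming EXACTNODE
# spreads the job over 3 distinct nodes (node names are hypothetical):
cat > nodefile.example <<'EOF'
node01
node01
node02
node02
node03
node03
EOF

# One line per allocated processor: 6 lines spread over 3 distinct nodes.
NPROCS=$(wc -l < nodefile.example)
NNODES=$(sort -u nodefile.example | wc -l)
echo "processors=$NPROCS nodes=$NNODES"
rm -f nodefile.example
```

Instead, my jobs get a nodefile whose lines all name the same node.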


Because of this strange behaviour I cannot run MPI jobs correctly; kindly
advise on this problem.
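For reference, this is roughly how I launch the MPI jobs. The mpirun options
are the common Open MPI/MPICH style and ./my_mpi_program is a placeholder;
the exact flags depend on the MPI implementation:

```shell
#PBS -q long
#PBS -l nodes=3:ppn=2

# Launch one MPI rank per line of the nodefile
NPROCS=`wc -l < $PBS_NODEFILE`
mpirun -np $NPROCS -machinefile $PBS_NODEFILE ./my_mpi_program
```

With the allocation collapsing onto one node, all ranks end up on the same
host instead of being spread across the requested nodes.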

TIA

Regards
Govind
