[torqueusers] pbs on single machine

James A. Peltier jpeltier at cs.sfu.ca
Wed Jan 9 15:35:41 MST 2008


James A. Peltier wrote:
> James J Coyle wrote:
>> Jan,
>>
>> Try
>> /usr/local/bin/mpirun -np 8 ./test > ./test.log
>>
>> instead of
>> /usr/local/bin/mpirun ./test > ./test.log
>>
>>
>> This is the same command you'd need if you ran interactively.
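
A more portable variant is to take the process count from the $PBS_NODEFILE
that Torque writes for the job, rather than hard-coding 8; a minimal sketch,
assuming an mpirun that accepts -machinefile (both MPICH and Open-MPI do):

    NPROCS=$(wc -l < "$PBS_NODEFILE")
    /usr/local/bin/mpirun -np "$NPROCS" -machinefile "$PBS_NODEFILE" ./test > ./test.log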
>>
>> - James Coyle, PhD
>>  SGI Origin, Alpha, Xeon and Opteron Cluster Manager
>>  High Performance Computing Group      235 Durham Center            
>>  Iowa State Univ.           phone: (515)-294-2099
>>  Ames, Iowa 50011           web: http://jjc.public.iastate.edu
>> -----------------------------------------------
>>
>> Hi all,
>>
>> I have one 8-core machine (2 quad-core CPUs) running Linux (named
>> "headnode"). I would like to use PBS to run MPI jobs on it. I set up PBS
>> according to the mini-HOWTO. I told PBS that "headnode" is the server
>> and also the one node in the system. In the nodes file I entered
>> headnode np=8
>>
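For reference, creating the default queue mentioned below could look like
this with qmgr (a sketch; the queue name "batch" is an assumption, substitute
whatever your default queue is called):

    qmgr -c "create queue batch queue_type=execution"
    qmgr -c "set queue batch enabled=true"
    qmgr -c "set queue batch started=true"
    qmgr -c "set server default_queue=batch"
    qmgr -c "set server scheduling=true"
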
>> Everything seems to work.
>> Running pbsnodes -a gives me:
>> headnode
>>      state = free
>>      np = 8
>>      ntype = cluster
>>      status = opsys=linux,uname=Linux
>> and a bunch more system info. I created a default queue and I can submit
>> jobs. The problem is, when I submit a jobscript like:
>>
>> #PBS -l nodes=1:ppn=8
>> #PBS -l walltime=96:00:00
>> #PBS -j oe
>> cd $PBS_O_WORKDIR
>> echo $PBS_O_WORKDIR
>> /usr/local/bin/mpirun ./test > ./test.log
>>
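One quick sanity check inside such a job script: with nodes=1:ppn=8, Torque
should write "headnode" into $PBS_NODEFILE once per requested slot, so for
example:

    cat $PBS_NODEFILE        # expect "headnode" printed 8 times
    wc -l < $PBS_NODEFILE    # expect 8
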
>> Everything seems to work fine, but when I look at the processes running
>> (e.g. with top), only one copy of "test" runs on the machine. "test" is
>> an executable built using Fortran and MPI.
>>
>> I am wondering if there is anything obvious I missed in the
>> configuration. As far as I understand, I can set up a PBS system
>> consisting of a single machine with 8 cores. Any insight would be
>> greatly appreciated.
>>
>> Thanks a lot, Jan
>>
> 
> If you are using Open-MPI, you can compile it with the --with-tm 
> option, which eliminates the need to specify -np #: the process count 
> is taken automatically from the job's Torque allocation (the same 
> information as in $PBS_NODEFILE) and the ranks are started accordingly.
> 

Sorry, I clicked send by accident.  --with-tm compiles Open-MPI with 
Torque resource manager (TM) support, which is really nice. :)

-- 
James A. Peltier
Technical Director, RHCE
SCIRF | GrUVi @ Simon Fraser University - Burnaby Campus
Phone   : 778-782-3610
Fax     : 778-782-3045
Mobile  : 778-840-6434
E-Mail  : jpeltier at cs.sfu.ca
Website : http://gruvi.cs.sfu.ca | http://scirf.cs.sfu.ca
MSN     : subatomic_spam at hotmail.com

