[torqueusers] Torque job options

Ricardo Román Brenes roman.ricardo at gmail.com
Fri Feb 15 16:39:10 MST 2013


Hello! :-)

You can request specific hosts with "-l nodes=n001+n002"; that option will
request both of the big 12-core nodes. :-)
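
For example, a minimal sketch of a submit script pinned to those two hosts
(the ppn counts come from your pbsnodes output; the walltime is just a
placeholder):

    #!/bin/bash
    #PBS -l walltime=00:01:00
    #PBS -q default
    #PBS -l nodes=n001:ppn=12+n002:ppn=12

    # Torque writes the hosts it actually allocated to $PBS_NODEFILE,
    # so using it keeps mpiexec in sync with the reservation
    mpiexec -machinefile $PBS_NODEFILE -np 24 hostname

If you also want suraci's cores in the mix, you could append +suraci:ppn=8
(or fewer, per the note below) to the nodes request.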

And on a side note: if suraci is running pbs_server and only has 8 cores,
leave one or two of them reserved for the server rather than giving them
all to Torque. :-P
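
A minimal sketch of how you might do that, assuming you want Torque to
schedule only 6 of suraci's 8 slots (run on the pbs_server host; the
nodes-file path below is the usual default and may differ on your install):

    # lower the slot count Torque will schedule on suraci
    qmgr -c "set node suraci np = 6"

    # or edit the server's nodes file and restart pbs_server:
    #   /var/spool/torque/server_priv/nodes:
    #       suraci np=6
    #       n001 np=12
    #       n002 np=12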
On Feb 15, 2013 5:07 PM, "Kamran Khan" <kamran at pssclabs.com> wrote:

> Hi Everyone,
>
> This is my first message to the community, so HIIIII!!
>
> Below is my torquetest.sh script which I use to test the functionality of
> torque across my cluster.
>
> #!/bin/bash
>
> #PBS -l walltime=00:01:00
> #PBS -l nice=19
> #PBS -q default
> #PBS -l nodes=2:ppn=12
>
> mpiexec -machinefile /opt/machinelist -np 24 hostname
>
> echo "end"
>
> And here is a copy of my pbsnodes output:
>
> suraci
>      state = free
>      np = 8
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=2013931431,gres=,loadave=1.00,ncpus=12,physmem=24562968kb,availmem=48500940kb,totmem=49728784kb,idletime=275,nusers=2,nsessions=2,sessions=18767 23996,uname=Linux icarus.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> n001
>      state = free
>      np = 12
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=865146013,gres=,loadave=0.00,ncpus=12,physmem=24562968kb,availmem=49216676kb,totmem=49728784kb,idletime=81117,nusers=0,nsessions=0,uname=Linux n001.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> n002
>      state = free
>      np = 12
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=647756235,gres=,loadave=1.00,ncpus=12,physmem=24562968kb,availmem=49226348kb,totmem=49728784kb,idletime=808775,nusers=0,nsessions=0,uname=Linux n002.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> As you can see, ‘suraci’ has 8 free processors, while n001 and n002 each
> have 12 free processors. How do I express these resources in my
> torquetest.sh, given that I have two nodes with 12 cores and one node
> with 8?
>
> Does this make sense?
>
> Thanks in advance.
>
> --
> Kamran Khan
> PSSC Labs
> Support Technician