[torqueusers] Torque job options

Ricardo Román Brenes roman.ricardo at gmail.com
Mon Feb 18 10:16:12 MST 2013


If you want to use suraci, you can't ask for ppn=12 =(

Sorry...

You can ask for all nodes with ppn=8, and that would get you the 3
machines reserved. From there you can filter by hostname or IP which ones
have 12 cores, then use all 12 on those and just 8 on suraci in your code
or bash script.
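
Something like this, as a minimal sketch that rebuilds the machine list
inside the job script. The hostnames come from your pbsnodes output below;
the machinefile path, and whether your MPI stack lets you run more ranks
than torque allocated, are assumptions to check on your setup:

#!/bin/bash
#PBS -l nodes=3:ppn=8
#PBS -l walltime=00:01:00
#PBS -q default

# $PBS_NODEFILE lists each assigned host once per allocated core (8 each here).
# Rebuild a machinefile with 12 slots on the big nodes and 8 on suraci.
MACHINEFILE=$PBS_O_WORKDIR/machinelist
: > $MACHINEFILE
for host in $(sort -u $PBS_NODEFILE); do
    case $host in
        n001|n002) slots=12 ;;
        *)         slots=8  ;;
    esac
    for i in $(seq 1 $slots); do
        echo $host >> $MACHINEFILE
    done
done

# 12 + 12 + 8 = 32 ranks total
mpiexec -machinefile $MACHINEFILE -np 32 hostname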

On Mon, Feb 18, 2013 at 11:05 AM, Kamran Khan <kamran at pssclabs.com> wrote:

> Hi Ricardo,
>
> Yeah, suraci is the pbsserver and has 4 cores not reserved for torque.
>
> So I can specify '#PBS -l nodes=n001+n002:ppn=12', but can I also specify
> suraci?
>
> That's where I am getting a little confused, because I have the 2 compute
> nodes with 12 available cores and the head node with only 8 available cores.
>
> Is it at all possible to tell specific systems to use a specific number of
> cores, or would I just have to lower the number of cores being used on each
> system to 8? Such as:
>
> #PBS -l nodes=3:ppn=8
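>
> I gather qsub's node spec can also take a per-host ppn with the + syntax,
> so maybe something like the line below, though I am not sure the scheduler
> will honor a mixed request like this here:
>
> #PBS -l nodes=n001:ppn=12+n002:ppn=12+suraci:ppn=8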
>
> Thanks in advance for your support.
>
> --
> Kamran Khan
> PSSC Labs
> Support Technician
>
> From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On Behalf Of Ricardo Román Brenes
> Sent: Friday, February 15, 2013 3:39 PM
> To: Torque Users Mailing List
> Subject: Re: [torqueusers] Torque job options
>
> hello! :-)
>
> you can specify specific hosts with "-l nodes=n001+n002".
>
> that option will request both big nodes :-)
>
> and on a side note, if suraci runs the pbsserver and has 8 cores, leave 1 or
> 2 of them reserved, not given to torque :-P
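>
> One way to do that, assuming you manage node attributes through qmgr
> rather than by editing the nodes file (np=6 here is just an example value):
>
>      qmgr -c "set node suraci np = 6"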
>
> On Feb 15, 2013 5:07 PM, "Kamran Khan" <kamran at pssclabs.com> wrote:
>
> Hi Everyone,
>
> This is my first message to the community, so HIIIII!!
>
> Below is my torquetest.sh script, which I use to test the functionality of
> torque across my cluster.
>
> #!/bin/bash
>
>      #PBS -l walltime=00:1:00
>      #PBS -l nice=19
>      #PBS -q default
>      #PBS -l nodes=2:ppn=12
>
> mpiexec -machinefile /opt/machinelist -np 24 hostname
>
> echo "end"
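>
> As an aside, I gather torque writes the allocated slots into $PBS_NODEFILE
> (each host repeated ppn times), so presumably the launch line could also
> read the node list from there instead of a static file, if my mpiexec
> accepts a machinefile that way:
>
> mpiexec -machinefile $PBS_NODEFILE -np 24 hostname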
>
> And here is a copy of my pbsnodes output:
>
> suraci
>      state = free
>      np = 8
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=2013931431,gres=,loadave=1.00,ncpus=12,physmem=24562968kb,availmem=48500940kb,totmem=49728784kb,idletime=275,nusers=2,nsessions=2,sessions=18767 23996,uname=Linux icarus.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> n001
>      state = free
>      np = 12
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=865146013,gres=,loadave=0.00,ncpus=12,physmem=24562968kb,availmem=49216676kb,totmem=49728784kb,idletime=81117,nusers=0,nsessions=0,uname=Linux n001.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> n002
>      state = free
>      np = 12
>      ntype = cluster
>      status = rectime=1360968542,varattr=,jobs=,state=free,netload=647756235,gres=,loadave=1.00,ncpus=12,physmem=24562968kb,availmem=49226348kb,totmem=49728784kb,idletime=808775,nusers=0,nsessions=0,uname=Linux n002.beowulf6.actainc.com 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64,opsys=linux
>      mom_service_port = 15002
>      mom_manager_port = 15003
>
> As you can see, 'suraci' has 8 free processors, while n001 and n002 have
> 12 free processors each. How do I establish these settings in my
> torquetest.sh, given that I have 2 nodes with 12 cores and 1 node with 8?
>
> Does this make sense?
>
> Thanks in advance.
>
> --
> Kamran Khan
> PSSC Labs
> Support Technician
>
> _______________________________________________
> torqueusers mailing list
> torqueusers at supercluster.org
> http://www.supercluster.org/mailman/listinfo/torqueusers

