[torqueusers] the way node are choosen

Jerry Smith jdsmit at sandia.gov
Wed Oct 4 09:36:36 MDT 2006


The parameters for Maui are,

NODEALLOCATIONPOLICY  FIRSTAVAILABLE  # top-down through $PBS_HOME/nodes
or
NODEALLOCATIONPOLICY  LASTAVAILABLE   # bottom-up through $PBS_HOME/nodes

And to limit nodes to one job each:
NODEACCESSPOLICY        SINGLEJOB

So three 1-CPU jobs will land on 3 separate nodes (leaving 3 idle
processors on each), while one 4-CPU job will land on a single node.
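Putting those together, a minimal maui.cfg fragment might look like the
sketch below (NODEALLOCATIONPOLICY and NODEACCESSPOLICY are the real Maui
parameters named above; all other settings are assumed to stay at their
defaults):

    # maui.cfg -- illustrative fragment only
    NODEALLOCATIONPOLICY  FIRSTAVAILABLE  # walk nodes top-down through $PBS_HOME/nodes
    NODEACCESSPOLICY      SINGLEJOB       # at most one job per node at a time

With this in place, each new 1-CPU job should go to the next unused node in
the list rather than stacking on the first node's free processors.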

As for pbs_sched, I am in the same boat as Troy: it has been a long time
since I have used it.

Jerry

> From: Troy Baer <troy at osc.edu>
> Organization: Ohio Supercomputer Center
> Date: Wed, 04 Oct 2006 11:17:01 -0400
> To: bill <cluster.bill at alinto.com>
> Cc: <torqueusers at supercluster.org>
> Subject: Re: [torqueusers] the way node are choosen
> 
> On Wed, 2006-10-04 at 17:04 +0200, bill wrote:
>> I have 8 boxes like that in $PBS/server_priv/nodes
>> slave0 np=4
>>   (...)
>> slave7 np=4
>> 
>> If I submit 3 jobs requesting 1 CPU each, all three will go to slave0.
>> 
>> Is there a way to randomize CPU allocation? Or to walk node by node
>> instead of CPU by CPU?
>> 
>> This cluster is not always heavily loaded, so I could gain some time if
>> each node processes only 1 job.
>> 
>> I think this could lead to some problems (8 jobs requesting 1 CPU each
>> while a lot of jobs requesting 4 ppn wait), but for me it is acceptable.
> 
> This is largely dependent on what scheduler you use.  All other things
> being equal, pbs_sched tends to load up nodes from the beginning of the
> node list and work down, while Maui and Moab tend to start from the end
> of the list and work up.  I think Maui and Moab both have settings that
> will do what you want; I'm not sure about pbs_sched, because I haven't
> used it in years.
> 
> --Troy
> -- 
> Troy Baer                       troy at osc.edu
> Science & Technology Support    http://www.osc.edu/hpc/
> Ohio Supercomputer Center       614-292-9701
> 
> 



