[torqueusers] pbs_sched and one or more many-CPU nodes.

Steve Young chemadm at hamilton.edu
Tue Oct 21 07:23:40 MDT 2008


Hi James,
	It shouldn't be a problem to do this. I have a few Altix systems
with 16 and 32 CPUs, as well as a Beowulf cluster with 4-CPU nodes
(like you have). You could just add the new node to the list with the
rest of your nodes, but if you do, it may end up running jobs of 4
CPUs or fewer. If it were me, I'd put the new node at the end of the
server_priv/nodes file so that jobs of 4 CPUs or fewer go to the
other open nodes first. If someone asks for more than 4 CPUs, the
scheduler has no choice but to pick the new node, since it is the
only node with the resources needed. The only real problem is that
when all the other 4-CPU nodes are busy, the new node can still get
scheduled with jobs of 4 CPUs or fewer.
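
For example (just a sketch; the hostnames, np counts, and the "big"
property are placeholders for whatever you actually use), the end of
your server_priv/nodes file might look like:

    node001 np=4
    node002 np=4
    ...
    node144 np=4
    bignode np=32 big

Giving the large node a property ("big" here) is optional, but it
lets users target it explicitly with something like
qsub -l nodes=1:big:ppn=32.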
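
As for qmgr, I don't have a dump that does exactly what you describe,
but one approach you could try (an untested sketch on my end; the
queue name "bigq" and the "big" node property are made up) is a
separate execution queue with ncpus limits, tied to the large node
through its property:

    qmgr -c "create queue bigq queue_type = execution"
    qmgr -c "set queue bigq resources_min.ncpus = 5"
    qmgr -c "set queue bigq resources_max.ncpus = 32"
    qmgr -c "set queue bigq resources_default.neednodes = big"
    qmgr -c "set queue bigq enabled = true"
    qmgr -c "set queue bigq started = true"

The resources_default.neednodes setting makes jobs in that queue ask
for nodes with the "big" property; I can't promise pbs_sched enforces
the ncpus limits for -l nodes= style requests, so test it before you
rely on it. Hope this helps,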

-Steve

On Oct 20, 2008, at 4:24 PM, James J Coyle wrote:

> Torque/pbs_sched users,
>
>  I'm managing a cluster and use pbs_sched.
>
>  I have a reasonably large (144-node) homogeneous cluster of
> 4-processor/8GB nodes.  It is running well.
>
>  I now need to add at least one, and perhaps more,
> 32-processor/128GB nodes.
>
>  One way is to treat this as a separate (maybe just single-node)
> cluster; the other is to incorporate it into the existing queuing
> system.
>
>  Has anyone incorporated nodes with such a large disparity in
> capability into a single PBS structure?
>
>   Ideally one would want only jobs of > 4 CPUs and < 32 CPUs to run
> on this node (or these nodes).
>
>   Has anyone done this with pbs_sched?
> If so, can you dump the qmgr settings with
> qmgr -c 'p s' > state_file
> and share it with me, and maybe the nodes file if that is also needed?
>
> Thanks,
> -- 
> James Coyle, PhD
> SGI Origin, Xeon and Opteron Cluster Manager
> High Performance Computing Group
> 235 Durham Center
> Iowa State Univ.           phone: (515)-294-2099
> Ames, Iowa 50011           web: http://jjc.public.iastate.edu
>


