[torqueusers] queue resources_min.nodect ignored from routing queue

Garrick Staples garrick at clusterresources.com
Mon Nov 6 09:25:26 MST 2006


On Mon, Nov 06, 2006 at 10:14:01AM -0500, Bill Wichser alleged:
> Environment:
>  pbs_version = 2.0.0p8
>  maui-3.2.6p14
>  linux w/2.6 kernel
> 
> I have 3 queues set up: default, small, and large.
> Default is a routing queue which just sends to small then large.
> 
> small is set up
> set queue small resources_max.nodect = 16
> to allow jobs of 16 nodes or fewer to run there.  It is further limited in
> maui to allow no more than 112 nodes to be active.  This functions fine.
> 
> large is set up
> set queue large resources_max.nodect = 128
> set queue large resources_min.nodect = 32
> to require jobs of 32 nodes or more, accepting that a gap exists for
> requests of 17 to 31 nodes.  This works fine as well.
> 
> What doesn't work is when the small queue is full, i.e. the maui limit has
> been reached, and single-processor jobs move into the large queue until
> that too is full.  I expected that Torque would stop this, since such a job
> has only requested a single node and therefore does not meet the
> resources_min.nodect = 32 restriction, but jobs just keep falling into
> this queue until it fills.
> 
> One note is that these job scripts do NOT specify how many nodes are
> being requested.  They only specify a wallclock time.  But on the Torque
> server itself I do specify the line:
> 
> set server resources_default.nodect = 1
> 
> which should be taking care of this situation.
> 
> 
> Am I simply missing something, or is there perhaps something wrong
> with this version of the scheduler?

nodect is meant to be calculated from nodes, so you wouldn't set a default
nodect.  Set a default nodes instead.
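
A minimal sketch of that change, assuming the server defaults shown above
(run via qmgr as a privileged user; adjust to your site):

  # drop the nodect default and provide a nodes default instead
  qmgr -c "unset server resources_default.nodect"
  qmgr -c "set server resources_default.nodes = 1"

The idea is that pbs_server derives nodect from the nodes request, so a job
that falls back to the default of nodes = 1 should then fail the large
queue's resources_min.nodect = 32 check during routing.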

Also, maui configs don't influence job routing in TORQUE.


