[torqueusers] queue resources_min.nodect ignored from routing queue

Bill Wichser bill at Princeton.EDU
Mon Nov 6 08:14:01 MST 2006

  pbs_version = 2.0.0p8
  linux w/2.6 kernel

I have three queues set up: default, small, and large.
default is a routing queue that simply routes to small, then large.
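Roughly, the routing queue was created along these lines (sketched from memory, so the exact attribute values are approximate rather than copied from the live config):

```shell
# Sketch of the routing-queue setup (values approximate).
# Jobs submitted to "default" are tried against "small" first,
# then "large", in route_destinations order.
qmgr -c "create queue default queue_type = route"
qmgr -c "set queue default route_destinations = small"
qmgr -c "set queue default route_destinations += large"
qmgr -c "set queue default enabled = true"
qmgr -c "set queue default started = true"
```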

small is set up with

set queue small resources_max.nodect = 16

to allow jobs of 16 nodes or fewer to run there.  Maui is further
configured to allow no more than 112 nodes to be active in it.  This
functions fine.

large is set up with

set queue large resources_max.nodect = 128
set queue large resources_min.nodect = 32

to require jobs of 32 nodes or more, realizing that this leaves a gap
for 17- to 31-node requests.  This works fine as well.

What doesn't work is this: when the small queue is full, i.e. the Maui
limit has been reached, single-processor jobs move into the large queue
until that too is full.  I expected Torque to stop this, since such a
job has requested only a single node and therefore does not meet the
resources_min.nodect = 32 limit, but jobs just keep falling into the
large queue until it fills.

One note: these job scripts do NOT specify how many nodes are being 
requested.  They only specify a wallclock time.  But on the Torque 
server itself I do set:

set server resources_default.nodect = 1

which should take care of this situation.
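One variation I have wondered about, purely as a guess about when routing evaluates defaults (I have not verified this behavior), is setting the default on the routing queue itself rather than only at the server level:

```shell
# Guess: if the server-level default is applied only after a job
# lands in an execution queue, then setting resources_default.nodect
# on the routing queue itself might make nodect = 1 visible at the
# moment the resources_min/resources_max tests are evaluated.
qmgr -c "set queue default resources_default.nodect = 1"
```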

Am I simply missing something else, or is there perhaps something wrong 
with this version of the scheduler?

