[torqueusers] Making full use of node resources
sheen at usc.edu
Mon Aug 13 16:53:48 MDT 2007
Thank you to all who replied. I actually managed to solve the problem.
For posterity: the reason was that I was running jobs in the default
queue, "batch," which had resources_default:nodes = 1 set. With that
default in place, every job submitted to the queue is allocated an
entire node. If the attribute is removed, or if the job is run in a
queue that does not set this default, the problem (naturally) vanishes.
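For reference, here is roughly how the queue default can be inspected and
removed with qmgr on the pbs_server host (a sketch, not a transcript from my
cluster; note that qmgr addresses the attribute with a dot,
resources_default.nodes, even though some listings render it with a colon):

```shell
# Show the "batch" queue's attributes, including any resources_default
# settings that apply to jobs which don't request resources themselves.
qmgr -c "print queue batch"

# Remove the per-queue default so jobs no longer get nodes=1 implied.
qmgr -c "unset queue batch resources_default.nodes"

# Alternatively, leave the default in place and request the processors
# explicitly at submission time, e.g. both CPUs on one node:
qsub -l nodes=1:ppn=2 job.sh
```

These commands must run with server-manager privileges; ordinary users can
still inspect the queue with "qstat -Qf batch".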
On 8/13/07, Michael Gutteridge <mgutteri at fhcrc.org> wrote:
> On Mon, 2007-08-13 at 13:22 -0700, James A. Peltier wrote:
> > David Sheen wrote:
> > > solomon home/admin1> pbsnodes -a
> > > node2
> > > state = free
> > > np = 2
> > > ntype = cluster
> > > status = opsys=linux,uname=Linux node2 22.214.171.124-34-default #1 SMP
> > > Mon Nov 27 11:46:27 UTC 2006 x86_64,sessions=4082
> > > 7608,nsessions=2,nusers=1,idletime=311603,totmem=4035760kb,availmem=3934672kb,physmem=1931288kb,ncpus=2,loadave=0.00,gres=solomon:,netload=343554498,state=free,jobs=?
> > > 0,rectime=1187036201
> > >
> > > (All the other nodes have identical entries.)
> > >
> > It would seem that you should be able to submit those 16 jobs without
> > issue. Do you have any class definitions in Maui that might limit the
> > number of jobs per default,class,etc?
> What's the setting of NODEACCESSPOLICY? Shared appears to be default
> for Maui...
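For completeness, that policy lives in maui.cfg. SHARED lets multiple jobs
occupy the processors of one node, while SINGLEJOB gives each job exclusive
node access (which would reproduce the one-job-per-node symptom). The
fragment below is illustrative; the scheduler must be restarted after a
change:

```
# maui.cfg
NODEACCESSPOLICY  SHARED
```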