[torqueusers] Problem with nodes allocation

Jen aquarijen at gmail.com
Sun Dec 21 19:40:29 MST 2008

THANK YOU SOOOOOO MUCH.  I could hug you all.
I know this is a little old, but I don't read every single email coming
across the torque list every day.  I've been working on upgrading this
system (a large federated cluster, lots of queues, can't afford to be down
long, etc.) for going on 20 straight hours now, and I'm really, really tired;
coming across this email was just so nice.  It spared what little hair I had
left on my head, too.
Now I just have to figure out what changed with the acl_hosts and I'll be in
business (there is peanut butter in the chocolate now; maybe it will be
something simple like a deprecated option?  Pretty please?  For Hanukkah?)

Thanks again!
-Jennifer Tippens
Sleepy Admin, ORNL Institutional Clusters

On Fri, Jul 4, 2008 at 4:46 AM, Roger Williams <R.Williams at gns.cri.nz> wrote:

> Thanks to some excellent (off-list) diagnosis from Glen Beane, the problem
> of "only one node" allocation has been identified as being provoked by
> these statements in my server and queue setup:
>  qmgr -c 'set server resources_available.nodect = 999999'
>  qmgr -c 'set queue batch resources_available.nodect = 999999'
> This configuration (which comes from many torque.setup sample scripts to
> be found on the net) is seemingly wrong and/or a problem with newer
> versions of Torque. If you omit (or unset) the resource, then node
> allocation behaves as it should.
> According to Glen, "this setting was actually removed from the torque
> setup script distributed after torque 2.2.0".
> Thanks again,
> Roger
> --
> Roger Williams, GNS Science, New Zealand : www.gns.cri.nz : xyzzy
> _______________________________________________
> torqueusers mailing list
> torqueusers at supercluster.org
> http://www.supercluster.org/mailman/listinfo/torqueusers
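For anyone else hitting the "only one node" symptom, the fix described in the quoted thread amounts to removing the nodect overrides rather than setting them. A sketch of the cleanup, using qmgr's standard `unset` verb (assuming a queue named "batch" as in Roger's setup; verify the exact attribute names against your Torque version's qmgr documentation):

```shell
# Remove the problematic overrides that older torque.setup samples added.
# With these unset, node allocation behaves as expected on newer Torque.
qmgr -c 'unset server resources_available.nodect'
qmgr -c 'unset queue batch resources_available.nodect'

# Confirm the attributes are gone from the active configuration
qmgr -c 'print server' | grep nodect || echo "nodect overrides removed"
```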

Use it up; wear it out; make it do or do without! - L Reid
