[torqueusers] PBS nodes problem
Curtis W. Hillegas
curt at Princeton.EDU
Mon Oct 18 14:57:50 MDT 2004
The -lnodes=#:ppn=# syntax works very well for dedicated clusters where
jobs can be specifically allocated to maximize performance, etc. But in
shared clusters this causes confusion, especially for newer users. Many
users in a shared cluster would rather have their job start earlier with
a mix of one, two, or however many processors per node rather than wait
until there is an ideal block of free nodes available. Of course the
scheduler can be configured to fill processors in more the way these users
would like, but the concept is still sometimes confusing to the end user.
Why was the choice made not to let the user say, "Give me n processors,
allocated however you can fit them onto the nodes that currently have
unallocated processors"?
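For concreteness, here is how the two request styles would look in a job
script. The nodes/ppn form is standard; the "pack them anywhere" form is
sketched with -l procs=, a resource that later TORQUE/Moab versions added
for roughly this purpose and that may not exist on a given installation
(my_app and the counts are placeholders):

```shell
#!/bin/sh
# Exact layout: 4 nodes with 2 processors per node = 8 CPUs total.
#PBS -l nodes=4:ppn=2
#PBS -l walltime=01:00:00

# Alternative (hypothetical here; "-l procs=" exists only in later
# TORQUE/Moab releases): 8 processors wherever slots are free, spread
# across however many nodes currently have unallocated processors.
##PBS -l procs=8

cd "$PBS_O_WORKDIR"
mpiexec ./my_app    # my_app stands in for the user's actual program
```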
Danny Sternkopf wrote:
> ncpus should be used for big SMP machines only.
> It specifies the number of cpus for one single node.
> If you have one- or two-processor machines in a
> cluster, then you have to specify the number of nodes:
> --> -l nodes=<number of nodes>:ppn=<number of cpus per node>
> For example, -l nodes=15:ppn=2 will request 30 CPUs
> on dual-processor machines.
> Best regards,
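[Putting the two resource types side by side as qsub command lines may
help; the machine sizes are illustrative:]

```shell
# Cluster of dual-processor nodes: CPUs are requested via nodes/ppn.
qsub -l nodes=15:ppn=2 job.sh   # 15 nodes x 2 ppn = 30 CPUs

# Single large SMP machine: ncpus requests CPUs within that one host.
qsub -l ncpus=30 job.sh
```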
> On Sun, Oct 17, 2004 at 01:39:19AM +0800, torqueusers at supercluster.org wrote:
>>Is it compulsory to put "#PBS -l nodes" in the job script? Can I just
>>use "#PBS -l ncpus" in my script to submit a job? My script failed to
>>work when I submitted it without declaring the number of nodes. Please
>>help. Thanks.
>>torqueusers mailing list
>>torqueusers at supercluster.org