[torqueusers] forcing use of one processor per node?

Brock Palen brockp at umich.edu
Wed Apr 16 09:50:36 MDT 2008

Why not just do:

-l nodes=4:ppn=1,pmem=3gb

pmem = per-process memory, so each process must find 3 GB of memory free.
Thus Maui can't place all 4 processes on one 4 GB node.
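A minimal batch script using this resource request might look like the sketch below. The job name, walltime, and program name are placeholders, and the exact mpiexec invocation depends on your MPI stack; only the -l line comes from the suggestion above.

```shell
#!/bin/bash
# Hypothetical example: request 4 nodes, 1 process per node, and
# 3 GB per process, so the scheduler cannot pack two 3 GB processes
# onto a single 4 GB node.
#PBS -N spread_job
#PBS -l nodes=4:ppn=1,pmem=3gb
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"

# Sanity check: count the distinct hosts Torque assigned; with the
# request above this should print 4.
sort -u "$PBS_NODEFILE" | wc -l

# Launch one MPI rank per allocated node (program name is a placeholder).
mpiexec -n 4 ./my_mpi_program
```

Note that pmem is enforced per process, whereas mem applies to the job as a whole, so requesting mem alone would not force the spread across nodes.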

Brock Palen
Center for Advanced Computing
brockp at umich.edu

On Apr 16, 2008, at 11:46 AM, Steve Young wrote:

> Hi John,
> 	Do you request the amount of memory in your batch scripts? I've
> found that by doing so, you shouldn't have to worry about this
> scenario, since if the memory isn't available for the second CPU
> the job won't utilize the other half of the node. But if a job
> comes along that can use the rest of the memory and the second
> CPU, you can still utilize it.
> 	The other thing I wonder is: is this an MPI job? I've found MPI
> can be tricky, and even though everything appears to work, you really
> have to pay attention to which nodes the job actually runs on. Torque
> may allocate the job to certain nodes, but if MPI doesn't get this
> information from Torque, it will pick its own nodes to run on. I
> was able to solve that with OSC's version of mpiexec, which talks
> directly to Torque for node allocation.
> I just tried a quick test here and got similar results. I have
> 4-CPU nodes with 2 GB of RAM each. I specified a 4-CPU job like you
> did and it stayed on the same node. But when I requested 4 GB of RAM,
> the job went to 4 different nodes.
> Anyhow, I'm not sure that answers your question, but hopefully it
> gives you some more ideas to look at.
> -Steve
> On Apr 16, 2008, at 11:10 AM, John Young wrote:
>> I have a small cluster of dual processor machines.  Normally,
>> I would like to keep each processor busy, but sometimes (usually
>> due to memory constraints) I would like to force a job
>> that I want to run, say, on 4 processors to run on 4 separate
>> nodes each using only one processor, rather than on two nodes
>> using both processors.
>> I have tried using things like
>> #PBS -l nodes=4:ppn=1
>> but it does not seem to matter.  I always end up using both
>> processors on two nodes.  'qstat -a' says that the job is
>> using 4 nodes, but it doesn't really...  :-(
>> Any ideas?
>> 						JY
>> ------------------------------------------------------------
>> John E. Young				NASA LaRC B1148/R226
>> Analytical Services and Materials, Inc.       (757) 864-8659
>> 'All ideas and opinions expressed in this communication are
>> those of the author alone and do not necessarily reflect the
>> ideas and opinions of anyone else.'
>> _______________________________________________
>> torqueusers mailing list
>> torqueusers at supercluster.org
>> http://www.supercluster.org/mailman/listinfo/torqueusers

