[torqueusers] Efficiently using nodes with multi-core CPU

Jerry Smith jdsmit at sandia.gov
Thu Apr 24 14:29:30 MDT 2008


You can also set

NODEACCESSPOLICY  SINGLEJOB

to allow only one job per node, so you don't have to worry about exceeding the 
CPU load and overloading the node.
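
For reference, the relevant part of maui.cfg would look like this (a minimal
sketch; "job.sh" is just a placeholder script name):

NODEACCESSPOLICY  SINGLEJOB

# With SINGLEJOB set, even a one-core request such as
#   qsub -l nodes=1:ppn=1 job.sh
# gets its node to itself, so the second one-core job in Zerony's example
# ends up on a different node.

Daniel's CPULOAD/UTILIZED settings below take the load average into account
instead; a combined sketch of both approaches is at the end of this message.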

Jerry

Daniel Bourque wrote:
> You can accomplish that in Maui with the following ( in maui.cfg ):
>
> NODEALLOCATIONPOLICY  CPULOAD
> NODEAVAILABILITYPOLICY UTILIZED
>
> Jobs will be placed onto nodes according to load average and unused CPUs
> ( CPUs as defined in Torque, not actual CPUs ), until you run out of
> nodes with an acceptable load average or with available CPUs.
>
> I'm also new to torque/maui, so if I'm not doing this properly, someone
> please feel free to enlighten me.
>
> Thanks
>
> Daniel Bourque
> Sr. Systems Engineer
> WeatherData Service Inc
> An Accuweather Company
>
> Office (316) 266-8013
> Office (316) 265-9127 ext. 3013
> Mobile (316) 640-1024
>
>
>
> Zerony Zhao wrote:
>
>   
>> Dear mailing list users,
>> I am new to Torque PBS. I have a naive question about setting a policy
>> for using nodes with multi-core CPUs.
>> Currently every user can request one or more nodes, and one or more
>> cores on each node. But if one user requests 1 node with 1 core, and
>> then another user requests 1 node with 1 core, the two jobs end up
>> running on the same node. For efficiency, I would like the second job
>> to run on another idle node. How should I modify the configuration
>> file? I appreciate your help.
>>
>> Zerony
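
To sum up both suggestions for Zerony's scenario, either of the following
maui.cfg fragments should help keep two one-core jobs off the same node
(a rough sketch only; adapt and test on your own site, and restart Maui
after editing maui.cfg):

# Option 1: allocate by load average and unused CPUs, as Daniel describes
NODEALLOCATIONPOLICY   CPULOAD
NODEAVAILABILITYPOLICY UTILIZED

# Option 2: dedicate each node to a single job
NODEACCESSPOLICY  SINGLEJOB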