[torqueusers] Torque memory allocation

Fan Dong fan.dong at ymail.com
Mon Apr 12 20:21:15 MDT 2010

Hi there,

I am running into a problem, described as follows:
1) we have some memory-intensive Java jobs to run through Torque; each job requires 12GB of memory, and each node in the cluster has 16GB of memory.
2) when a job is running on one of the nodes, Torque does not prevent a new job (also requiring 12GB of memory) from starting on the same node, so the new job fails because there is not enough memory.  (We already have Torque scatter jobs across the nodes, but this still happens when there are more jobs than nodes.)
3) I tried using -l mem=12gb, but it did not work.  Torque seems to have a 4GB limit for this setting.
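For reference, a job script along these lines reproduces the problem (the script name, job name, and java invocation are just illustrative, not our actual job; pmem is the per-process variant of the same resource limit, which I assume behaves similarly):

```shell
#!/bin/sh
# run.pbs -- illustrative submission script, not our real job.
#PBS -N java-bigmem
#PBS -l nodes=1:ppn=1
# Request 12GB of physical memory per job; on our setup this request
# seems to be silently capped around 4GB.
#PBS -l mem=12gb
# Per-process memory limit (presumably subject to the same cap).
#PBS -l pmem=12gb

java -Xmx12g -jar myjob.jar
```

Submitted with "qsub run.pbs", a second copy of this job will still start on a node already running the first one.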

I was wondering if there is any solution for that.  We are not using Moab or Maui.

Any input is highly appreciated.

