[torqueusers] setting an upper value for available memory on a node

Moye,Roger V RVMoye at mdanderson.org
Wed Sep 18 09:37:26 MDT 2013

I believe you can also encourage users to use the pmem option in their job submissions to request the amount of memory they need.  A job will then not be assigned to a node with insufficient memory.  Of course, this relies on the users specifying pmem, and on the value they request being accurate.
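For what it's worth, a pmem request is just a resource-list entry in the job script or on the qsub command line. A minimal sketch (the job name, core count, and 2gb figure are illustrative, not a recommendation):

```
# Hypothetical PBS job script -- pmem sets the maximum physical
# memory per process, so the scheduler will only place the job on
# nodes that can satisfy ppn * pmem.
#PBS -N myjob
#PBS -l nodes=1:ppn=4
#PBS -l pmem=2gb
./my_program
```

The same request can be given interactively as "qsub -l pmem=2gb jobscript".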


Roger V. Moye
Systems Analyst III
XSEDE Campus Champion
University of Texas - MD Anderson Cancer Center
Division of Quantitative Sciences
Pickens Academic Tower - FCT4.6109
Houston, Texas
(713) 792-2134

From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On Behalf Of Andrus, Brian Contractor
Sent: Wednesday, September 18, 2013 10:21 AM
To: Mahmood Naderan; Torque Users Mailing List
Cc: mauiusers-request at supercluster.org
Subject: Re: [torqueusers] setting an upper value for available memory on a node

If you are using Maui (which I assume you are since you sent your request to that list as well), you should look at NODEALLOCATIONPOLICY.
If you set that to PRIORITY and then use AMEM to set the priority of nodes, jobs go to the node with the most available memory first.

Here is an example from the manual:

Example 1: Favor the fastest nodes with the most available memory which are running the fewest jobs
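Roughly, that example amounts to the following two lines in maui.cfg (the weights are the manual's illustrative values; tune them for your site):

```
NODEALLOCATIONPOLICY  PRIORITY
NODECFG[DEFAULT]      PRIORITYF='SPEED + .01 * AMEM - 10 * JOBCOUNT'
```

Here AMEM is available memory in MB, so the .01 factor keeps it comparable to SPEED, while the -10 * JOBCOUNT term steers jobs away from busy nodes.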


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On Behalf Of Mahmood Naderan
Sent: Saturday, September 14, 2013 10:12 PM
To: torque cluster
Cc: mauiusers-request at supercluster.org
Subject: [torqueusers] setting an upper value for available memory on a node

Dear all,
Is there any way to set a maximum value for used memory on a node? Assume a node has 32 cores and 64GB of total shared memory. Currently there are 20 running jobs (12 cores are idle), but 60GB of memory is in use. I want to set a policy that defers an incoming job because there is not enough free memory.

Any idea to accomplish that?
