[Mauiusers] nodeavailabilitypolicy

Renato Borges renato.callado.borges at gmail.com
Thu Dec 16 10:43:50 MST 2010


Hi Abhi!

On Thu, Dec 16, 2010 at 2:32 PM, Abhishek Gupta <abhig at princeton.edu> wrote:

>  Hi Renato,
> It's not increasing memory, but if I say I need mem=6gb or pmem=6gb, it
> still goes to a node with total memory less than 6gb. So I thought that
> by setting NODEAVAILABILITYPOLICY I would be able to define availability
> on the basis of memory.
> Like we define np= in the nodes file, do we have to define memory
> resources too?
> Thanks,
> Abhi.
>

I have been re-reading the documentation, and I think what you are doing is
correct and should work.
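
For what it is worth, as far as I know the scheduler side is just that one
maui.cfg line, and on the TORQUE side pbs_mom reports the node's physical
memory by itself, so there is nothing extra to declare in the server's
nodes file (that file only takes np= and node properties). Something like
this, plus the per-job request at submission time, is all I would expect to
be needed -- "job.sh" below is just a placeholder:

# maui.cfg
NODEAVAILABILITYPOLICY COMBINED:MEM

# per-job memory request at submission time
qsub -l pmem=6gb job.sh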

I have not set NODEAVAILABILITYPOLICY at my site, and I have a node
(nodeXXX) with 16GB of RAM. I tried running:

qsub -l pmem=4GB,host=nodeXXX sleep60job.sh
qsub -l pmem=4GB,host=nodeXXX sleep60job.sh
qsub -l pmem=4GB,host=nodeXXX sleep60job.sh

And the three jobs ran on the same node, immediately (it was idle).
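
(If you want to compare what your node is reporting on your side, something
along these lines should show it, assuming the standard TORQUE and Maui
client tools are in your path -- nodeXXX stands for the real hostname:

pbsnodes nodeXXX | grep -E 'physmem|availmem'
checknode nodeXXX

The first line shows what pbs_mom reports to the server, the second what
Maui thinks is configured and available on that node.)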

Then I tried:

qsub -l pmem=15GB,host=nodeXXX sleep60job.sh
qsub -l pmem=15GB,host=nodeXXX sleep60job.sh
qsub -l pmem=15GB,host=nodeXXX sleep60job.sh

And the first job ran while the other two were queued; then the second one
ran, leaving the last in the queue, and finally the last one ran too.

So, from my point of view, this is working. If you attempt to do something
similar, where does it fail?
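
If a job does land on a node with less memory than it asked for, the output
of checkjob for that job and of "diagnose -n" would help to see what Maui
thinks is going on ("<jobid>" below is a placeholder):

checkjob <jobid>
diagnose -n

checkjob shows the resources Maui believes the job requested, and
diagnose -n lists the configured and available resources per node.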

Cheers,
Renato.


> Renato Borges wrote:
>
> Hi Abhi!
>
> On Wed, Dec 15, 2010 at 7:21 PM, Abhishek Gupta <abhig at princeton.edu> wrote:
>
>> Hi,
>>
>> I am trying to figure out a way to make sure that memory usage does not
>> exceed the available memory on a node. I was thinking that this
>> parameter ( NODEAVAILABILITYPOLICY COMBINED:MEM ) should check the
>> availability of a node on the basis of the memory available, but it
>> does not.
>> Is there anything else I need to add to make it work?
>> NODEAVAILABILITYPOLICY COMBINED:MEM
>>
>> Thanks,
>> Abhi.
>>
>
> I've never used NODEAVAILABILITYPOLICY, but I have a similar problem,
> which is that the jobs we run at my site start out with a small memory
> footprint and end up with large amounts of data in memory (in
> virtualization lingo, they "balloon"). Maybe this is also your case, and
> that is why setting this variable doesn't work?
>
> To avoid swapping, I have set a MAXJOBPERUSER variable for each compute
> node, because all of our jobs that have an increasing memory footprint come
> from a single user (actually, a grid account).
>
> (...)
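
P.S.: the per-node limit I mentioned above goes into maui.cfg as a NODECFG
line, roughly like the following, if I remember the syntax correctly (the
node name and the number are just placeholders):

NODECFG[nodeXXX] MAXJOBPERUSER=2

It is a blunt workaround, but it keeps that one account from piling several
of its ballooning jobs onto the same node.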

-- 
Renato Callado Borges
Lab Specialist - DFN/IF/USP
Email: rborges at dfn.ifusp.br
Phone: +55 11 3091 7105