[torqueusers] Resource limits per node

Torsten Schenkel hi93-ml at isl.mach.uni-karlsruhe.de
Tue Jun 28 11:25:00 MDT 2005


On Tuesday, 28.06.2005, at 13:17 -0400, Troy Baer wrote:
> On Tue, 2005-06-28 at 19:09 +0200, Torsten Schenkel wrote:
> > The problem didn't show up in my first tests because it only shows up
> > with parallel jobs.
> > 
> > A single job will be scheduled according to the memory requirements as
> > intended. So I guess it's a problem with the calculation of requested
> > memory for parallel jobs. As I understand it, it's memory required per
> > node, not in total.
> 
> Assuming a) this is on Linux and b) TORQUE hasn't changed anything
> substantial about memory accounting from OpenPBS:

Yes, it's Linux.

> mem = real memory (RSS) used across the entire job

That's what Maui seems to use for scheduling the job and assigning the
nodes.
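
For illustration, the kind of request I'm seeing this with looks like
(the job script name is just a placeholder):

    qsub -l nodes=2,mem=700mb job.sh

Maui apparently reads the 700mb as the total across both nodes, so two
512mb nodes (roughly 350mb each) are considered a fit.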

> vmem = maximum virtual memory used on any single node in the job

> Note that this is what PBS/TORQUE thinks these limits mean; it's been my
> experience that older versions of Maui/Moab sometimes did not agree.

To me it seems to be the other way round :-)
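
One way to compare what TORQUE itself records for a running job (the job
id is a placeholder):

    qstat -f 1234 | grep -e Resource_List -e resources_used

which shows the requested Resource_List.mem/vmem next to the accumulated
resources_used values the MOMs report.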

The scheduler treats mem as real memory per job and thus assigns a
nodes=2,mem=700mb job to the 512mb nodes (presumably 350mb per node,
which fits). The ulimit enforcement on the Opteron machines, on the
other hand, seems to apply mem as a per-node limit. So there seems to
be a discrepancy here.
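
If that's what's happening, a possible workaround (untested on my side;
pmem is the standard PBS per-process physical-memory resource, and this
assumes Maui and the MOMs both honour it) would be to request memory per
process instead of per job:

    # ask for 350mb per process rather than 700mb for the whole job
    qsub -l nodes=2,pmem=350mb job.sh

That would at least make the request unambiguous on the node side,
though whether the scheduler places it correctly is exactly what's in
question.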

Regards,

Torsten
-- 
Dr.-Ing. Torsten Schenkel        Laboratory / Software Lab Head
Institut fuer Stroemungslehre    TEL.: ++49 721 608-3031
Universitaet Karlsruhe (TH)      FAX.: ++49 721 69 67 27
GPG: 1024D/D721BAD3


