[torqueusers] vmem and pvmem
siegert at sfu.ca
Fri Feb 24 15:00:09 MST 2012
On Fri, Feb 24, 2012 at 11:19:37AM +0100, "Mgr. Šimon Tóth" wrote:
> > Core_req       vmem  pvmem  ulimit-v  RPT
> > =========================================
> > nodes=1:ppn=2  1gb   256mb  256mb     512mb
> > procs=2        1gb   256mb  256mb     1gb
> > nodes=1:ppn=2  1gb   4gb    1gb       4gb
> > procs=2        1gb   4gb    1gb       4gb
> > nodes=1:ppn=2  1gb   -      1gb       512mb
> > procs=2        1gb   -      1gb       1gb
> > So the ulimit value that governs whether a task can allocate
> > memory is set to the lower of the vmem and pvmem values. That
> > makes some sense - at least more sense than taking the larger
> > value. What doesn't make sense is allowing pvmem to be higher
> > than vmem in the first place - in that case TORQUE should probably
> > reject the job or 'fix' one of the settings. Leaving it as is
> > might not be so bad, except for Moab's behaviour (keep reading).
> No. The logic is as follows:
> * if pvmem (or pmem) is set
> then set the corresponding ulimit to pvmem (pmem) value
> * if pvmem (or pmem) isn't set
> then set the corresponding ulimit to vmem (mem) value
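In code, the rule Šimon describes amounts to something like this (a
minimal sketch of the stated precedence, not the actual TORQUE source;
the function name is made up):

```python
def select_ulimit(vmem=None, pvmem=None):
    """Pick the value for the corresponding ulimit: the per-process
    request (pvmem/pmem) wins whenever it is set; only in its absence
    does the job-wide request (vmem/mem) apply."""
    return pvmem if pvmem is not None else vmem

# Per this rule, pvmem=256mb overrides vmem=1gb:
print(select_ulimit(vmem="1gb", pvmem="256mb"))  # 256mb
# ...and with no pvmem the vmem value is used:
print(select_ulimit(vmem="1gb"))                 # 1gb
```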
> Note that using pvmem is mostly pointless. On Linux this represents
> address space, not virtual memory.
> You can use vmem as virtual memory, but even that is extremely confusing.
I do not understand this comment. Both pvmem and vmem requests will
result in RLIMIT_AS getting set.
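RLIMIT_AS limits address space, which is the point Šimon is making: a
process that maps more address space than the cap gets an allocation
failure, regardless of how much of it is actually resident. A quick
Linux demonstration (run in a throwaway child interpreter so the limit
does not affect the parent; the 1 GiB/2 GiB figures are arbitrary):

```python
import subprocess
import sys
import textwrap

# Child script: cap its own address space at 1 GiB via RLIMIT_AS,
# then try to allocate a 2 GiB buffer, which must exceed the cap.
child = textwrap.dedent("""
    import resource
    lim = 1024 * 1024 * 1024  # 1 GiB
    resource.setrlimit(resource.RLIMIT_AS, (lim, lim))
    try:
        buf = bytearray(2 * 1024 * 1024 * 1024)  # 2 GiB > cap
        print("allocated")
    except MemoryError:
        print("MemoryError")
""")

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())  # MemoryError
```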
When I submit an MPI job using, e.g., procs=N, why is requesting
pvmem=X mostly pointless? Shouldn't it be totally equivalent to
requesting vmem=X*N?