[torqueusers] resources_used.mem problems

Sreedhar Manchu sm4082 at nyu.edu
Thu Oct 18 09:59:16 MDT 2012


Hi Brock and Troy,

First, I thank both of you for your emails. On our old cluster, OpenMPI wasn't coupled with the Torque TM API, whereas it is on our new cluster. Just like Brock suggested, I grepped for TM and found the difference.
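For anyone else who wants to check this, one way to see whether an OpenMPI build has TM support is to look for the tm components in ompi_info output (the exact version strings below are just illustrative):

    # List any TM-aware allocation/launch components in this build
    ompi_info | grep tm
    # A TM-coupled build prints lines similar to:
    #   MCA ras: tm (MCA v2.0, API v2.0, Component v1.4.5)
    #   MCA plm: tm (MCA v2.0, API v2.0, Component v1.4.5)
    # If nothing comes back, the build cannot launch through the TM API.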

Like Troy suggested, OpenMPI is accounting accurately on our new cluster. But for the reasons I mentioned before, we don't want this behavior. So we moved the four TM component files out of lib/openmpi/ (lib/openmpi/*tm*), and now it reports only the memory being used on the rank 0 node.
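Concretely, for a 1.4.x build the four files matching *tm* should be the plm and ras TM components; a sketch of what we did, assuming a hypothetical install prefix of /opt/openmpi:

    # Prefix is an assumption; adjust to your install
    cd /opt/openmpi/lib/openmpi
    mkdir -p tm-disabled
    # Park the TM launch (plm) and allocation (ras) components so
    # OpenMPI falls back to rsh/ssh-style launching
    mv mca_plm_tm.so mca_plm_tm.la mca_ras_tm.so mca_ras_tm.la tm-disabled/

Moving them into a side directory instead of deleting them makes it easy to re-enable TM later by moving them back.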

Even though it would be simple to change the memory request from "#PBS -l mem=46GB" to "#PBS -l mem=(number of nodes requested * 46)GB", we would like to keep it the way we do it on our old clusters. That said, I can see that having the real memory used by the entire job (summed across all the nodes) is more useful than having it just for the rank 0 node. The main advantage is that we don't have to pass a host file, and there are probably other benefits to being coupled with the TM API.
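For illustration, the two request styles would look like this for a hypothetical 4-node job at 46GB per node (the node and ppn counts are made up):

    # Old-cluster style: mem effectively describes the rank 0 node
    #PBS -l nodes=4:ppn=12
    #PBS -l mem=46GB

    # TM-coupled style: mem must cover the whole job, 4 * 46 = 184
    #PBS -l nodes=4:ppn=12
    #PBS -l mem=184GB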

But what I'm not sure about is whether Moab/Torque kills the job if it tries to use more memory than it is allocated on one of the other nodes (not the rank 0 node). I know that it does if the job tries to use more memory than allocated on the node with rank 0.
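In the meantime, one way to watch what the server is actually accounting for a running job (which is what any enforcement decision would be based on) is to poll the job record; 12345 below is a placeholder job id:

    # Show the memory Torque has accounted for this job so far
    qstat -f 12345 | grep -E 'resources_used\.(mem|vmem)'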

Thanks,
Sreedhar.

On Oct 18, 2012, at 10:09 AM, Troy Baer <tbaer at utk.edu> wrote:

> On Thu, 2012-10-18 at 09:58 -0400, Sreedhar Manchu wrote:
>> Has anyone seen this behavior on your clusters? Given that it is
>> working fine with MVAPICH2 I'm thinking it has to do with OpenMPI
>> 1.4.5 (as it works fine with 1.4.3). We are testing 1.4.3 on our new
>> clusters and plan to test 1.4.5 on our old clusters. But I thought
>> it'd be useful to know whether anyone has any thoughts on it. Please
>> let me know.
> 
> It sounds to me that OpenMPI is doing the right thing here, in that it's
> launching processes through the TORQUE TM API so that its resource usage
> is being accounted accurately.  OTOH, I'm guessing that your MVAPICH2
> install is using either rsh or ssh to start remote processes, which does
> *NOT* handle resource usage accounting (or signal delivery) correctly.
> 
> I would recommend getting your MVAPICH2 install to use the TM API to
> launch processes, either using the mpiexec.hydra script that likely
> comes with MVAPICH2 or using OSC mpiexec [1].
> 
> [1]  https://www.osc.edu/~djohnson/mpiexec/index.php
> 
> 	--Troy
> -- 
> Troy Baer, Senior HPC System Administrator
> National Institute for Computational Sciences, University of Tennessee
> http://www.nics.tennessee.edu/
> Phone:  865-241-4233
> 
> 
