[torqueusers] pbs_mom consuming 19 Gbytes of memory on idle nodes after a few weeks.

Eva Hocks hocks at sdsc.edu
Tue Oct 22 15:18:58 MDT 2013



Ken,

When will 4.2.6 be available?

Thanks
Eva


On Tue, 22 Oct 2013, Ken Nielson wrote:

> We have fixed a large memory leak in pbs_mom in upcoming releases of 4.2.6
> and 4.5.0.
>
>
>
>
> On Mon, Oct 21, 2013 at 6:00 PM, Coyle, James J [ITACD] <jjc at iastate.edu> wrote:
>
> > I'm running Torque version 4.2.2 under Red Hat Enterprise Linux 6.3,
> > and pbs_mom starts out after a reboot using a small amount of virtual and
> > resident memory (VIRT and RES in the top -a listings below).
> >
> > After running for a while, it grows to about 19 Gbytes for each.
> >
> > Is this a known problem?
> >
> > Is there a fix?
> >
> > Thanks,
> >
> >    - Jim C.
> >
> > Just after reboot:
> >
> >
> >   PID USER      PR  NI  VIRT  RES   SHR  S %CPU %MEM    TIME+  COMMAND
> >  2991 root      20   0 96876  48m  9112  S  0.7  0.0  0:01.07  pbs_mom
> >
> >
> > From a server that has been up a few weeks:
> >
> >   PID USER      PR  NI  VIRT   RES   SHR  S %CPU %MEM      TIME+  COMMAND
> >  7330 root      20   0 19.1g   19g  9112  S  0.0 15.2  123:15.95  pbs_mom
> >
> > The 19.1 and 19 Gbytes seem consistent across the nodes that exhibit this
> > issue.
> >
> >
> > James Coyle, PhD
> > High Performance Computing Group
> > 217 Durham Center
> > Iowa State Univ.           phone: (515)-294-2099
> > Ames, Iowa 50011           web: http://jjc.public.iastate.edu/
> >
> >
> > _______________________________________________
> > torqueusers mailing list
> > torqueusers at supercluster.org
> > http://www.supercluster.org/mailman/listinfo/torqueusers
> >
> >
>
>
>
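For sites stuck on 4.2.2 until the fixed releases ship, a minimal watchdog sketch for spotting a bloated pbs_mom is below. It is not from this thread: the 16 GB threshold, the reliance on Linux /proc, and the single-process assumption for pbs_mom are all my own assumptions, and any restart/drain action is left to the site.

#!/usr/bin/env python
# Minimal sketch (assumptions: Linux /proc layout, a process named "pbs_mom",
# and an arbitrary 16 GB resident-size cutoff). Illustration only.
import os

THRESHOLD_KB = 16 * 1024 * 1024  # 16 GB resident set size

def rss_kb(pid):
    """Return VmRSS in kB for a pid, or None if it cannot be read."""
    try:
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except (IOError, ValueError):
        return None
    return None

def find_pbs_mom():
    """Yield pids whose comm is pbs_mom."""
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                if f.read().strip() == "pbs_mom":
                    yield pid
        except IOError:
            pass

for pid in find_pbs_mom():
    rss = rss_kb(pid)
    if rss is not None and rss > THRESHOLD_KB:
        print("pbs_mom pid %s resident set %.1f GB exceeds threshold"
              % (pid, rss / (1024.0 * 1024.0)))

One would presumably run something like this from cron or a node health check and then drain and restart the affected mom until 4.2.6 or 4.5.0 is deployed.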
