[torqueusers] mem and pmem

Garrick Staples garrick at usc.edu
Mon Aug 15 13:12:45 MDT 2005


On Wed, Aug 10, 2005 at 02:21:16PM -0500, Laurence Dawson alleged:
> When this is first submitted, checkjob shows it looking ok with
> 
> Req[0]  TaskCount: 64  Partition: ALL
> Network: [NONE]  Memory >= 400M  Disk >= 0  Swap >= 0
> Opsys: [NONE]  Arch: [NONE]  Features: opteron,myrinet
> Dedicated Resources Per Task: PROCS: 1  MEM: 400M
> NodeCount: 1
> 
> This should be no problem, but our cluster is busy, so it is not 
> scheduled yet and stays in idle.
> Then sometime later (maybe 10 minutes - but long enough for the 
> scheduler to start looking at available resources), this switches to look 
> like the below:
> 
> Req[0]  TaskCount: 64  Partition: ALL
> Network: [NONE]  Memory >= 400M  Disk >= 0  Swap >= 0
> Opsys: [NONE]  Arch: [NONE]  Features: opteron,myrinet
> Dedicated Resources Per Task: PROCS: 1  MEM: 25G
> NodeCount: 1

Compare the output of 'qstat -f' before and after the change.  That should tell
you if the problem is within TORQUE.
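
For what it's worth, a minimal sketch of one way to capture and diff that
output, assuming Python with only the standard library; the job id
('1234.headnode') and the wait interval are placeholders, not values from
this thread:

    #!/usr/bin/env python
    # Hypothetical helper: snapshot 'qstat -f <jobid>' now and again later,
    # then print a unified diff so any change in the Resource_List entries
    # (mem, pmem, and friends) stands out.
    import difflib
    import subprocess
    import time

    JOBID = "1234.headnode"   # replace with the real job id
    WAIT_SECONDS = 600        # long enough for the scheduler to touch the job

    def qstat_full(jobid):
        out = subprocess.run(["qstat", "-f", jobid],
                             capture_output=True, text=True, check=True)
        return out.stdout.splitlines(keepends=True)

    before = qstat_full(JOBID)
    time.sleep(WAIT_SECONDS)
    after = qstat_full(JOBID)

    for line in difflib.unified_diff(before, after,
                                     fromfile="qstat -f (before)",
                                     tofile="qstat -f (after)"):
        print(line, end="")

If the Resource_List values reported by qstat itself change, the problem is
inside TORQUE rather than the scheduler.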

-- 
Garrick Staples, Linux/HPCC Administrator
University of Southern California