[torqueusers] mem and pmem

Laurence Dawson larry.dawson at vanderbilt.edu
Tue Aug 16 08:03:12 MDT 2005


Thanks,
Our cluster is down for enhancements at the moment, so I can't check 
your suggestion yet - but it looks like this is a Moab problem. The guys 
at supercluster.org are looking at it.
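
For reference, once the cluster is back up, the before/after comparison
Garrick suggests below could be captured with something like this (12345
is a placeholder job ID, and the attributes to watch are only my guess at
where a change would show up):

  # save the full job attributes while checkjob still shows MEM: 400M per task
  qstat -f 12345 > job-before.txt

  # wait until checkjob shows the inflated value (MEM: 25G), then save again
  qstat -f 12345 > job-after.txt

  # a change in Resource_List.mem / Resource_List.pmem here would point at
  # TORQUE; no change at all would point back at Moab
  # (64 tasks x 400M is 25600M, i.e. 25G, so it may just be the per-task
  # value being replaced by the job-wide total - but that is speculation)
  diff job-before.txt job-after.txt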

Garrick Staples wrote:

>On Wed, Aug 10, 2005 at 02:21:16PM -0500, Laurence Dawson alleged:
>  
>
>>When this is first submitted, checkjob shows it looking ok with
>>
>>Req[0]  TaskCount: 64  Partition: ALL
>>Network: [NONE]  Memory >= 400M  Disk >= 0  Swap >= 0
>>Opsys: [NONE]  Arch: [NONE]  Features: opteron,myrinet
>>Dedicated Resources Per Task: PROCS: 1  MEM: 400M
>>NodeCount: 1
>>
>>This should be no problem, but our cluster is busy, so the job is not
>>scheduled yet and stays idle.
>>Then sometime later (maybe 10 minutes, but long enough for the
>>scheduler to start looking at the resources available), this switches
>>to look like the following:
>>
>>Req[0]  TaskCount: 64  Partition: ALL
>>Network: [NONE]  Memory >= 400M  Disk >= 0  Swap >= 0
>>Opsys: [NONE]  Arch: [NONE]  Features: opteron,myrinet
>>Dedicated Resources Per Task: PROCS: 1  MEM: 25G
>>NodeCount: 1
>>    
>>
>
>Compare the output of 'qstat -f' before and after the change.  That should tell
>you if the problem is within TORQUE.
>

