[torqueusers] Torque not killing job exceeding memory requested

Nick Sonneveld Nicholas.Sonneveld at utas.edu.au
Wed Jan 17 22:32:28 MST 2007


Sorry to piggyback on this thread, but I have a similar issue.

e.g. this job:
whiteout:/var/spool/torque/server_priv # qstat -f 1925 | grep mem
     resources_used.mem = 54043696kb   <- this is about 51 GB
     resources_used.vmem = 58629088kb
     Resource_List.mem = 1gb
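Since the scheduler apparently isn't comparing these fields itself, a small watchdog script can do it externally. The sketch below parses the `qstat -f` output quoted above and flags the overrun; the sample text and the unit handling (kb/mb/gb suffixes) are assumptions based on what TORQUE printed here, not on any documented output format.

```python
# Hedged sketch: parse "qstat -f <jobid>" output and flag jobs whose
# resources_used.mem exceeds Resource_List.mem.  SAMPLE is the output
# quoted in this thread; in practice you would capture it from
# subprocess.run(["qstat", "-f", jobid], ...).
SAMPLE = """\
    resources_used.mem = 54043696kb
    resources_used.vmem = 58629088kb
    Resource_List.mem = 1gb
"""

UNITS = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def to_bytes(value):
    """Convert a TORQUE size string like '54043696kb' or '1gb' to bytes."""
    num, unit = value[:-2], value[-2:].lower()
    return int(num) * UNITS[unit]

def mem_usage(qstat_text):
    """Return (used, requested) in bytes, or None if either field is absent."""
    fields = {}
    for line in qstat_text.splitlines():
        if "=" in line:
            key, _, val = line.partition("=")
            fields[key.strip()] = val.strip()
    try:
        used = to_bytes(fields["resources_used.mem"])
        requested = to_bytes(fields["Resource_List.mem"])
    except KeyError:
        return None
    return used, requested

used, requested = mem_usage(SAMPLE)
print(used > requested)                   # True: far past the 1gb request
print(round(used / UNITS["gb"], 1))       # 51.5, matching the note above
```

This only detects the overrun; actually killing the job (e.g. via `qdel`) would still be a separate, site-specific step.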

I am using pbs_sched, though, and I've seen mentions on this list
that it doesn't take things like memory into account.  Should we consider Maui?

- Nick


Troy Baer wrote:
> On Wed, 2007-01-17 at 11:04 -0600, Laurence Dawson wrote:
>> A user has two jobs running on a single (dual-dual processor box)
>> node.
>> It is exceeding the memory he requested, but torque is not killing 
>> it...why? Has anyone seen this on their configuration? We are running 
>> moab-4.5.0p4 and torque-2.1.0p0.
> 
> What OS/architecture?  And what does TORQUE report for memory usage vs.
> requested?  (I.e. "qstat -f jobid | grep mem")
> 
> 	--Troy

-- 
Nick Sonneveld  |  Nicholas.Sonneveld at utas.edu.au
IT Resources, University of Tasmania, Private Bag 69, Hobart Tas 7001
(03) 6226 6377  |  0407 336 309  |  Fax (03) 6226 7171
