[torqueusers] Simple Q. about controlling CPU utilization per user

Coyle, James J [ITACD] jjc at iastate.edu
Tue Apr 3 13:29:06 MDT 2012


Two suggestions:

1)       You could try adding:

renice +18 $$

to the script that starts pbs_mom on the compute nodes.
Because child processes inherit the nice value, all Torque jobs would then
run at nice 18 (one above the minimum priority of 19).
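As a quick, Torque-independent sanity check of that inheritance behavior (the renice syntax here is the classic portable form; starting from the usual nice 0, an unprivileged user may always raise niceness):

```shell
#!/bin/sh
# Raise this shell's niceness to 18, then confirm that both the shell
# and a freshly spawned child process report nice 18.
renice 18 -p $$ >/dev/null
ps -o nice= -p $$          # the shell itself
sh -c 'ps -o nice= -p $$'  # a child inherits the nice value
```

The same mechanism is what makes the init-script trick work: pbs_mom inherits the shell's niceness, and every job it launches inherits it in turn.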


2)        Another possibility is to create the file
/var/spool/torque/mom_priv/prologue containing:

#!/bin/csh -f

renice 18 -u $2

and make it executable:

chmod u+x /var/spool/torque/mom_priv/prologue

Do this on all your compute nodes.

  This causes all processes on the node owned by the user who submitted the
job (in the prologue, $2 is the job owner's user name) to run at nice 18,
along with any child processes they spawn.

  You could even test for specific users, maybe exempting the person who bought the machine
if they are just letting others use it.
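A sketch of such an exempting prologue, written in /bin/sh rather than csh (either works, since the prologue is simply an executable script); "owner" is a placeholder login, an assumption you would replace with the exempt user's name:

```shell
#!/bin/sh
# Hypothetical Torque prologue: $2 is the job owner's user name.
# OWNER is a placeholder for the login you want to exempt.
OWNER="owner"

if [ "$2" != "$OWNER" ]; then
    # Lower the priority of all of this user's processes on the node.
    renice 12 -u "$2"
fi

exit 0   # a nonzero exit would abort the job
```

Any user not matching OWNER gets all their processes reniced to 12; the exempt user's jobs run at normal priority.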


  You might use #2 (perhaps with 12 rather than 18) if you are in a research group letting
grad students use your desktop Linux machine while you use it interactively.  I actually did this,
limiting batch jobs to ½ the machine's memory, renicing them all to 12, and exempting myself.

  I didn't see much slowdown in my interactive work, and the students used 15% of all the
machine cycles over the 5 years I had the machine set up this way.  The only cost was the
extra memory I needed on my machine, and one extra disk for scratch. That way they did not all
have to buy large-memory machines, which would have been idle most of the time.

James Coyle, PhD
High Performance Computing Group
 Iowa State Univ.
web: http://jjc.public.iastate.edu/

From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On Behalf Of Ian Miller
Sent: Monday, March 26, 2012 4:36 PM
To: Torque Users Mailing List
Subject: [torqueusers] Simple Q. about controlling CPU utilization per user

Hi All,
Is there a simple switch or config edit to curb the CPU utilization per job submitted in Torque?  I'm running 3.0.3.


