[torqueusers] only 1 core jobs in the queue

Garrick Staples garrick at usc.edu
Mon Mar 2 14:24:52 MST 2009


On Mon, Mar 02, 2009 at 01:54:07PM +0100, web master alleged:
> Hi
> 
> I am very inexperienced with Torque, and found myself trying to set up
> a queueing system on a parallel cluster with 12 8-core nodes.
> The majority of users only need to run single-processor jobs, so I created
> a queue for them and a separate queue for parallel jobs, i.e. jobs using more than 1 core.
> 
> Fast queue specs:
> # Create and define queue fast
> #
> create queue fast
> set queue fast queue_type = Execution
> set queue fast max_running = 24
> set queue fast resources_max.ncpus = 1
> set queue fast resources_max.nodect = 1
> set queue fast resources_max.nodes = nodes=1:ppn=1
> set queue fast resources_max.size = 1
> set queue fast resources_max.walltime = 12:00:00
> set queue fast resources_default.ncpus = 1
> set queue fast resources_default.nodes = nodes=1:ppn=1
> set queue fast resources_default.walltime = 12:00:00
> set queue fast enabled = True
> set queue fast started = True
> 
> However, I cannot manage to stop jobs using up to 4 cores from being
> enqueued in the fast queue.
> 
> XXXXX at nodo0>qsub -q fast -l nodes=1:ppn=1 launch_single.sh
> 11002.nodo-ha
> XXXXX at nodo0> showq | grep MYUSERNAME
> 11002              MYUSERNAME    Running     1    12:00:00  Mon Mar  2 14:23:19
> XXXXX at nodo0> qsub -q fast -l nodes=1:ppn=2 launch_single.sh
> 11003.nodo-ha
> XXXXX at nodo0> showq | grep MYUSERNAME
> 11003               MYUSERNAME   Running     2    12:00:00  Mon Mar  2 14:23:30
> XXXXX at nodo0:~/Vesicles/pka_outside/TestQueues> qsub -q fast -l nodes=1:ppn=9 launch_single.sh
> qsub: Job exceeds queue resource limits MSG=cannot locate feasible nodes
> 
> launch_single.sh
> 
> #!/bin/bash
> #PBS -N sleep
> #PBS -e sleep.e
> #PBS -o sleep.o
> 
> sleep 10
> 
> Assuming I am not doing anything wrong, how can I obtain the desired behaviour?
> If not with Torque, perhaps with Maui?

Torque's queue limits only count nodes, e.g. resources_max.nodect; the ppn
part of the request is not checked against them.  nodes=1:ppn=1,
nodes=1:ppn=2, and nodes=1:ppn=9 are all 1 node, so they all fit within the
fast queue's limits.  The ppn=9 request was only refused because none of
your 8-core nodes can supply 9 processors.
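
If you want such submissions rejected outright, one approach that works with
Torque alone is a submit filter: point SUBMITFILTER in torque.cfg at a
script, and qsub will pipe the job script to it on stdin (in at least some
versions it also passes its own command-line arguments to the filter).  The
filter must echo the script back on stdout, and a non-zero exit makes qsub
refuse the job.  Below is a minimal sketch of the idea; the filter path, its
name, and the crude option parsing are only illustrations, not tested code
from this thread.

#!/bin/sh
# Sketch of a qsub submit filter that rejects multi-core jobs aimed at the
# "fast" queue.  Enable it on the submission host with a line like
#     SUBMITFILTER /usr/local/sbin/fastqueue_filter
# in torque.cfg (path and name are made up for this example).

script=$(cat)                      # job script arrives on stdin
pbs_lines=$(printf '%s\n' "$script" | grep '^#PBS')
all="$* $pbs_lines"                # qsub arguments (if passed) plus #PBS lines

# Crude checks: which queue is requested, and what is the largest ppn asked
# for?  A real filter would honour qsub's precedence rules and parse -l more
# carefully; this only illustrates the mechanism.
queue=$(printf '%s\n' "$all" | sed -n 's/.*-q *\([^ ]*\).*/\1/p' | tail -1)
ppn=$(printf '%s\n' "$all" | grep -o 'ppn=[0-9]*' | sed 's/ppn=//' | sort -n | tail -1)

if [ "$queue" = "fast" ] && [ -n "$ppn" ] && [ "$ppn" -gt 1 ]; then
    echo "queue 'fast' only accepts single-core jobs (ppn=1)" 1>&2
    exit 1
fi

printf '%s\n' "$script"            # pass the script through unchanged

With that in place, 'qsub -q fast -l nodes=1:ppn=2 launch_single.sh' is
bounced at submission time instead of starting.  Maui's class configuration
is another place to hang this kind of policy, but the filter keeps it
entirely on the Torque side.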

-- 
Garrick Staples, GNU/Linux HPCC SysAdmin
University of Southern California

See the Prop 8 Dishonor Roll at http://www.californiansagainsthate.com/
