[torqueusers] Question about virtual processor allocation

Jonas_Berlin at harte-hanks.com
Wed Feb 1 06:40:29 MST 2006


I am currently using the latest release of Torque with the default fifo 
scheduler.

I have several machines in my cluster, some with 2 CPUs and some with 4 
CPUs. To me a CPU is a CPU, and I don't care whether it sits in a 2-CPU 
machine or a 4-CPU machine.

My nodes file looks like this:

machine1 np=2
machine2 np=2
machine3 np=4

When I submit a job I would like a VP (virtual processor) allocated to 
each partition of my process, and I don't care where the VPs come from, 
except that I would like them packed together as much as possible.

In other words, I would like to simply tell Torque how many CPUs to run 
the job on and have it pick the appropriate nodes.

If I specify 

-l nodes=2

on the command line, the first job gets allocated the VPs (machine1/0, 
machine2/0) and the second job gets allocated (machine1/1, machine2/1).
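
For reference, each submission looks roughly like this (the script name 
is just a placeholder):

qsub -l nodes=2 myjob.sh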

The problem is that each of the two jobs is spread across two machines, 
causing unnecessary network communication as the jobs run.

To force the two VPs onto the same machine I have to specify

-l nodes=2:ppn=2

for each of the job submissions. In this case the two jobs get allocated 
as (machine1/0, machine1/1) and (machine2/0, machine2/1), which is what I 
want.
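
That is, each job gets submitted roughly like this (the script names are 
just placeholders):

qsub -l nodes=2:ppn=2 job_a.sh
qsub -l nodes=2:ppn=2 job_b.sh

or, equivalently, with the request in the job script itself:

#PBS -l nodes=2:ppn=2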

This becomes more and more unworkable for larger jobs, since I basically 
have to figure out the allocation myself rather than having Torque do it 
for me.

What I basically want is an easy way for users to say they want to run an 
x-way parallel job and have the VPs allocated starting on one machine, 
filling that machine up completely before moving on to the next one, and 
so on. (node_pack seems to have no effect at all; see below for how I set 
it.)
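
To be specific about node_pack: I set it on the server via qmgr, along 
the lines of

qmgr -c "set server node_pack = True"

(assuming that is the right way to enable it), and it made no difference 
to how the VPs were allocated.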

Any ideas?? Do I have to get a real scheduler like Maui to be able to do 
this?
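
If Maui is indeed the answer, would something like the following line in 
maui.cfg give me the packing behaviour I want? This is just a guess on my 
part:

NODEALLOCATIONPOLICY MINRESOURCE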

Thanks,

Jonas