[torqueusers] Any nodes available submission

Walid walid.shaari at gmail.com
Thu Jun 7 05:34:04 MDT 2007


On 6/7/07, Lennart Karlsson <Lennart.Karlsson at nsc.liu.se> wrote:
>
>
> Nearly all of the applications we run here cannot fit on any
> choice of walltime and number of nodes, so I have difficulty
> understanding your scenario.
>
> E.g., are your nodes free most of the time, i.e. without job
> reservations? Can your users accept that a job is queued for
> some time before it starts? Do your users want to run jobs
> with infinite walltime settings? If you can get the users to
> set the walltime limit and number of nodes, and to accept
> waiting for jobs to start, you can utilize your nodes much
> better, i.e. you may save money.


Dear Lennart,

The nodes are busy most of the time; these clusters run at 99.9%
utilization. The jobs are seismic jobs of around 2-3 days each, and
there are lots of them, queued for a couple of days at least before
they are due to run. What we have now is a number of clusters of 128
nodes each. What we want to do is create one big pool of nodes and
use the nodeset feature to ensure network locality per job/cluster.
Since there will always be some nodes down for one reason or another,
or nodes under testing, we would like the scheduler to work out by
itself the maximum number of nodes it can allocate to a single job.
That is, the user script would say full, half, or quarter (i.e. qsub
with a percentage of the resources rather than -l nodes=<number>),
and depending on how many nodes are online, full would be 128 nodes
or a little less, half would be 64 nodes or a little less, and so on.
We would like this to be automated; LoadLeveler does this, and I was
wondering whether it is possible with Moab/Maui and Torque.

I like the idea of a launch script, but I would like to see whether
there are any facilities already available in Moab/Maui/Torque that
could do this for us.
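In case it helps make the idea concrete, here is a rough sketch of
the kind of wrapper script I have in mind. It is only a sketch under
my own assumptions: the full/half/quarter keywords, the script name,
and the way the pbsnodes output is parsed are made up for
illustration, not an existing Torque or Moab facility.

    #!/usr/bin/env python
    # psubmit (hypothetical): translate a fractional node request
    # into a concrete qsub -l nodes=N, based on how many nodes are
    # currently online according to pbsnodes.
    import subprocess
    import sys

    FRACTIONS = {"full": 1.0, "half": 0.5, "quarter": 0.25}

    def online_nodes():
        """Count nodes whose state is neither down nor offline."""
        out = subprocess.run(["pbsnodes", "-a"],
                             capture_output=True, text=True,
                             check=True).stdout
        count = 0
        for line in out.splitlines():
            line = line.strip()
            # Each node block in pbsnodes -a output has one
            # "state = ..." line.
            if line.startswith("state ="):
                state = line.split("=", 1)[1]
                if "down" not in state and "offline" not in state:
                    count += 1
        return count

    def main():
        if len(sys.argv) < 3 or sys.argv[1] not in FRACTIONS:
            sys.exit("usage: psubmit {full|half|quarter} "
                     "jobscript [qsub options]")
        frac = FRACTIONS[sys.argv[1]]
        # Never ask for fewer than one node.
        n = max(1, int(online_nodes() * frac))
        # Hand the computed node count to qsub; all remaining
        # arguments pass through unchanged.
        sys.exit(subprocess.call(["qsub", "-l", "nodes=%d" % n]
                                 + sys.argv[2:]))

    if __name__ == "__main__":
        main()

Users would then run e.g. "psubmit half myjob.sh" instead of calling
qsub directly, and the wrapper would translate "half" into the
largest node count currently available in that fraction of the pool.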

regards

Walid.