Fwd: [torqueusers] Submitting jobs to multicore/processor nodes

carlos vasco carles.vasco at gmail.com
Thu Dec 13 07:31:24 MST 2007


Sorry, I forgot to forward this solution to the list...

---------- Forwarded message ----------
From: carlos vasco <carles.vasco at gmail.com>
Date: Dec 13, 2007 3:28 PM
Subject: Re: [torqueusers] Submitting jobs to multicore/processor nodes
To: Glen Beane <glen.beane at gmail.com>


Great, it works now.

I have specified

ENABLEMULTIREQJOBS      TRUE

on the maui.cfg configuration file.

Thanks a lot,
Carlos



On Dec 13, 2007 3:18 PM, Glen Beane <glen.beane at gmail.com> wrote:

> That is a scheduler configuration issue, not a torque issue.
>
> your scheduler has put a hold on the job because it is configured to not
> allow multi-request PBS jobs.  I believe in Maui you need to enable this -
> It should work fine "out of the box" with Moab or the simple FIFO torque
> scheduler.
>
>
> On Dec 13, 2007 9:16 AM, carlos vasco <carles.vasco at gmail.com> wrote:
>
> >
> > I tried that, but it doesn't work either. checkjob reports:
> >
> > IWD: [NONE]  Executable:  [NONE]
> > Bypass: 0  StartCount: 0
> > PartitionMask: [ALL]
> > Holds:    Batch  (hold reason:  PolicyViolation)
> > Messages:  multi-req PBS jobs not allowed
> > PE:  6.00  StartPriority:  1
> > cannot select job 6245 for partition DEFAULT (job hold active)
> >
> > Regards,
> > Carlos
> >
> >
> >
> > On Dec 13, 2007 3:08 PM, Glen Beane <glen.beane at gmail.com> wrote:
> >
> > >
> > >
> > > On Dec 13, 2007 5:15 AM, carlos vasco <carles.vasco at gmail.com> wrote:
> > >
> > > > Dear all,
> > > >
> > > > We have a cluster with SMP/dual core nodes, each one has two
> > > > processors with two cores each. In the nodes file we have 4 virtual
> > > > processors defined.
> > > >
> > > > We usually submit 2-, 4- and 6-process jobs, and we find it more
> > > > efficient to pack each job onto a node because of our slow network.
> > > > The 2- and 4-process cases are easy, and we put in our script:
> > > >
> > > > #PBS -l nodes=1:ppn=2
> > > > #PBS -l nodes=1:ppn=4
> > > >
> > > > but the case of 6, where we would like to run 4 processes on one
> > > > node and 2 on another, doesn't run. Following the man page, we put:
> > > > #PBS -l nodes=1:ppn=4+ppn=2
> > > >
> > > > Any idea of how to impose that configuration?
> > > >
> > >
> > > #PBS -l nodes=1:ppn=4+1:ppn=2
> > >
> >
> >
>
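
For anyone hitting the same problem, the two pieces of the fix from this thread
(Glen's corrected node specification and the Maui setting) can be combined. The
sketch below is an assumption-laden example, not from the original thread: the
script name, job name, and the mpirun invocation are hypothetical, and you may
need to restart Maui after editing maui.cfg for the setting to take effect.

```shell
#!/bin/bash
# Multi-request node spec from the thread: 4 cores on one node plus
# 2 cores on a second node, for a 6-process job.
#PBS -l nodes=1:ppn=4+1:ppn=2
#PBS -N sixproc-job

# Requires ENABLEMULTIREQJOBS TRUE in maui.cfg, otherwise Maui holds the
# job with "multi-req PBS jobs not allowed" (as shown by checkjob above).

cd $PBS_O_WORKDIR
# $PBS_NODEFILE lists one line per allocated virtual processor (6 total here).
# The launcher and program name below are placeholders for illustration.
mpirun -np 6 -machinefile $PBS_NODEFILE ./my_program
```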
