[Mauiusers] nodeallocationpolicy

Lawrence Sorrillo sorrillo at jlab.org
Wed Sep 10 08:52:56 MDT 2008


Indeed, how is this done?

The job should run on only 4 of the 8 cpus available per node and be 
spread across 4 nodes. So yes, using 16 cpus.

-L
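For anyone following along: under plain Maui/Torque, nodes=4:ppn=4 only guarantees 16 processor slots, not how they are distributed, which is why the job below ends up packed onto two nodes. The Moab-style request Sander quotes further down pins the per-node task count explicitly. A sketch of what that job script might look like (the tpn syntax is the one mentioned in this thread; exact support depends on your Moab version):

```shell
#!/bin/sh
# Moab-style request: 4 nodes, 4 tasks per node (16 tasks total).
# Under plain Maui/Torque, nodes=4:ppn=4 may be packed onto fewer nodes.
#PBS -l nodes=4:tpn=4

lamboot              # start the LAM/MPI runtime on the allocated nodes
mpiexec hello_mpi    # 16 ranks, 4 per node
lamhalt              # shut the LAM/MPI runtime down
```

Note also that Maui's NODEALLOCATIONPOLICY parameter (values such as MINRESOURCE, CPULOAD, FIRSTAVAILABLE, MAXBALANCE) influences which nodes the scheduler selects, not how many tasks land on each node, which would explain why tuning it did not help with this particular layout problem.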


Greenseid, Joseph M. wrote:
> wait, you want 16 CPU, divided evenly over 4 nodes, is that right?
>  
> --Joe
>
> *From:* mauiusers-bounces at supercluster.org on behalf of A.Th.C. Hulst
> *Sent:* Mon 9/8/2008 7:41 AM
> *To:* mauiusers at supercluster.org
> *Subject:* Re: [Mauiusers] nodeallocationpolicy
>
> Ah, sorry, I've found it. With Moab one can request nodes=4:tpn=4, which
> does what I expect. I'll need Moab if I want to try this.
>
> Thanks
>
> On Monday 08 September 2008 13:22:46 A.Th.C. Hulst wrote:
> > #PBS -l nodes=4:ppn=4
> > lamboot
> > mpiexec hello_mpi
> > lamhalt
> >
> > What happens is that the job is run on two nodes, each node running two
> > chunks of 4 "cpus" (which are actually cores). That is not really
> > desirable. I would like to divide the job over 4 nodes.
> >
> > My initial guess was that I could control node allocation with the
> > nodeallocationpolicy parameter, but I fail to get it working.
> >
> > Does anyone have any experience with a similar issue? Is it possible?
> >
> > Best regards,
> > Sander
> >
> > _______________________________________________
> > mauiusers mailing list
> > mauiusers at supercluster.org
> > http://www.supercluster.org/mailman/listinfo/mauiusers
>
>
>



