[torqueusers] specific nodes

Ricardo Román Brenes roman.ricardo at gmail.com
Wed Nov 30 15:38:44 MST 2011

Thank you so much for your help =) Yet I still have some matters to discuss.

On Wed, Nov 30, 2011 at 4:22 PM, Gustavo Correa <gus at ldeo.columbia.edu> wrote:

> You don't have 8 CPUs of type 'uno'.
> This seems to conflict with your mpirun command with -np=8.
> You need to match the number of processors you request from Torque and
> the number of processes you launch with mpirun.

1. Why does there have to be a match between processors and processes? I could
run 1024 processes on 1 processor (without Torque). Requesting 2 nodes I
could spawn 10000 processes...
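(To illustrate what I mean: nothing in MPI itself stops oversubscription. Outside Torque, something like the following runs fine on a single core, just time-sliced; the process count here is arbitrary:)

```shell
# 64 processes on one processor: it works, they just share the CPU.
mpiexec -n 64 hostname
```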

> Also, you wrote:
> #PPS -q uno
> Is this a typo in your email or in your Torque submission script?
> It should be:
> #PBS -q uno
> In addition, your PBS script doesn't request nodes, something like
> #PBS -l nodes=1:ppn=2
> I suppose it will use the default for the queue uno.
> However, your qmgr configuration doesn't set a default number of nodes to
> use,
> either for the queues or for the server itself.
> You could do:
> qmgr -c 'set queue uno resources_default.nodes = 1'
> and likewise for queue dos.
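(Putting the suggestions above together, a minimal submission script might look like this; the queue name comes from this thread, while nodes=1:ppn=2, the job name, and a.out are just placeholders:)

```shell
#!/bin/bash
#PBS -q uno
#PBS -l nodes=1:ppn=2
#PBS -N testjob

# Torque starts the job in $HOME; move to the submission directory first.
cd $PBS_O_WORKDIR

# Launch as many processes as processors requested above (1 node x 2 ppn).
mpirun -np 2 ./a.out
```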

2. That's in fact a typo. In the script it says #PBS.

> More important, is your mpi [and mpiexec] built with Torque support?
> For instance, OpenMPI can be built with Torque support, so that it
> will use the nodes provided by Torque to run the job.
> However, stock packaged MPIs from yum or apt-get are probably not
> integrated with Torque.
> You would need to build it from source, which is not really hard.
> If you use an mpi that is not integrated with Torque, you need to pass to
> mpirun/mpiexec
> the file created by Torque with the node list.
> The file name is held by the environment variable $PBS_NODEFILE.
> The syntax varies depending on which MPI you are using, check your mpirun
> man page,
> but should be something like:
> mpirun -hostfile $PBS_NODEFILE -np 2  ./a.out

3. My MPICH2 is version 1.2.1p1. I don't recall if I compiled it with Torque
support. Even so, I don't have a variable $PBS_NODEFILE (doing
"echo $PBS_NODEFILE" returns an empty line).

4. I don't know if this is my problem or not, but you talk about mpirun and
mpiexec as if they were the same, yet I have used mpiexec most of the
time and I'm not sure about the similarities (or differences). You asked if
my mpiexec is built with Torque support, but a few lines below you mention mpirun.
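(For what it's worth, in MPICH2 1.2.x mpiexec under the MPD process manager is the supported launcher, and mpirun is kept largely as a synonym. With an MPI that is not Torque-aware, the node list could be passed explicitly, something like the sketch below; -machinefile is the flag MPD's mpiexec takes, but the man page should be checked:)

```shell
# Inside a Torque job script: hand the Torque-generated node list to mpiexec,
# sizing -n from the node file so the process count matches the request.
NP=$(wc -l < "$PBS_NODEFILE")
mpiexec -machinefile "$PBS_NODEFILE" -n "$NP" ./a.out
```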

> [ The flag may be -machinefile instead of -hostfile, or something else,
> depending on your MPI.]
> On Nov 30, 2011, at 4:11 PM, Ricardo Román Brenes wrote:
> > I'll post some more info since I'm pretty desperate right now :P
> >
> Oh, yes.
> You should always do this, if you want help from the list.
> Do you see how much more help you get when you give all the information?
>  :)
> I hope this helps,
> Gus Correa
