[torqueusers] confused by mpiexec/mpirun with PBS

Salvatore Di Nardo salvatore.dinardo at itb.cnr.it
Fri Feb 11 02:36:10 MST 2005


Hmm... can no one enlighten me about this?

On Wed, 2005-02-09 at 12:28, Salvatore Di Nardo wrote:

> Hi all,
> I have a small cluster with a master and 8 nodes (the master and each
> node have 2 AMD64 procs). We use this cluster to run several standard
> tools (non-MPI) and mpiBLAST.
> 
> Until now we have run mpiBLAST outside PBS (using lamboot and mpirun),
> but this has the issue that lamboot does not consider nodes already
> kept busy by PBS (and vice versa), so I tried to run mpiBLAST by
> submitting it with qsub, but I'm a bit confused.
> 
> If I run mpiBLAST with mpirun, I still have to launch lamboot with its
> own config file, and this means that mpiBLAST may be run on nodes
> already busy with other jobs.
> 
> If I use mpiexec, I must give a "machine list" saying where to run it,
> so I must know which nodes are free.
> 
> Is there any way to tell PBS to use (for example) 8 procs on free
> nodes, without specifying directly which nodes are free? PBS is
> supposed to do this work, so I expect that launching mpiBLAST (using 8
> processors) via PBS means that PBS will automatically schedule it on
> free nodes (or at least nodes with 1 free proc), and not that I need
> to give a lamboot file (for mpirun) or a "machine list" for mpiexec
> that contains only free nodes.
> 
> Where am I wrong?
> 
> Salvatore Di Nardo 
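
(For illustration only: a minimal sketch of what such a job script could
look like, assuming Torque/PBS with LAM/MPI. The script name, resource
counts, database, and query files are placeholders, and depending on the
LAM version the duplicate host lines in $PBS_NODEFILE may need to be
filtered, e.g. with "sort -u", before lamboot. PBS writes the hosts it
allocated to the file named in $PBS_NODEFILE, so that file can be handed
to lamboot instead of a hand-maintained machine list.)

    #!/bin/sh
    #PBS -N mpiblast
    #PBS -l nodes=4:ppn=2            # ask PBS for 4 free nodes, 2 procs each

    cd $PBS_O_WORKDIR                # run from the directory qsub was called in
    NPROCS=`wc -l < $PBS_NODEFILE`   # number of processors PBS allocated

    lamboot $PBS_NODEFILE            # boot LAM only on the nodes PBS handed out
    mpirun -np $NPROCS mpiblast -p blastn -d mydb -i query.fa -o result.txt
    lamhalt                          # shut LAM down when the job finishes

(Submitted with something like "qsub mpiblast.sh", PBS itself picks the
free nodes; the script never names any host explicitly.)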
> 

