[torqueusers] Intel mpiexec support

Paul Van Allsburg vanallsburg at hope.edu
Mon Oct 31 13:46:29 MST 2005


David Golden wrote:
> On 2005-10-27 15:38:06 +0200, Jacques Foury wrote:
> 
>>Paul Van Allsburg wrote:
>>
>>
>>>I was using Intel's MPI.  That's going to make it a little more 
>>>difficult to set up and run parallel jobs within Torque.
>>
>>we're using Intel Compilers (C/C++, Fortran) with MPICH2. We just had to 
>>configure/compile mpicc, mpif90... using the Intel compilers, and we're 
>>calling mpiexec within our Torque batch scripts.
>>
> 
> 
> N.B. Intel have an MPI implementation, too. [1]
> 
> Note again that there is confusion over mpiexec: there is an "mpiexec"
> from OSC that interfaces mpich and various relatives to pbs and various
> relatives (e.g. torque).  [2]
> 
> But mpi-2 standardised "mpiexec" as the name for the executables
> that kick off mpi processes (see "too many mpiexecs" at [2]),
> so some mpi-2 implementations now come with executables called
> "mpiexec" that are no relation to the OSC mpiexec[-for-pbs].
> 
> Some MPI implementations with "native" support for the PBS TM API,
> e.g. LAM, don't need the OSC mpiexec[-for-pbs] glue. 
> 
> I've never actually used Intel's MPI, but it's apparently based
> on mpich2 [3]. According to [2], mpiexec[-for-pbs] mpich2 support is also
> compatible with the Intel MPI, but you might want to scan through
> the mpiexec[-for-pbs] mailing list for issues with it and patches.
> 
> If worse comes to worst, though, you can presumably kick off 
> whatever it needs based on the contents of the file
> pointed to by $PBS_NODEFILE. But that can lead to
> inaccurate accounting and, depressingly often, stray processes
> that need to be reaped.
> 
> [1] http://www.intel.com/cd/software/products/asmo-na/eng/cluster/mpi/index.htm
> [2] http://www.osc.edu/~pw/mpiexec/
> [3] http://www.intel.com/cd/software/products/asmo-na/eng/cluster/mpi/219891.htm
> _______________________________________________
> torqueusers mailing list
> torqueusers at supercluster.org
> http://www.supercluster.org/mailman/listinfo/torqueusers
> 

So it may be possible to use the mpiexec from [2] above instead of
having to shoehorn Intel's mpdboot/mpiexec/mpdallexit into a PBS script.
I downloaded the mpiexec [2] release 0.80 and found an earlier patch

http://email.osc.edu/pipermail/mpiexec/2005/000468.html

which is already included in the 0.80 release.  Correct me if I'm
wrong, but I believe I should be able to use this release with Torque
and my installed Intel MPI library.
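
To make the idea concrete, here is a sketch of what such a Torque batch
script might look like. This is untested on my cluster, and the -comm
value for Intel MPI (an MPICH2-style PMI startup) is a guess on my
part; check the mpiexec(1) man page shipped with release 0.80 for the
exact option, and substitute your own Intel-MPI-linked binary for a.out.

```shell
#!/bin/sh
#PBS -N intel-mpi-test
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00

cd "$PBS_O_WORKDIR"

# OSC mpiexec talks to pbs_mom through the TM API, so there is no
# mpdboot/mpdallexit step: Torque itself starts, accounts for, and
# cleans up every rank.  "-comm mpich2-pmi" is an assumption based on
# Intel MPI being mpich2-derived; "a.out" is a placeholder binary.
mpiexec -comm mpich2-pmi ./a.out
```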

Thanks,
Paul

My environment:
  - torque-1.2.0p2
  - Intel Fortran/C compilers
  - Intel Cluster Toolkit 1.0, which installed:
	Intel(R) MPI Library for Linux* version 1.0p-28
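
For completeness, David's $PBS_NODEFILE fallback can be sketched in
shell. The helper below is hypothetical (my own, not part of any
mpiexec); it just derives the rank count and host count from the
nodefile Torque provides, assuming one hostname per allocated slot.

```shell
#!/bin/sh
# count_slots: print "<total slots> <unique hosts>" for a PBS-style
# nodefile, which lists one hostname per processor slot allocated.
count_slots() {
    nodefile=$1
    np=$(grep -c '' "$nodefile")               # total lines = MPI ranks
    hosts=$(sort -u "$nodefile" | grep -c '')  # distinct machines
    echo "$np $hosts"
}

# In a job you would call:  count_slots "$PBS_NODEFILE"
```

As David notes, launching ranks yourself this way (e.g. rsh/ssh loops
over the unique hosts) bypasses the TM API, so Torque cannot account
for or reap the remote processes.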





