[torqueusers] How to run mpirun of intel on torque

David Roman David.Roman at noveltis.fr
Thu Dec 20 09:04:31 MST 2012


In fact, I still have problems with mpiexec.


I have this short test program:


  program simple4
      
      implicit none
      
      integer ierr,my_rank,size,partner
      CHARACTER*50 greeting

      include 'mpif.h'
      integer status(MPI_STATUS_SIZE)


      call mpi_init(ierr)

      call mpi_comm_rank(MPI_COMM_WORLD,my_rank,ierr)
      call mpi_comm_size(MPI_COMM_WORLD,size,ierr)

      write(greeting,100) my_rank, size


      if(my_rank.eq.0) then
         write(6,*) greeting
         do partner=1,size-1
         call mpi_recv(greeting, 50, MPI_CHARACTER, partner, 1, 
     &    MPI_COMM_WORLD, status, ierr)
            write(6,*) greeting
         end do
      else
         call mpi_send(greeting, 50, MPI_CHARACTER, 0, 1, 
     &    MPI_COMM_WORLD, ierr)
      end if

      if(my_rank.eq.0) then
         write(6,*) 'That is all for now!'
      end if

      call mpi_finalize(ierr)

 100  format('Hello World: processor ', I2, ' of ', I2)

      End




If I use Intel's mpirun, I get this result:
roman at hpc-node11:~/test$ mpirun -genv I_MPI_FABRICS_LIST tmi ./a.out 
 Hello World: processor  0 of  8                   
 Hello World: processor  1 of  8                   
 Hello World: processor  2 of  8                   
 Hello World: processor  3 of  8                   
 Hello World: processor  4 of  8                   
 Hello World: processor  5 of  8                   
 Hello World: processor  6 of  8                   
 Hello World: processor  7 of  8                   
 That is all for now!
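
For reference, a minimal Torque submission script for the same run might look like the sketch below; the resource request and the path to mpivars.sh are placeholders for my local setup:

    #!/bin/bash
    #PBS -l nodes=2:ppn=4
    #PBS -N simple4

    # Set up the Intel MPI environment (site-specific path)
    source /opt/intel/impi/bin64/mpivars.sh

    cd $PBS_O_WORKDIR

    # Intel's mpirun should detect the Torque allocation itself;
    # -genv passes the fabric selection to every rank.
    mpirun -genv I_MPI_FABRICS_LIST tmi -n 8 ./a.out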

But now, if I use mpiexec:

roman at hpc-node11:~/test$ /NOVELTIS/roman/bin/mpiexec/bin/mpiexec -v ./a.out 
mpiexec: resolve_exe: using absolute path "./a.out".
node  0: name hpc-node11, cpu avail 4
node  1: name hpc-node10, cpu avail 4
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
 Hello World: processor  0 of  1                   
 That is all for now!
mpiexec: process_start_event: evt 2 task 0 on hpc-node11.
mpiexec: process_start_event: evt 3 task 1 on hpc-node11.
mpiexec: process_start_event: evt 6 task 4 on hpc-node10.
mpiexec: process_start_event: evt 4 task 2 on hpc-node11.
mpiexec: process_start_event: evt 7 task 5 on hpc-node10.
mpiexec: process_start_event: evt 5 task 3 on hpc-node11.
mpiexec: process_start_event: evt 8 task 6 on hpc-node10.
mpiexec: process_start_event: evt 9 task 7 on hpc-node10.
mpiexec: All 8 tasks (spawn 0) started.
mpiexec: wait_tasks: waiting for hpc-node11 hpc-node11 and 6 others.
mpiexec: process_obit_event: evt 10 task 0 on hpc-node11 stat 0.
mpiexec: process_obit_event: evt 12 task 4 on hpc-node10 stat 0.
mpiexec: process_obit_event: evt 14 task 5 on hpc-node10 stat 0.
mpiexec: process_obit_event: evt 16 task 6 on hpc-node10 stat 0.
mpiexec: process_obit_event: evt 17 task 7 on hpc-node10 stat 0.
mpiexec: process_obit_event: evt 11 task 1 on hpc-node11 stat 0.
mpiexec: process_obit_event: evt 13 task 2 on hpc-node11 stat 0.
mpiexec: process_obit_event: evt 15 task 3 on hpc-node11 stat 0.

All processes are launched, but the rank and the number of CPUs are not read correctly: every task reports rank 0 of 1, as if it were an independent single-process run.
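
My guess is that this mpiexec does not speak the process-startup protocol the Intel-compiled binary expects, so each task initialises its own one-rank MPI_COMM_WORLD. For now I get a correct run by calling Intel's mpirun from inside the job, roughly as below (the -f/-n options are Intel/Hydra launcher options and the exact form may differ on your installation):

    # inside the Torque job script
    NP=$(wc -l < $PBS_NODEFILE)
    mpirun -f $PBS_NODEFILE -n $NP ./a.out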

-----Original Message-----
From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On behalf of David Roman
Sent: Thursday, 20 December 2012 13:00
To: 'Torque Users Mailing List'
Subject: Re: [torqueusers] How to run mpirun of intel on torque

I used Torque 4.1.0, and with that version mpiexec failed.
I removed that version and installed mpiexec 2.4.8 (with aptitude under Debian). With this version mpiexec no longer fails; I now have another problem, but I think it is a bug in my own code.

David


-----Original Message-----
From: torqueusers-bounces at supercluster.org [mailto:torqueusers-bounces at supercluster.org] On behalf of Chris Samuel
Sent: Thursday, 20 December 2012 12:55
To: torqueusers at supercluster.org
Subject: Re: [torqueusers] How to run mpirun of intel on torque

On Thu, 20 Dec 2012 10:11:58 AM David Roman wrote:

> Yes, I did this after my reply.
>
> I did this test:
> 
> echo 'hpc-node15: hostname' | mpiexec --comm=none -nostdin -config=-
> 
> But I have a segmentation fault

Umm, yes, quite probably. :-)

> I read the documentation to find my mistake

You should just need to do:

mpiexec program arguments

replacing program and arguments with the executable and any arguments you need to pass to it.
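
For example, in a job script, something like this (the resource request here is just an illustration):

    #PBS -l nodes=2:ppn=4
    cd $PBS_O_WORKDIR
    mpiexec ./a.out

mpiexec picks up the node list and CPU count from the Torque job itself, so you shouldn't need -np or a hostfile.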

Hope that helps!
Chris
-- 
   Christopher Samuel - Senior Systems Administrator
   VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel at unimelb.edu.au   Phone: +61 (0)3 903 55545
         http://www.vlsci.unimelb.edu.au/

_______________________________________________
torqueusers mailing list
torqueusers at supercluster.org
http://www.supercluster.org/mailman/listinfo/torqueusers

