MPI

Message Passing Interface (MPI) is a standardized and portable message-passing parallel programming model designed to function on distributed-memory systems.

The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran, C, and C++; bindings also exist for other languages, such as Java. There are several well-tested and efficient implementations of MPI.

ARIS supported MPI implementations

Hardware interface           MPI flavour   module     version                                          Execute
infiniband/shared memory     Intel MPI     intelmpi   5.0.3, 5.1.1, 5.1.2, 5.1.3, 5.1.3.258,           srun
                                                      2017.0, 2017.1, 2017.2, 2017.3, 2017.4,
                                                      2017.5, 2018.0
infiniband/shared memory     OpenMPI       openmpi    1.8.5, 1.8.7, 1.8.8, 1.10.0, 1.10.1, 1.10.2,     srun
                                                      1.10.3, 1.10.4, 1.10.5, 1.10.7, 2.0.0, 2.0.1,
                                                      2.0.2, 2.0.3, 2.1.0, 2.1.1, 2.1.2, 3.0.0

Intel MPI library

Intel MPI library website

Reference manual

Available versions

module avail intelmpi

------------------------------------------------------------ /apps/modulefiles/parallel ------------------------------------------------------------
intelmpi/2017.0         intelmpi/2017.2         intelmpi/2017.4         intelmpi/2018.0         intelmpi/5.1.1          intelmpi/5.1.3
intelmpi/2017.1         intelmpi/2017.3         intelmpi/2017.5         intelmpi/5.0.3(default) intelmpi/5.1.2          intelmpi/5.1.3.258
Language   GNU      INTEL      PORTLAND
C          mpicc    mpiicc     mpicc -cc=pgcc
C++        mpicxx   mpiicpc    mpicc -cxx=pgc++
FORTRAN    mpif90   mpiifort   mpif90 -fc=pgfortran

To select the underlying compiler, use the flag -cc=[compiler] (or -cxx=/-fc= for C++/Fortran, as in the table above).

For example, to use Intel MPI with gcc/4.9.2 as the underlying compiler:

module load gnu/4.9.2
module load intelmpi/5.0.3

Now you can check the underlying compiler, its options, link flags, and libraries:

mpicc -show

gcc -I/apps/compilers/intel/impi/5.0.3.048/intel64/include
-L/apps/compilers/intel/impi/5.0.3.048/intel64/lib/release_mt
-L/apps/compilers/intel/impi/5.0.3.048/intel64/lib -Xlinker --enable-new-dtags
-Xlinker -rpath -Xlinker
/apps/compilers/intel/impi/5.0.3.048/intel64/lib/release_mt -Xlinker -rpath
-Xlinker /apps/compilers/intel/impi/5.0.3.048/intel64/lib -Xlinker -rpath
-Xlinker /opt/intel/mpi-rt/5.0/intel64/lib/release_mt -Xlinker -rpath -Xlinker
/opt/intel/mpi-rt/5.0/intel64/lib -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread
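Putting the pieces together, a compile step with the GNU-backed wrapper might look like the following sketch; hello.c is a hypothetical MPI source file, and the module versions are those from the example above:

```shell
# Load the GNU compiler and the Intel MPI module (as above)
module load gnu/4.9.2
module load intelmpi/5.0.3

# Compile a (hypothetical) MPI source file with the GNU-backed wrapper
mpicc -O2 -o hello_mpi hello.c
```

The resulting binary is then launched with srun inside a batch job, as described in the "Launch programs" section below.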

Similarly, for Intel MPI with icc/15.0.3:

module load intel/15.0.3
module load intelmpi/5.0.3
mpiicc -show

icc -I/apps/compilers/intel/impi/5.0.3.048/intel64/include
-L/apps/compilers/intel/impi/5.0.3.048/intel64/lib/release_mt
-L/apps/compilers/intel/impi/5.0.3.048/intel64/lib -Xlinker --enable-new-dtags
-Xlinker -rpath -Xlinker
/apps/compilers/intel/impi/5.0.3.048/intel64/lib/release_mt -Xlinker -rpath
-Xlinker /apps/compilers/intel/impi/5.0.3.048/intel64/lib -Xlinker -rpath
-Xlinker /opt/intel/mpi-rt/5.0/intel64/lib/release_mt -Xlinker -rpath -Xlinker
/opt/intel/mpi-rt/5.0/intel64/lib -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread

Launch programs

Use the srun command to launch MPI programs.

DO NOT USE mpirun OR mpiexec.
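A minimal SLURM batch script launching an Intel MPI program with srun might look like the following sketch; the job name, node counts, time limit, and the hello_mpi binary are illustrative assumptions, not ARIS defaults:

```shell
#!/bin/bash
#SBATCH --job-name=mpi_test      # job name (illustrative)
#SBATCH --nodes=2                # number of nodes (illustrative)
#SBATCH --ntasks-per-node=20     # MPI tasks per node (depends on node type)
#SBATCH --time=00:10:00          # wall-clock limit

module load gnu/4.9.2
module load intelmpi/5.0.3

# srun derives the task geometry from the SBATCH directives above
srun ./hello_mpi
```

Submit the script with sbatch; srun then places one MPI rank per allocated task without any extra -n argument.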

Intel MPI Runtime Environment Variables

These variables control Intel MPI run-time behavior.

Variable                   Value         Description
I_MPI_DEBUG                0-5           Print out debugging information when the MPI program starts running.
I_MPI_PLATFORM             ivb           Optimize for the Intel® Xeon® processors formerly code-named Ivy Bridge.
I_MPI_PERHOST              N/allcores    Define the process layout: N processes per node, or allcores for all cores on a node.
I_MPI_PIN                  on/off        Turn process pinning on or off.
I_MPI_PIN_PROCESSOR_LIST   (see manual)  Define a processor subset and the mapping rules for MPI processes within this subset.
I_MPI_PIN_DOMAIN           (see manual)  Control process pinning for hybrid MPI/OpenMP applications.
I_MPI_FABRICS              shm:dapl      Network fabrics to be used.
I_MPI_EAGER_THRESHOLD      [nbytes]      Change the eager/rendezvous message size threshold for all devices; default 262144 bytes.

  • If the I_MPI_PIN_DOMAIN environment variable is defined, then the I_MPI_PIN_PROCESSOR_LIST environment variable setting is ignored.
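These variables are typically exported in the job script before the srun line. The sketch below shows one possible combination; the specific values and the hybrid_app binary are illustrative assumptions:

```shell
# Tune Intel MPI behavior before launching (values are illustrative)
export I_MPI_DEBUG=5            # verbose startup and pinning information
export I_MPI_PIN=on             # enable process pinning
export I_MPI_PIN_DOMAIN=omp     # one pinning domain per OpenMP team (hybrid runs);
                                # note: this makes I_MPI_PIN_PROCESSOR_LIST ignored
export I_MPI_FABRICS=shm:dapl   # shared memory intra-node, DAPL inter-node
export OMP_NUM_THREADS=4        # threads per MPI task for hybrid MPI/OpenMP

srun ./hybrid_app               # hybrid_app is a hypothetical binary
```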

OpenMPI

Open Source High Performance Computing

OpenMPI website

Available versions:

module avail openmpi

------------------------------------------------------------ /apps/modulefiles/parallel ------------------------------------------------------------
openmpi/1.10.0/gnu   openmpi/1.10.2/intel openmpi/1.10.5/gnu   openmpi/1.8.5/intel  openmpi/2.0.0/intel  openmpi/2.0.3/gnu    openmpi/2.1.1/intel
openmpi/1.10.0/intel openmpi/1.10.3/gnu   openmpi/1.10.5/intel openmpi/1.8.7/gnu    openmpi/2.0.1/gnu    openmpi/2.0.3/intel  openmpi/2.1.2/gnu
openmpi/1.10.1/gnu   openmpi/1.10.3/intel openmpi/1.10.7/gnu   openmpi/1.8.7/intel  openmpi/2.0.1/intel  openmpi/2.1.0/gnu    openmpi/2.1.2/intel
openmpi/1.10.1/intel openmpi/1.10.4/gnu   openmpi/1.10.7/intel openmpi/1.8.8        openmpi/2.0.2/gnu    openmpi/2.1.0/intel  openmpi/3.0.0/gnu
openmpi/1.10.2/gnu   openmpi/1.10.4/intel openmpi/1.8.5/gnu    openmpi/2.0.0/gnu    openmpi/2.0.2/intel  openmpi/2.1.1/gnu    openmpi/3.0.0/intel

For each version there are two compiled flavors of openmpi: gnu and intel.

To select the underlying compiler, load the corresponding module flavor.

Language   wrapper   GNU module               INTEL module
C          mpicc     openmpi/[version]/gnu    openmpi/[version]/intel
C++        mpicxx    openmpi/[version]/gnu    openmpi/[version]/intel
FORTRAN    mpif90    openmpi/[version]/gnu    openmpi/[version]/intel

For example, to use OpenMPI with gcc/4.9.2 as the underlying compiler, load the gnu openmpi flavor:

module load gnu/4.9.2
module load openmpi/1.8.5/gnu

mpifort -show
gfortran -I/apps/parallel/openmpi/1.8.5/gnu/include -pthread
-I/apps/parallel/openmpi/1.8.5/gnu/lib -Wl,-rpath
-Wl,/apps/parallel/openmpi/1.8.5/gnu/lib -Wl,--enable-new-dtags
-L/apps/parallel/openmpi/1.8.5/gnu/lib -lmpi_usempif08 -lmpi_usempi_ignore_tkr
-lmpi_mpifh -lmpi

Similarly, for OpenMPI with icc/15.0.3:

module load intel/15.0.3
module load openmpi/1.8.5/intel

mpicc -show
icc -I/apps/parallel/openmpi/1.8.5/intel/include -pthread -Wl,-rpath
-Wl,/apps/parallel/openmpi/1.8.5/intel/lib -Wl,--enable-new-dtags
-L/apps/parallel/openmpi/1.8.5/intel/lib -lmpi
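As with Intel MPI, compiling against a loaded OpenMPI flavor uses the ordinary wrappers from the table above. In this sketch, hello.f90 is a hypothetical Fortran MPI source file:

```shell
# Load the GNU compiler and the matching OpenMPI flavor (as above)
module load gnu/4.9.2
module load openmpi/1.8.5/gnu

# Compile a (hypothetical) Fortran MPI source with the GNU-backed wrapper
mpif90 -O2 -o hello_mpi hello.f90
```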

Launch programs

Use the srun command to launch MPI programs.

DO NOT USE mpirun OR mpiexec.

General run-time tuning