OpenMPI
NOTE: Information on this page may be out of date.
Installing
Prerequisites
OpenMPI creates Fortran bindings. If you need these, you'll have to install a Fortran compiler (I use Gfortran on Mac; the Gfortran page gives details about installing gfortran on Mac).
Since Fortran is a mess on Mac OS X, and I never use Fortran anyway, I didn't want to do this. So, I used the --disable-mpi-f77 configure option.
Mac Sierra (macOS 10.12)
To install OpenMPI on a Mac:
brew install open-mpi
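Homebrew puts the MPI compiler wrappers on your PATH; a quick sanity check (a minimal sketch, assuming a default Homebrew install) looks like this:
which mpicc mpirun
mpirun --version
mpicc --showme    # prints the underlying compiler command the wrapper invokes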
To use MPI via a Jupyter/IPython notebook, see Jupyter/MPI
Red Hat Linux
You can install OpenMPI on pretty much any flavor of Linux as follows:
#!/bin/sh
./configure \
--prefix=/path/to/openmpi/1.4.3 \
CXX="/usr/bin/g++" \
CC="/usr/bin/gcc"
Mac Leopard (OS X 10.5)
For 10.5, I installed LAM MPI. It has since been dropped in favor of OpenMPI, but I haven't gotten around to installing OpenMPI on that machine.
Mac Snow Leopard (OS X 10.6)
The following is the configure line I used for OpenMPI:
#!/bin/sh
./configure \
--prefix=${HOME}/pkg/openmpi/1.4.3 \
--disable-mpi-f77 \
CC="/usr/bin/gcc" \
CXX="/usr/bin/g++" \
CFLAGS="-arch x86_64" \
CXXFLAGS="-arch x86_64"
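Once it's built and installed, you can ask the wrapper compilers what they actually invoke (--showme is an Open MPI wrapper option):
${HOME}/pkg/openmpi/1.4.3/bin/mpicc --showme
${HOME}/pkg/openmpi/1.4.3/bin/mpic++ --showme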
Note that OpenMPI is picky about compilers.
First, you have to specify exactly which compiler you want to use: don't say gcc, say /usr/bin/gcc.
Second, you have to specify architecture flags for all of the compilers. This is important because otherwise one compiler may default to producing 32-bit objects while another produces 64-bit objects, and the resulting binaries won't link.
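A quick way to check what a given compiler and flag combination actually produces (a throwaway test; the file names are arbitrary):
echo 'int main(void){ return 0; }' > /tmp/archtest.c
/usr/bin/gcc -arch x86_64 /tmp/archtest.c -o /tmp/archtest
file /tmp/archtest    # should report a Mach-O 64-bit executable x86_64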
NOTE: If you want to build both 32-bit and 64-bit binaries, see the #Building Fat Binaries section below.
Building Fat Binaries
A fat binary is a binary that has been built to run on multiple architectures. In the case of Mac Snow Leopard, a fat binary contains both the 32-bit (i386) and the 64-bit (x86_64) versions of the code in a single file.
After struggling with trying to build a fat binary (see #Bad register name below), I found this page on OpenMPI's website: [1]
According to the above page, it's impossible to build a fat binary in a single pass, so you actually have to run configure/make/make install once for each architecture.
To do this, use the script at /path/to/openmpi_source/contrib/dist/macosx-pkg/buildpackage.sh
The magic of the script consists of two parts:
- the lipo -create command, which combines the binaries and libraries created for each architecture into a single universal binary/library (a sketch of this step is shown below)
- some text manipulation to create a universal header file
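For illustration, merging two single-architecture builds by hand looks roughly like this (the build directory names here are hypothetical; the script does the equivalent for every installed file):
lipo -create build-i386/lib/libmpi.dylib build-x86_64/lib/libmpi.dylib \
     -output universal/lib/libmpi.dylib
lipo -info universal/lib/libmpi.dylib    # should list both i386 and x86_64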
The script is run like this:
/path/to/openmpi_source/contrib/dist/macosx-pkg/buildpackage.sh openmpi-x.y.z.tar.gz /path/to/openmpi/prefix
This then creates a zipped .dmg (disk image) file in /tmp, which you can then proceed to install.
You will likely want to edit the script so that it doesn't build for the PPC architecture.
Once you run the buildpackage.sh script, you can double check to make sure it installed correctly. Normally you could run the command file /path/to/mpic++, but that's actually just a wrapper for your regular C++ compiler, so it won't tell you anything about architectures. To actually verify, open /path/to/mpi/include/mpi.h and look for two lines:
#ifdef __i386__
and
#ifdef __x86_64__
If they're both there, it means you've got a build and a header file that will work with either architecture.
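Equivalently, from the command line (the paths are placeholders for wherever the package installed; the library name assumes the usual libmpi.dylib):
grep -nE '__i386__|__x86_64__' /path/to/mpi/include/mpi.h
file /path/to/mpi/lib/libmpi.dylib    # should list both i386 and x86_64 slices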
buildpackage.sh Errors
If you see an error like this:
--> Running configure: /tmp/buildpackage-63899/openmpi-1.4.3/configure CFLAGS="-arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk" CXXFLAGS="-arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk" OBJCFLAGS="-arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk" --prefix=/Users/charles/pkg/openmpi/1.4.3 --disable-mpi-f77 --without-cs-fs --enable-mca-no-build=ras-slurm,pls-slurm,gpr-null,sds-pipe,sds-slurm,pml-cm NM="nm -p" --build=i386-apple-darwin10.7.0 --host=i386-apple-darwin10.7.0
*** Problem running configure - aborting!
*** See /tmp/buildpackage-NNNNN/configure.out-i386 for help.
It is most likely because the directory /Developer/SDKs/MacOSX10.4u.sdk does not exist!
If this is the case, then when you look in /tmp/buildpackage-NNNNN/build-ARCH/config.log you will see a line like:
configure:6407: checking for C compiler default output file name
configure:6429: gcc -DNDEBUG -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk conftest.c >&5
ld: library not found for -lcrt1.10.6.o
collect2: ld returned 1 exit status
configure:6433: $? = 1
configure:6471: result:
configure: failed program was:
The -isysroot flag for gcc specifies the default location for libraries and include files (normally /usr/lib and /usr/include; see the --sysroot portion of the gcc man page, accessed by typing man gcc). Gcc will automatically add the library -lcrt1.10.6.o when the -isysroot flag is used, and since buildpackage.sh is pointing at a directory for 10.4, it can't find any libraries related to 10.6.
To fix, change the buildpackage.sh file. The line:
OMPI_SDK="/Developer/SDKs/MacOSX10.4u.sdk"
should become
OMPI_SDK="/Developer/SDKs/MacOSX10.6.sdk"
Errors
The following are errors I encountered while trying to configure and/or build.
C and Fortran 77 Compilers Not Link Compatible
When I first installed OpenMPI on Snow Leopard, I saw an error like this:
checking if C and Fortran 77 are link compatible... no
**********************************************************************
It appears that your Fortran 77 compiler is unable to link against
object files created by your C compiler. This typically indicates
one of a few possibilities:
  - A conflict between CFLAGS and FFLAGS
  - A problem with your compiler installation(s)
  - Different default build options between compilers (e.g., C
    building for 32 bit and Fortran building for 64 bit)
  - Incompatible compilers
Such problems can usually be solved by picking compatible compilers
and/or CFLAGS and FFLAGS. More information (including exactly what
command was given to the compilers and what error resulted when the
commands were executed) is available in the config.log file in this
directory.
**********************************************************************
configure: error: C and Fortran 77 compilers are not link compatible. Can not continue.
Digging into config.log showed this error:
configure:35860: checking if C and Fortran 77 are link compatible
configure:35910: /usr/bin/gcc -c -O3 -DNDEBUG -finline-functions -fno-strict-aliasing conftest_c.c
configure:35917: $? = 0
configure:35944: /usr/local/bin/gfortran -o conftest conftest.f conftest_c.o >&5
ld: warning: in conftest_c.o, file was built for unsupported file format which is not the architecture being linked (i386)
The problem was twofold: first, I needed the C compiler to produce 64-bit objects, which wasn't happening because it was defaulting to an i386 binary; second, the Fortran compiler was targeting a different architecture, so the link step rejected the C object file as an "unsupported file format".
I fixed the problem by adding the "-arch x86_64" flag to both the C and Fortran compilers.
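In terms of the configure line, that amounts to something like this (a sketch; the gfortran path is whatever your Fortran installation uses):
./configure \
--prefix=${HOME}/pkg/openmpi/1.4.3 \
CC="/usr/bin/gcc" CFLAGS="-arch x86_64" \
CXX="/usr/bin/g++" CXXFLAGS="-arch x86_64" \
F77="/usr/local/bin/gfortran" FFLAGS="-arch x86_64"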
Bad register name
Initially, I tried to create MPI compiler wrappers that would produce fat binaries able to run on either 32-bit or 64-bit platforms. I used a configure line that specified -arch flags for both 32-bit and 64-bit, like this:
#!/bin/sh
./configure \
--prefix=${HOME}/pkg/openmpi/1.4.3 \
CC="/usr/bin/gcc -arch x86_64 -arch i386" \
CXX="/usr/bin/g++ -arch x86_64 -arch i386" \
F77="/usr/local/bin/gfortran -arch x86_64 -arch i386" \
\
CPP="/usr/bin/gcc -E" \
CXXCPP="/usr/bin/g++ -E"
Configure ran fine, but when I got to make I saw this:
atomic-asm.S:5:bad register name `%rbp'
atomic-asm.S:6:bad register name `%rsp'
atomic-asm.S:13:bad register name `%rbp'
atomic-asm.S:14:bad register name `%rsp'
atomic-asm.S:21:bad register name `%rbp'
atomic-asm.S:22:bad register name `%rsp'
atomic-asm.S:30:bad register name `%rdi)'
atomic-asm.S:38:bad register name `%rsi'
atomic-asm.S:39:bad register name `%rdx'
atomic-asm.S:48:bad register name `%rdx'
atomic-asm.S:50:bad register name `%rdx'
lipo: can't open input file: /var/folders/fk/fkDjOi33E4Og4oM1PTL3kk+++TI/-Tmp-//ccEEm4zh.out (No such file or directory)
make[2]: *** [atomic-asm.lo] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1
Googling "bad register name rbp" (or any of the other "bad register name" lines above) turns up a lot of information about building a 32-bit binary with a compiler in 64-bit mode, or vice versa. The cause turned out to be a limitation of the build system (GNU Autotools) that OpenMPI uses: in some cases it simply isn't possible to build fat binaries in one pass. See the #Building Fat Binaries section above for details on how to resolve the issue and build fat binaries.
NOTE: Normally, you can use the techniques specified on the Compiling_Software#Compiler_Specifications page to build fat binaries, but in this case it didn't work.