Building on RHEL
THIS PAGE IS UNDER CONSTRUCTION
Preliminaries
The Red Hat Enterprise Linux (RHEL) operating system is found on many Linux workstations. The instructions and scripts on this page have been tested on a workstation running RHEL 7.5 with the GNU 4.8.5 compilers. These scripts will probably work with other versions of RHEL and other versions of GNU with minor modifications, but they have only been explicitly verified on this configuration. We also include some notes on building GridPACK using the Intel compilers.
The build described below assumes that all library and include files are located in a directory /home/palmer/software/linux64. This directory name reflects a particular system and users should select a directory that reflects their own system when building external libraries. The build below assumes that an environment variable PREFIX has been set. If you are using the C-shell environment, then this variable can be set using
setenv PREFIX /home/palmer/software/linux64
If you are using the Bourne-shell, then this variable can be set using
export PREFIX=/home/palmer/software/linux64
Using either shell, you can verify that this variable is present and has been set correctly by typing
env | grep PREFIX
You should see the line
PREFIX=/home/palmer/software/linux64
in the output. The PREFIX variable can be used to direct the output of the different builds to the same place.
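For example, most of the packages below accept this directory through a --prefix option to their configure script (the exact mechanism varies by package; this line is just an illustration):

./configure --prefix="$PREFIX"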
Before starting any builds, some additional environment variables need to be defined; these are described in the sections below.
Since the software on RHEL5 is old, most things have to be built from scratch and installed somewhere outside of the system directories. However, the documentation below is a good starting point for building on later versions of RHEL and other Linux systems. On this system, the packages that were built were installed in a single directory, which is denoted as $prefix below (this is the same install directory as the PREFIX variable set above). We discuss how to build the libraries for both the GNU and Intel compilers. The GNU 4.1.2 compilers are available by default. If you use the Intel compilers, you should set the environment variables
setenv CC icc
setenv CXX icpc
setenv FC ifort
setenv F77 ifort
setenv CFLAGS "-pthread"
setenv CXXFLAGS "-pthread"
setenv FCFLAGS "-pthread"
setenv F77FLAGS "-pthread"
These are the c-shell directives. Modify accordingly if you are using some other shell.
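For reference, the equivalent Bourne-shell settings would look something like the following (a sketch; adjust the compiler names to your installation):

export CC=icc
export CXX=icpc
export FC=ifort
export F77=ifort
export CFLAGS="-pthread"
export CXXFLAGS="-pthread"
export FCFLAGS="-pthread"
export F77FLAGS="-pthread"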
Building CMake
CMake version 2.8.12 was built for RHEL5 as follows
tar xvzf cmake-2.8.12.tar.gz
cd cmake-2.8.12
./bootstrap --prefix=$prefix
make
make test
make install
Two tests failed:
The following tests FAILED:
         25 - FindPackageTest (Failed)
        230 - CMakeOnly.AllFindModules (Failed)
Errors while running CTest
This does not seem to affect configuration of GridPACK.
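Before moving on, it is worth checking that the newly installed CMake is the one that gets picked up in later steps, for example:

$prefix/bin/cmake --version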
Building MPI
These are the commands for building OpenMPI using the GNU 4.4 compilers
setenv CC gcc4
setenv CXX g++4
setenv F77 gfortran
setenv FC gfortran
./configure --enable-shared=no --enable-static=yes \
    --prefix="$prefix"
make -j 4
make check
make install
This works for the 1.8.2 release. You can configure using the Intel compilers with the command
./configure --enable-shared=no \
    --enable-static=yes \
    --prefix="$prefix" \
    CC=icc CXX=icpc F77=ifort FC=ifort
Note that the environment variables are declared as part of the configure command in this example, but they could also be set using the setenv command.
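A quick sanity check of the MPI installation is to confirm that the wrappers under $prefix/bin are found and that a trivial parallel command runs, for example:

$prefix/bin/mpicc --version
$prefix/bin/mpirun -np 2 hostname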
Building Boost
In Boost 1.54, Boost.Log was added. This uses some compiler capabilities not supported by the ancient RHEL5 compiler, so Boost.Log is disabled. Boost seems to work fine this way on RHEL5.
echo "using mpi ;" > ~/user-config.jam sh ./bootstrap.sh \ --prefix="$prefix" \ --without-icu \ --with-toolset=gcc \ --without-libraries=python,log ./b2 -a -d+2 link=static stage ./b2 -a -d+2 link=static install rm ~/user-config.jam
To build using the Intel compilers, substitute --with-toolset=intel-linux for --with-toolset=gcc. You may also run into problems with the name of the MPI wrapper for the C++ compiler. If it looks like configure is not finding mpic++, then replace the first line in the above script with
echo "using mpi : /absolute/path/to/mpi/C++/wrapper ;" > ~/user-config.jam
Make sure you include the spaces around ":" and before ";".
This script appears to work with later versions of Boost. Boost has a tendency to use cutting-edge features of the C++ compiler, so it is a good idea to use a compiler version that was released at about the same time as the Boost version you are working with. If you are having problems, you may have better luck moving to an earlier version of Boost. If the Boost build fails, you should delete the entire Boost directory and start from scratch. Restarting a failed Boost build does not appear to work in most instances.
Building PETSc
It is a good idea to include SuperLU in PETSc. GridPACK also requires ParMETIS and METIS, so you should include them as part of PETSc instead of building them separately (see below). GridPACK works with several recent versions of PETSc, including versions 3.4-3.7. This example uses PETSc version 3.4.2 and was built with SuperLU:
set path = ("$prefix/bin" $path ) setenv PETSC_DIR $prefix/../petsc-3.4.2 unsetenv PETSC_ARCH python ./config/configure.py \ PETSC_ARCH=arch-linux2-complex-opt \ --with-prefix="$prefix" \ --with-mpi=1 \ --with-cc=mpicc \ --with-fc=mpif90 \ --with-cxx=mpicxx \ --with-c++-support=1 \ --with-c-support=0 \ --with-fortran=0 \ --with-scalar-type=complex \ --with-fortran-kernels=generic \ --download-superlu_dist \ --download-parmetis \ --download-metis \ --with-clanguage=c++ \ --with-shared-libraries=0 \ --with-dynamic-loading=0 \ --with-x=0 \ --with-mpirun=mpirun \ --with-mpiexec=mpiexec \ --with-debugging=0 make PETSC_DIR=$prefix/../petsc-3.4.2 PETSC_ARCH=arch-linux2-complex-opt all make PETSC_DIR=$prefix/../petsc-3.4.2 PETSC_ARCH=arch-linux2-complex-opt test
Building ParMETIS
It's easiest to include ParMETIS in the PETSc build. The GridPACK configuration will recognize and use ParMETIS from the PETSc installation. However, if you want to build ParMETIS separately, instructions are below.
In order to get ParMETIS 4.0 to compile with older GNU compilers, a warning option needs to be removed from one of the build system files. In the top ParMETIS source directory, execute the following command:
sed -i.org -e 's/-Wno-unused-but-set-variable//g' metis/GKlib/GKlibSystem.cmake
Starting in the ParMETIS source directory, build and install METIS first:
cd metis
make config prefix="$prefix"
make
make install
then build and install ParMETIS:
cd ..
make config cc=mpicc cxx=mpicxx prefix="$prefix"
make
make install
Do some tests to make sure it works:
cd Graphs
mpirun -np 4 ptest rotor.graph rotor.graph.xyz
mpirun -np 4 ptest rotor.graph
mpirun -np 8 ptest bricks.hex3d
The last one seemed to hang.
Building Global Arrays
Download GA into a directory GA_HOME. This configuration command should work for both GNU and Intel compilers if you have the CC, CXX, FC, and F77 environment variables set.
cd $GA_HOME
./configure --with-mpi-ts --disable-f77 --without-blas --enable-cxx --enable-i4 --prefix=$GA_HOME
make
make install
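The configure step above assumes that GA_HOME and the compiler variables are already set. A minimal C-shell sketch for the default GNU compilers might look like the following (the GA_HOME path is just a placeholder for wherever you unpacked GA):

setenv GA_HOME /home/palmer/software/ga
setenv CC gcc
setenv CXX g++
setenv FC gfortran
setenv F77 gfortran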
Building GridPACK
It is a good idea to build GridPACK in a separate directory. The example below assumes that a build directory has been created under $GRIDPACK/src and that you have cd'd into this directory.
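Assuming GRIDPACK points at the top of your GridPACK source tree (an assumption for this sketch), the build directory can be created with something like the following, after which the configuration command below is run from inside it:

cd $GRIDPACK/src
mkdir build
cd build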
$prefix/bin/cmake -Wno-dev \
    -D BOOST_ROOT:STRING='/net/flophouse/files0/perksoft/boost-1.53.0' \
    -D PETSC_DIR:STRING='/net/flophouse/files0/perksoft/petsc-3.3-p3' \
    -D PETSC_ARCH:STRING='arch-linux2-cxx-opt' \
    -D PARMETIS_DIR:STRING='/net/flophouse/files0/perksoft/parmetis-4.0' \
    -D GA_DIR:STRING='/net/flophouse/files0/perksoft/ga-5-2' \
    -D MPI_CXX_COMPILER:STRING="$prefix/bin/mpicxx" \
    -D MPI_C_COMPILER:STRING="$prefix/bin/mpicc" \
    -D MPIEXEC:STRING="$prefix/bin/mpiexec" \
    -D CMAKE_BUILD_TYPE:STRING="Debug" \
    -D CMAKE_VERBOSE_MAKEFILE:BOOL=TRUE \
    ..
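Once configuration succeeds, GridPACK is typically built and checked with the usual make targets (this assumes the configuration step above completed without errors):

make
make test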