Building on RHEL
Preliminaries
The Red Hat Enterprise Linux (RHEL) operating system is found on many Linux workstations. The instructions and scripts on this page have been tested on a workstation running RHEL 7.5 with the GNU 4.8.5 compilers. These scripts will probably work with other versions of RHEL and GNU with minor modifications, but they have only been explicitly verified for this configuration. We also include some notes on building GridPACK using the Intel compilers. The scripts described below have been verified with Boost 1.65.0, PETSc 3.7.6, and Global Arrays 5.6. They should work for other versions with little or no adjustment. This build is very similar to the build on a Linux cluster. The main difference is that in this build, all libraries and include files are installed in a single directory, while for the cluster build, libraries are installed in separate directories. Which choice to use is mostly a matter of user preference.
This build assumes that all library and include files are located in a directory /home/palmer/software/linux64. This directory name is just an arbitrary example and users should select a directory that reflects their own system when building external libraries. The library directory can be specified for all builds by defining an environment variable PREFIX. If you are using the C-shell environment, then this variable can be set using
setenv PREFIX /home/palmer/software/linux64
If you are using the Bourne-shell, then this variable can be set using
export PREFIX=/home/palmer/software/linux64
Using either shell, you can verify that this variable is present and has been set correctly by typing
env | grep PREFIX
You should see the line
PREFIX=/home/palmer/software/linux64
in the output. The PREFIX variable is used to direct the output of the different builds to the same place. Alternatively, you can install libraries in individual directories. See the documentation on building GridPACK on a Linux cluster for instructions on how to do this.
Before starting any builds, the following variables need to be defined in the environment
setenv CC gcc
setenv CFLAGS "-pthread"
setenv CXX g++
setenv CXXFLAGS "-pthread"
setenv FC gfortran
setenv FCFLAGS "-pthread"
Again, this is for a C-shell environment. For the Bourne-shell, use
export CC=gcc
export CFLAGS="-pthread"
export CXX=g++
export CXXFLAGS="-pthread"
export FC=gfortran
export FCFLAGS="-pthread"
If you are using Intel compilers, the environment settings are
setenv CC icc
setenv CFLAGS "-pthread"
setenv CXX icpc
setenv CXXFLAGS "-pthread"
setenv FC ifort
setenv FCFLAGS "-pthread"
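For the Bourne-shell, the corresponding Intel settings would be (a direct translation of the setenv commands above):

export CC=icc
export CFLAGS="-pthread"
export CXX=icpc
export CXXFLAGS="-pthread"
export FC=ifort
export FCFLAGS="-pthread"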
Building CMake
CMake is most likely already included on your system. However, if it is not, or the version is too old, you can build it using the script below. This has been used to build CMake version 2.8.12. Information on downloading tar files can be found here.
tar xvzf cmake-2.8.12.tar.gz
cd cmake-2.8.12
./bootstrap --prefix=$PREFIX
make
make test
make install
Two tests failed:
The following tests FAILED:
         25 - FindPackageTest (Failed)
        230 - CMakeOnly.AllFindModules (Failed)
Errors while running CTest
This does not seem to affect configuration of GridPACK.
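Once the install completes, you can check that the new CMake is the one being picked up. This assumes $PREFIX/bin appears in your PATH; otherwise invoke $PREFIX/bin/cmake directly:

which cmake
cmake --version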
Building MPI
For most workstations, it is likely that you will need to build MPI yourself. The script below has been used to build OpenMPI 1.10.2 using the GNU 4.8.5 compilers. Information on downloading tar files can be found here.
./configure --enable-shared=no --enable-static=yes \
   --prefix="$PREFIX" CC=gcc CXX=g++ FC=gfortran
make -j 4
make check
make install
You can configure Open MPI using Intel compilers with the command
./configure --enable-shared=no \
   --enable-static=yes \
   --prefix="$PREFIX" \
   CC=icc CXX=icpc F77=ifort FC=ifort
Note that the environment variables are declared as part of the configure command in this example, but they could also be set using the setenv command. After building MPI, you need to make sure that MPI is in your path by setting the following environment variables. These are usually set in your .cshrc or .bashrc files so that they are automatically in your environment whenever you log in.
setenv PATH /home/palmer/software/openmpi-1.10.2/bin:${PATH}
setenv MANPATH /home/palmer/software/openmpi-1.10.2/share/man:${MANPATH}
setenv LD_LIBRARY_PATH /home/palmer/software/openmpi-1.10.2/lib:${LD_LIBRARY_PATH}
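These are C-shell commands; the Bourne-shell equivalents would be:

export PATH=/home/palmer/software/openmpi-1.10.2/bin:${PATH}
export MANPATH=/home/palmer/software/openmpi-1.10.2/share/man:${MANPATH}
export LD_LIBRARY_PATH=/home/palmer/software/openmpi-1.10.2/lib:${LD_LIBRARY_PATH}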
If your environment is correctly set, you should have commands such as mpicc and mpicxx in your path.
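A quick way to check this is to ask the shell where the wrappers live and confirm they report the expected underlying compilers (the paths in the output will reflect your own install directory):

which mpicc mpicxx
mpicc --version
mpicxx --version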
Building Boost
In Boost 1.54, Boost.Log was added. This uses some compiler capabilities not supported by the compilers that come with older versions of RHEL, so Boost.Log is disabled here. Boost seems to work fine this way with older versions of RHEL. The following script has been verified with Boost 1.65.0, but should work with many other versions. Information on downloading tar files can be found here.
echo "using mpi ;" > ~/user-config.jam sh ./bootstrap.sh \ --prefix="$PREFIX" \ --without-icu \ --with-toolset=gcc \ --without-libraries=python,log ./b2 -a -d+2 link=static stage ./b2 -a -d+2 link=static install rm ~/user-config.jam
To build using the Intel compilers, substitute --with-toolset=intel-linux for --with-toolset=gcc. You may also run into problems with the name of the MPI wrapper for the C++ compiler. If it looks like configure is not finding mpic++, then replace the first line in the above script with
echo "using mpi : /absolute/path/to/mpi/C++/wrapper ;" > ~/user-config.jam
Make sure you include the spaces around ":" and before ";".
Boost has a tendency to use cutting-edge features of the C++ compiler so it is a good idea to use a compiler version that was released at the same time as the Boost version you are working with. If you are having problems, you may have better luck moving to an earlier version of Boost. If the Boost build fails, you should delete the entire boost directory and start from scratch after making corrections to your build script. Restarting a failed Boost build does not appear to work in most instances.
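A minimal sketch of starting over, assuming the Boost 1.65.0 tar file is still in the parent directory (adjust the file and directory names for the version you downloaded):

cd ..
rm -rf boost_1_65_0
tar xvf boost_1_65_0.tar.gz
cd boost_1_65_0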
Building PETSc
It is a good idea to include SuperLU in PETSc. GridPACK also requires ParMETIS and METIS, so you should include them as part of PETSc instead of building them separately (see below). GridPACK works with several recent versions of PETSc, including versions 3.4-3.8. This example uses PETSc version 3.7.6 and was built with SuperLU, ParMETIS and METIS. The SuiteSparse libraries are also included in this build. Information on downloading tar files can be found here.
./configure \
   PETSC_ARCH=arch-linux2-complex-opt \
   --with-prefix=./ \
   --with-mpi=1 \
   --with-c++-support=1 \
   --with-c-support=0 \
   --with-fortran=0 \
   --with-scalar-type=complex \
   --with-fortran-kernels=generic \
   --download-superlu_dist \
   --download-superlu \
   --download-parmetis \
   --download-metis \
   --download-suitesparse \
   --download-f2cblaslapack=1 \
   --with-clanguage=c++ \
   --with-shared-libraries=0 \
   --with-x=0 \
   --with-mpirun=mpirun \
   --with-mpiexec=mpiexec \
   --prefix="$PREFIX/petsc-3.7.6" \
   --with-debugging=1
The use of the --prefix option can be a little tricky. Before configuring, make sure that the directory corresponding to the PETSC_ARCH variable has been removed from the top-level PETSc source directory (this is different from the install directory and sits directly under the top-level PETSc directory).
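For the configuration above, that amounts to running the following from the top of the PETSc source tree before calling configure:

rm -rf arch-linux2-complex-opt

After running configure, the configure script will print out a message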
xxx=========================================================================xxx
 Configure stage complete. Now build PETSc libraries with (gnumake build):
   make PETSC_DIR=/home/palmer/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
xxx=========================================================================xxx
Users can cut and paste the make command and run it at the command prompt. If this command is successful, just keep following the instructions from the PETSc build for the next step. The commands that need to be executed are
make PETSC_DIR=/home/palmer/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
make PETSC_DIR=/home/palmer/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt install
Once these commands are completed, there should be a petsc-3.7.6 directory under your PREFIX directory.
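You can verify the installation with a quick listing; the exact contents may vary by PETSc version, but you should at least see include and lib subdirectories:

ls $PREFIX/petsc-3.7.6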
Building ParMETIS
It's easiest to include ParMETIS in the PETSc build. The GridPACK configuration will recognize and use ParMETIS from the PETSc installation. However, if you want to build ParMETIS separately, instructions are below. Information on downloading tar files can be found here.
In order to get ParMETIS 4.0 to compile with older GNU compilers, a warning option needs to be removed from one of the build system files. In the top ParMETIS source directory, execute the following command:
sed -i.org -e 's/-Wno-unused-but-set-variable//g' metis/GKlib/GKlibSystem.cmake
Starting in the ParMETIS source directory, build and install METIS first:
cd metis
make config prefix="$PREFIX"
make
make install
then build and install ParMETIS:
cd ..
make config cc=mpicc cxx=mpicxx prefix="$PREFIX"
make
make install
Do some tests to make sure it works:
cd Graphs
mpirun -np 4 ptest rotor.graph rotor.graph.xyz
mpirun -np 4 ptest rotor.graph
mpirun -np 8 ptest bricks.hex3d
The last test appears to hang but does not otherwise seem to cause any problems.
Building Global Arrays
GA releases are available from here. Information on downloading tar files can be found here. Download GA into a directory GA_HOME. This configuration command should work for both GNU and Intel compilers if you have the CC, CXX, FC, and F77 environment variables set.
cd $GA_HOME
./configure --with-mpi-ts --disable-f77 --without-blas --enable-cxx --enable-i4 --prefix="$PREFIX"
make
make install
This configure script builds Global Arrays with the two-sided messaging runtime. This runtime is suitable for small numbers of processors but has poor performance on larger systems. In that case, the progress ranks runtime should be used. See the page on building on a Linux cluster for more details.
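As a sketch, the progress ranks runtime is selected by swapping the --with-mpi-ts flag for --with-mpi-pr (check the Linux cluster page or the GA documentation for the details on your system); if you build GA this way, remember to set USE_PROGRESS_RANKS to TRUE in the GridPACK configuration below:

./configure --with-mpi-pr --disable-f77 --without-blas --enable-cxx --enable-i4 --prefix="$PREFIX"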
Building GridPACK
It is a good idea to build GridPACK in a separate directory under the GridPACK source tree. The example below assumes that a directory called build has been created under $GRIDPACK/src and that you have cd'd into this directory.
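Assuming the environment variable GRIDPACK points at the top of your GridPACK distribution (as it does elsewhere on this page), the build directory can be created with:

cd $GRIDPACK/src
mkdir build
cd build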
rm -rf CMake*
cmake -Wdev \
   -D BOOST_ROOT:STRING=$PREFIX \
   -D PETSC_DIR:STRING=$PREFIX/petsc-3.7.6 \
   -D GA_DIR:STRING=$PREFIX \
   -D USE_PROGRESS_RANKS:BOOL=FALSE \
   -D MPI_CXX_COMPILER:STRING='mpicxx' \
   -D MPI_C_COMPILER:STRING='mpicc' \
   -D MPIEXEC:STRING='mpiexec' \
   -D CMAKE_INSTALL_PREFIX:PATH="$GRIDPACK/src/build/install" \
   -D CMAKE_BUILD_TYPE:STRING='RELWITHDEBINFO' \
   -D MPIEXEC_MAX_NUMPROCS:STRING="2" \
   -D CMAKE_VERBOSE_MAKEFILE:STRING=TRUE \
   ..
Note the .. at the end of the cmake command. This points to the top-level CMakeLists.txt file for the GridPACK build, which is located in the $GRIDPACK/src directory. If you are trying to configure GridPACK from a directory that is not directly underneath $GRIDPACK/src, this location in the cmake command will need to be modified.
The rm -rf CMake* at the beginning of this script removes any CMake files that might be left over from a previous attempt at configuring GridPACK. If the GridPACK configuration fails for any reason, it is a good idea to remove these files before trying to configure again.
If the cmake command is successful, GridPACK can be built and installed by typing
make
make install
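The cmake command above sets MPIEXEC_MAX_NUMPROCS to 2, which controls how many processes the test suite uses. If you want to check the build, the unit tests can typically be run from the build directory with the standard CMake test target (consult the GridPACK documentation if this target is not available in your version):

make test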