Difference between revisions of "Building on RHEL"

From GridPACK
These instructions have been superseded by those [[Building_on_CentOS7| here]].  There still may be useful information, however, so they remain online.  <span style="color:red">Read and follow at your own risk!</span>
== Preliminaries ==
The Red Hat Enterprise Linux (RHEL) operating system is found on many Linux workstations. The instructions and scripts on this page have been tested on a workstation running RHEL 7.5 with the GNU 4.8.5 compilers. These scripts will probably work with other versions of RHEL and GNU with minor modifications, but they have only been explicitly verified for this configuration. We also include some notes on building GridPACK using Intel compilers. The scripts described below have been verified for Boost 1.65.0, PETSc 3.7.6 and Global Arrays 5.6. They should work for other versions with little or no adjustment. This build is very similar to the build on a [[Building_on_a_Linux_Cluster | Linux cluster]]. The main difference is that in this build, all libraries and include files are installed in a single directory, while for the cluster build, libraries are installed in separate directories. Which choice to use is mostly a matter of user preference.

This build assumes that all library and include files are located in a directory <tt>/home/userA/software/linux64</tt> and that all library tar files have been downloaded into a directory <tt>/home/userA/software</tt>. These directory names are just examples and users should select directories that reflect their own system when building external libraries. Both directories will need to be created by the user before attempting the rest of the build. The library directory can be specified for all builds by defining an environment variable <tt>PREFIX</tt>. If you are using the C-shell environment, then this variable can be set using
   setenv PREFIX /home/userA/software/linux64
If you are using the Bourne-shell, then this variable can be set using

   export PREFIX=/home/userA/software/linux64
 
Using either shell, you can verify that this variable is present and has been set correctly by typing

   env | grep PREFIX
 
You should see the line

   PREFIX=/home/userA/software/linux64
 
in the output. The <tt>PREFIX</tt> variable is used to direct the output of the different builds to the same place. Alternatively, you can install libraries in individual directories. See the documentation on building GridPACK on a [[Building_on_a_Linux_Cluster | Linux cluster]] for instructions on how to do this.

Before starting any builds, the following variables need to be defined in the environment

     setenv CC gcc
     setenv CFLAGS "-pthread"
     setenv CXX g++
 
     setenv CXXFLAGS "-pthread"
     setenv FC gfortran
     setenv FFLAGS "-pthread"
  
 
Again, this is for a C-shell environment. For the Bourne-shell, use

     export CC=gcc
     export CFLAGS="-pthread"
     export CXX=g++
 
     export CXXFLAGS="-pthread"
     export FC=gfortran
     export FFLAGS="-pthread"
  
 
If you are using Intel compilers, the environment settings are

     setenv CC icc
     setenv CFLAGS "-pthread"
     setenv CXX icpc
 
     setenv CXXFLAGS "-pthread"
     setenv FC ifort
     setenv FFLAGS "-pthread"
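The compiler settings above can be sanity-checked before starting any of the builds. The sketch below (the helper name <code>check_compiler</code> is just an illustration, and the defaults are this page's GNU example names) confirms that the compilers named in <tt>CC</tt>, <tt>CXX</tt> and <tt>FC</tt> actually resolve on the PATH:

```shell
# Sketch: verify that the compilers selected above are on the PATH.
# check_compiler just wraps command -v; it prints nothing itself.
check_compiler() {
  command -v "$1" >/dev/null 2>&1
}

# Default to the GNU names used on this page if the variables are unset.
for c in "${CC:-gcc}" "${CXX:-g++}" "${FC:-gfortran}"; do
  if check_compiler "$c"; then
    echo "found $c"
  else
    echo "missing $c -- fix your environment before building"
  fi
done
```

If any compiler is reported missing, install it or adjust the environment variables before attempting the builds below.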
  
 
== Building CMake ==
[http://www.cmake.org/ CMake] is most likely already included on your system. However, if it is not, or the version is too old, you can build it using the commands below. You can test for CMake on your system by typing
  which cmake

You should see something like

  /usr/bin/cmake

If you get a message saying that the command is not found, then CMake is not on your system and you will need to build it. You can check the CMake version by typing

  cmake -version

If the version is older than 2.8.12, you will need to compile a newer version.
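This version check can be scripted. The sketch below (the helper name <code>version_ge</code> is just an illustration) compares the installed CMake version against the 2.8.12 minimum using <code>sort -V</code>:

```shell
# Sketch: report whether the installed cmake meets the minimum version.
version_ge() {
  # True if $1 >= $2 in version order (uses sort -V for the comparison).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=2.8.12
# Third field of "cmake version X.Y.Z"; empty if cmake is absent.
installed=$(cmake --version 2>/dev/null | head -n1 | awk '{print $3}')
if version_ge "${installed:-0}" "$required"; then
  echo "cmake $installed is new enough"
else
  echo "cmake missing or older than $required; build it as shown below"
fi
```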
The commands below have been used to build CMake version 2.8.12. You will need to start by downloading the CMake tar file (e.g. cmake-2.8.12.tar.gz) from the [https://cmake.org/download/ download site]. Information on downloading tar files can be found [[Software Required to Build GridPACK#Linux_Help | here]].
  
 
   tar xvzf cmake-2.8.12.tar.gz
   cd cmake-2.8.12
   ./bootstrap --prefix=$PREFIX
   make
   make test
   make install
  
Some tests may fail. In our tests, we saw

   The following tests FAILED:
            25 - FindPackageTest (Failed)
           230 - CMakeOnly.AllFindModules (Failed)
   Errors while running CTest

This does not seem to affect configuration of GridPACK.
 
== Building MPI ==

For most workstations, it is likely that you will need to build MPI yourself. The script below has been used to build [http://www.open-mpi.org/software/ompi/v1.10/ OpenMPI 1.10.2] using the GNU 4.8.5 compilers. You will need to download the openmpi-1.10.2.tar.gz file from the [http://www.open-mpi.org/software/ompi/v1.10/ download page] and then untar it into a local directory. Other versions of OpenMPI will probably work just as well and you should pick a recent version with the latest bug fixes. Information on downloading tar files and creating scripts on Linux can be found [[Software Required to Build GridPACK#Linux_Help | here]].
  
    ./configure --enable-shared=no --enable-static=yes \
        --prefix="$PREFIX" CC=gcc CXX=g++ FC=gfortran
    make -j 4
    make check
    make install

You can configure Open MPI using Intel compilers with the command

    ./configure --enable-shared=no \
                --enable-static=yes \
                --prefix="$PREFIX" \
                CC=icc CXX=icpc F77=ifort FC=ifort
 
Note that the environment variables are declared as part of the configure command in this example, but they could also be set using the <code>setenv</code> command. After building MPI, you need to make sure that MPI is in your path by setting the following environment variables. These are usually set in your <tt>.cshrc</tt> or <tt>.bashrc</tt> files so that they are automatically in your environment whenever you log in. If you are using the C-shell, add the commands

    setenv PATH /home/userA/software/linux64/bin:${PATH}
    setenv MANPATH /home/userA/software/linux64/share/man:${MANPATH}
    setenv LD_LIBRARY_PATH /home/userA/software/linux64/lib:${LD_LIBRARY_PATH}
  
 
to your <tt>.cshrc</tt> file. If you are using the Bourne shell, use the commands

    export PATH=/home/userA/software/linux64/bin:${PATH}
    export MANPATH=/home/userA/software/linux64/share/man:${MANPATH}
    export LD_LIBRARY_PATH=/home/userA/software/linux64/lib:${LD_LIBRARY_PATH}
  
 
in your <tt>.bashrc</tt> file. If your environment is correctly set, you should have commands like <tt>mpicc</tt> and <tt>mpicxx</tt> in your path. You can test for this by typing

    which mpicc
  
at the Linux command prompt. You should see a path pointing to <tt>mpicc</tt>. In this example it would be

    /home/userA/software/linux64/bin/mpicc
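You can also check that the wrappers found first in your PATH come from the install under <tt>$PREFIX</tt> rather than from some other MPI on the system. The helper below is only a sketch (<code>check_tool</code> is a hypothetical name, and the directory follows this page's example):

```shell
# Sketch: confirm an MPI wrapper resolves to the expected install directory.
check_tool() {
  # check_tool <name> <expected-bin-dir>
  found=$(command -v "$1" 2>/dev/null)
  if [ -z "$found" ]; then
    echo "$1: not found -- is \$PREFIX/bin in your PATH?"
    return 1
  fi
  case "$found" in
    "$2"/*) echo "$1 -> $found (ok)"; return 0 ;;
    *)      echo "$1 -> $found (shadowed by another MPI install?)"; return 1 ;;
  esac
}

PREFIX=${PREFIX:-/home/userA/software/linux64}
for t in mpicc mpicxx mpif90; do
  check_tool "$t" "$PREFIX/bin" || true
done
```

A wrapper that resolves outside <tt>$PREFIX/bin</tt> usually means a system MPI appears earlier in your PATH than the one you just built.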
  
 
== Building Boost ==

Download the Boost tar file (e.g. boost_1_65_0.tar.gz). Older versions of Boost can be found [https://www.boost.org/users/history/ here]. The following script has been verified with Boost 1.65.0, but should work with many other versions. Information on downloading tar files and creating scripts can be found [[Software Required to Build GridPACK#Linux_Help | here]].
  
 
     echo "using mpi ;" > ~/user-config.jam
     sh ./bootstrap.sh \
         --prefix="$PREFIX" \
         --without-icu \
         --with-toolset=gcc \
         --without-libraries=python,log
     ./b2 -a -d+2 link=static stage
     ./b2 -a -d+2 link=static install
     rm ~/user-config.jam

Run this script from the top level Boost directory. This should configure, build and install the Boost libraries.

(In Boost 1.54, [http://www.boost.org/doc/libs/1_54_0/libs/log/doc/html/index.html Boost.Log] was added. This uses some compiler capabilities not supported by the compilers that come with older versions of RHEL, so Boost.Log is disabled. Boost seems to work fine this way with older versions of RHEL.)
  
 
To build using the Intel compilers, substitute <code>--with-toolset=intel-linux</code> for <code>--with-toolset=gcc</code>. You may also run into problems with the name of the MPI wrapper for the C++ compiler. If it looks like configure is not finding <code>mpic++</code>, then replace the first line in the above script with

     echo "using mpi : /absolute/path/to/mpi/C++/wrapper ;" > ~/user-config.jam

Make sure you include the spaces around "<code>:</code>" and before "<code>;</code>".
  
Boost has a tendency to use cutting-edge features of the C++ compiler, so it is a good idea to use a compiler version that was released at about the same time as the Boost version you are working with. If you are having problems, you may have better luck moving to an earlier version of Boost. If the Boost build fails, you should delete the entire Boost directory and start from scratch after making corrections to your build script. Restarting a failed Boost build does not appear to work in most instances. Additional tips on troubleshooting Boost builds can be found [[Troubleshooting GridPACK Builds#Boost | here]].
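The delete-and-re-extract step can be scripted. The helper below is a sketch (<code>fresh_boost_tree</code> is a hypothetical name; the directory and tarball names follow this page's example) that wipes the failed tree and unpacks a pristine copy from the tarball:

```shell
# Sketch: remove a failed Boost tree and re-extract it from the tarball.
fresh_boost_tree() {
  # fresh_boost_tree <work-dir> <tarball>
  name=$(basename "$2" .tar.gz)   # e.g. boost_1_65_0
  rm -rf "${1:?}/$name" &&        # delete the old tree entirely
  tar xzf "$2" -C "$1" &&         # unpack a clean copy
  echo "re-extracted $name under $1"
}

# Example call using this page's directory layout:
# fresh_boost_tree "$HOME/software" "$HOME/software/boost_1_65_0.tar.gz"
```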
 
  
 
== Building PETSc ==

Recent versions of PETSc can be obtained at the [https://www.mcs.anl.gov/petsc/download/index.html PETSc download site]. GridPACK works with several recent versions of PETSc, including 3.4-3.8. The configure line below will build PETSc along with several external libraries. Additional information on downloading tar files and creating scripts can be found [[Software Required to Build GridPACK#Linux_Help | here]].
 
  
 
     ./configure \
       PETSC_ARCH=arch-linux2-complex-opt \
       --with-scalar-type=complex \
       --with-fortran-kernels=1 \
       --download-superlu_dist \
       --download-superlu \
       --download-parmetis \
 
       --download-metis \
       --download-suitesparse \
       --download-fblaslapack \
 
       --with-clanguage=c++ \
       --with-shared-libraries=0 \
       --with-x=0 \
       --with-mpiexec=mpiexec \
       --prefix="$PREFIX/petsc-3.7.6" \
       --with-debugging=1 CFLAGS=-pthread CXXFLAGS=-pthread FFLAGS=-pthread
PETSc does not pay any attention to environment variables, so the compiler flags need to be included directly in the configure command.

The example applications in the GridPACK test suite require [http://crd-legacy.lbl.gov/~xiaoye/SuperLU/ SuperLU] in PETSc. GridPACK also requires ParMETIS and METIS, so you should include them as part of PETSc, if possible, instead of building them separately (see below). This example uses PETSc version 3.7.6 and was built with SuperLU, ParMETIS and METIS. The SuiteSparse libraries provide the best results for some of the applications and are also included in the build.

The use of the <tt>--prefix</tt> option can be a little tricky. If you have run configure before, make sure that the directory corresponding to the <tt>PETSC_ARCH</tt> variable has been removed from under the top level PETSc directory (this is different from the install directory). After running configure, the configure script will print out a message
  
 
     xxx=========================================================================xxx
       Configure stage complete. Now build PETSc libraries with (gnumake build):
       make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
     xxx=========================================================================xxx
  
 
Users can cut-and-paste the make command and run it at the command prompt. If this command is successful, just keep following the instructions from the PETSc build for the next step. The commands that need to be executed are

     make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
     make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt install
  
Once these commands are completed, there should be a <tt>petsc-3.7.6</tt> directory under your <tt>PREFIX</tt> directory. If you run into problems, some additional tips can be found [[Troubleshooting GridPACK Builds#PETSc | here]].
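A quick way to confirm the install landed where GridPACK will later look for it is to check for headers and libraries under the install prefix. This is only a sketch (<code>petsc_install_ok</code> is a hypothetical helper; the directory name follows this page's example):

```shell
# Sketch: check that a PETSc install directory contains headers and libs.
petsc_install_ok() {
  # petsc_install_ok <install-dir>
  [ -f "$1/include/petsc.h" ] && [ -d "$1/lib" ]
}

if petsc_install_ok "${PREFIX:-/home/userA/software/linux64}/petsc-3.7.6"; then
  echo "PETSc install looks complete"
else
  echo "PETSc install incomplete: check the configure and make output"
fi
```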
  
 
== Building ParMETIS ==
  
 
== Building Global Arrays ==

GA releases are available from [https://github.com/GlobalArrays/ga/releases here]. Information on downloading tar files and creating scripts can be found [[Software Required to Build GridPACK#Linux_Help | here]]. Download the GA tar file (ga-5.6.5.tar.gz) into a directory <code>GA_HOME</code>. This configuration command should work for both GNU and Intel compilers if you have the CC, CXX, FC and F77 environment variables set.
  
 
     cd $(GA_HOME)
  
 
== Building GridPACK ==

Download GridPACK from the [https://github.com/GridOPTICS/GridPACK/releases GridPACK release page] and untar the release using

   tar xvf gridpack-X.X.tar.gz

The top level GridPACK directory is denoted below by the variable <tt>$GRIDPACK</tt>.

It is a good idea to build GridPACK in a separate directory under the GridPACK source tree. The example below assumes that a directory called <tt>build</tt> has been created under <tt>$GRIDPACK/src</tt> and that you have cd'd into this directory. Make sure that the environment variables described at the top of this page have been set. You should be able to configure GridPACK with the following two commands. More information on GridPACK configuration options can be found [[How to Build GridPACK#Configuring GridPACK | here]].
  
 
   rm -rf CMake*
 
         -D CMAKE_BUILD_TYPE:STRING='RELWITHDEBINFO' \
         -D MPIEXEC_MAX_NUMPROCS:STRING="2" \
         -D CMAKE_VERBOSE_MAKEFILE:BOOL=TRUE \
         ..
  
  
 
The <tt>rm -rf CMake*</tt> at the beginning of this script removes any CMake files that might be left over from a previous attempt at configuring GridPACK. If the GridPACK configuration fails for any reason, it is a good idea to remove these files before trying to configure again.

If you built ParMETIS separately from PETSc, you will need to add the following line to the GridPACK configuration script

        -D PARMETIS_DIR:STRING=$PREFIX \
  
 
If the <tt>cmake</tt> command is successful, GridPACK can be built and installed by typing

     make
     make install
  
If compilation is successful, the [[How to Build GridPACK#Running Tests|unit tests]] and/or [[How to Build GridPACK#Running_the_Powerflow_Example.28s.29|example applications]] can be run.
Latest revision as of 17:19, 17 October 2018


These instructions have been superseded by those here. There still may be useful information, however, so they remain online. Read and follow and your own risk!

Preliminaries

The Red Hat Enterprise Linux (RHEL) operating system is found on many Linux workstations. The instructions and scripts on this page have been tested on a workstation using RHEL 7.5 with GNU 4.8.5 compilers. These scripts will probably work with other versions of RHEL and other versions of GNU with minor modifications, but they have been explicitly verified for this operating system. We also include some notes on building GridPACK using Intel compilers. The scripts described below have been verified for the Boost 1.65.0, PETSc 3.7.6 and Global arrays 5.6. They should work for other versions with little or no adjustments. This build is very similar to the build on a Linux cluster. The main difference is that in this build, all libraries and include files are installed in a single directory. For the cluster build, libraries are installed in separate directories. Which choice to use is mostly up to user preference.

This build assumes that all library and include files are located in a directory /home/userA/software/linux64 and that all library tar files have been downloaded into a directory /home/userA/software. These directory names are just an arbitrary example and users should select directories that reflect their own system when building external libraries. Both directories will need to be created by the user before attempting the rest of the build. The library directory can be specified for all builds by defining an environment variable PREFIX. If you are using the C-shell environment, then this variable can be set using

 setenv PREFIX /home/userA/software/linux64

If you are using the Bourne-shell, then this variable can be set using

 export PREFIX=/home/userA/software/linux64

Using either shell, you can verify that this variable is present and has been set correctly by typing

 env | grep PREFIX

You should see the line

 PREFIX=/home/userA/software/linux64

in the output. The PREFIX variable is used to direct the output of the different builds to the same place. Alternatively, you can install libraries in individual directories. See the documentation on building GridPACK on a Linux cluster for instructions on how to do this.

Before starting any builds, the following variables need to be defined in the environment

     setenv CC gcc
     setenv CFLAGS "-pthread"
     setenv CXX g++
     setenv CXXFLAGS "-pthread"
     setenv FC gfortran
     setenv FFLAGS "-pthread"

Again, this is for a C-shell environment. For the Bourne-shell, use

     export CC=gcc
     export CFLAGS="-pthread"
     export CXX=g++
     export CXXFLAGS="-pthread"
     export FC=gfortran
     export FFLAGS="-pthread"

If you are using Intel compilers, the environment settings are

     setenv CC icc
     setenv CFLAGS "-pthread"
     setenv CXX icpc
     setenv CXXFLAGS "-pthread"
     setenv FC ifort
     setenv FFLAGS "-pthread"

Building CMake

CMake is most likely already included on your system. However, if it is not, or the version is too old, you can build it using the commands below. You can test for CMake on your system by typing

 which cmake

You should see something like

 /usr/bin/cmake

If you get a message saying that command is not found, then CMake is not on your system and you will need to build it. You can check the CMake version by typing

 cmake -version

If the version is older than 2.8.12, you will need to compile a newer version.

The commands below have been used to build CMake version 2.8.12. You will need to start by downloading the CMake tar file (e.g. cmake-2.8.12.tar.gz) from the download site. Information on downloading tar files can be found here.

 tar xvzf cmake-2.8.12.tar.gz
 cd cmake-2.8.12
 ./bootstrap --prefix=$PREFIX
 make
 make test
 make install

Some tests may fail. In our tests, we saw

 The following tests FAILED:
          25 - FindPackageTest (Failed)
         230 - CMakeOnly.AllFindModules (Failed)
 Errors while running CTest

This does not seem to affect configuration of GridPACK.

Building MPI

For most workstations, it is likely that you will need to build MPI yourself. The script below has been used to build OpenMPI 1.10.2 using the GNU 4.8.5 compilers. You will need to download the openmpi-1.10.2.tar.gz file from the download page and then untar it into a local directory. Other versions of OpenMPI will probably work just as well and you should pick a recent version with the latest bug fixes. Information on downloading tar files and creating scripts on Linux can be found here.

   ./configure --enable-shared=no --enable-static=yes \
       --prefix="$PREFIX" CC=gcc CXX=g++ FC=gfortran
   make -j 4
   make check
   make install

You can configure Open MPI using Intel compilers with the command

   ./configure --enable-shared=no\
               --enable-static=yes\
               --prefix="$PREFIX" \
               CC=icc CXX=icpc F77=ifort FC=ifort

Note that the environment variables are declared as part of the configure command in this example, but they could also be set using the setenv command. After building MPI, you need to make sure that MPI is in your path by setting the following environment variables. These are usually set in your .cshrc or .bashrc files so that they are automatically in your environment whenever you log in. If you are using the c-shell, add the commands

   setenv PATH /home/userA/software/linux64/bin:${PATH}
   setenv MANPATH /home/userA/software/linux64/share/man:${MANPATH}
   setenv LD_LIBRARY_PATH /home/userA/software/linux64/lib:${LD_LIBRARY_PATH}

to your .cshrc file. If you are using the Bourne shell, use the commands

   export PATH=/home/userA/software/linux64/bin:${PATH}
   export MANPATH=/home/userA/software/linux64/share/man:${MANPATH}
   export LD_LIBRARY_PATH=/home/userA/software/linux64/lib:${LD_LIBRARY_PATH}

in your .bashrc file. If your environment is correctly set, you should have functions like mpicc and mpicxx in your path. You can test for this by typing

   which mpicc

in the Linux command prompt. You should see a path pointing to mpicc. In this example it would be

   /home/userA/software/linux64/bin/mpicc

Building Boost

Download the Boost tarfile (e.g. boost_1_65_0.tar.gz). Older versions of Boost can be found here. The following script has been verified with Boost 1.65.0, but should work with many other versions. Information on downloading tar files and creating scripts can be found here.

   echo "using mpi ;" > ~/user-config.jam
   sh ./bootstrap.sh \
       --prefix="$PREFIX" \
       --without-icu \
       --with-toolset=gcc \
       --without-libraries=python,log
   ./b2 -a -d+2 link=static stage
   ./b2 -a -d+2 link=static install
   rm ~/user-config.jam

Run this script from the top level Boost directory. This should configure, build and install the Boost libraries.

(Boost.Log was added in Boost 1.54. It uses compiler features not supported by the compilers shipped with older versions of RHEL, so Boost.Log is disabled here. Boost appears to work fine this way on older RHEL releases.)

To build using the Intel compilers, substitute --with-toolset=intel-linux for --with-toolset=gcc. You may also run into problems with the name of the MPI wrapper for the C++ compiler. If it looks like configure is not finding mpic++ then replace the first line in the above script with

   echo "using mpi : /absolute/path/to/mpi/C++/wrapper ;" > ~/user-config.jam

Make sure you include the spaces around ":" and before ";".

Boost has a tendency to use cutting-edge features of the C++ compiler so it is a good idea to use a compiler version that was released at the same time as the Boost version you are working with. If you are having problems, you may have better luck moving to an earlier version of Boost. If the Boost build fails, you should delete the entire boost directory and start from scratch after making corrections to your build script. Restarting a failed Boost build does not appear to work in most instances. Additional tips on troubleshooting Boost builds can be found here.
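After a successful install, a quick sanity check is to confirm the static libraries landed under the install prefix. This is a sketch; the exact set of library names depends on the Boost version and the libraries you excluded:

```shell
# Expect libboost_mpi.a, libboost_serialization.a, etc. under $PREFIX/lib,
# and the Boost headers under $PREFIX/include/boost
ls "$PREFIX"/lib/libboost_*.a
ls "$PREFIX"/include/boost/version.hpp
```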


== Building PETSc ==

Recent versions of PETSc can be obtained at the PETSc download site. GridPACK works with several recent versions of PETSc, including 3.4-3.8. The configure line below will build PETSc along with several external libraries. Additional information on downloading tar files and creating scripts can be found here.


   ./configure \
      PETSC_ARCH=arch-linux2-complex-opt \
      --with-fortran-kernels=1 \
      --download-superlu_dist \
      --download-superlu \
      --download-parmetis \
      --download-metis \
      --download-suitesparse \
      --download-fblaslapack \
      --with-clanguage=c++ \
      --with-shared-libraries=0 \
      --with-x=0 \
      --with-mpiexec=mpiexec \
      --prefix="$PREFIX/petsc-3.7.6" \
      --with-debugging=1 CFLAGS=-pthread CXXFLAGS=-pthread FFLAGS=-pthread
 

PETSc's configure script ignores compiler environment variables such as CC and CXX, so any compiler selections or flags must be passed directly on the configure command line.
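For example, to point PETSc at specific compilers (the MPI wrappers, say, or the Intel compilers), add the corresponding configure options to the command above. A sketch, assuming the MPI wrappers are in your path:

```shell
# Compiler selections must appear on the configure line itself;
# add these to the options shown above
./configure \
   --with-cc=mpicc \
   --with-cxx=mpicxx \
   --with-fc=mpif90
```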

The example applications in the GridPACK test suite require SuperLU in PETSc. GridPACK also requires ParMETIS and METIS, so you should include them as part of PETSc, if possible, instead of building them separately (see below). This example uses PETSc version 3.7.6 and was built with SuperLU, ParMETIS and METIS. The SuiteSparse libraries provide the best results for some of the applications and are also included in the build.

The use of the --prefix option can be a little tricky. If you have run configure before, make sure that the directory corresponding to the PETSC_ARCH variable has been removed from under the top level PETSc directory (this is different from the install directory). After running configure, the configure script will print out a message

   xxx=========================================================================xxx
     Configure stage complete. Now build PETSc libraries with (gnumake build):
      make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
   xxx=========================================================================xxx

Users can cut and paste the make command and run it at the command prompt. If it succeeds, continue following the instructions printed by the PETSc build for the next step. The commands that need to be executed are

   make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt all
   make PETSC_DIR=/home/userA/software/petsc-3.7.6 PETSC_ARCH=arch-linux2-complex-opt install

Once these commands are completed, there should be a petsc-3.7.6 directory under your PREFIX directory. If you run into problems, some additional tips can be found here.
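A quick way to confirm the install succeeded (a sketch; the exact library names depend on the build options) is to check for the PETSc library and headers under the install prefix:

```shell
# A static build should leave libpetsc.a and the PETSc headers under the prefix
ls "$PREFIX"/petsc-3.7.6/lib/libpetsc*.a
ls "$PREFIX"/petsc-3.7.6/include/petsc.h
```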

== Building ParMETIS ==

It's easiest to include ParMETIS in the PETSc build. The GridPACK configuration will recognize and use ParMETIS from the PETSc installation. However, if you want to build ParMETIS separately, instructions are below. Information on downloading tar files and creating scripts can be found here.

In order to get ParMETIS 4.0 to compile with older GNU compilers, a warning option needs to be removed from one of the build system files. In the top ParMETIS source directory, execute the following command:

   sed -i.org -e 's/-Wno-unused-but-set-variable//g' metis/GKlib/GKlibSystem.cmake

Starting in the ParMETIS source directory, build and install METIS first:

   cd metis
   make config prefix="$PREFIX"
   make
   make install

then build and install ParMETIS:

   cd ..
   make config cc=mpicc cxx=mpicxx prefix="$PREFIX"
   make 
   make install

Do some tests to make sure it works:

   cd Graphs
   mpirun -np 4 ptest rotor.graph rotor.graph.xyz
   mpirun -np 4 ptest rotor.graph
   mpirun -np 8 ptest bricks.hex3d

The last test appears to hang but does not otherwise appear to cause any problems.

== Building Global Arrays ==

GA releases are available from here. Information on downloading tar files and creating scripts can be found here. Download the GA tar file (ga-5.6.5.tar.gz) into a directory GA_HOME. This configuration command should work for both GNU and Intel compilers if you have the CC, CXX, FC, F77 environment variables set.

    cd $GA_HOME
   ./configure --with-mpi-ts --disable-f77 --without-blas --enable-cxx --enable-i4 --prefix="$PREFIX"
   make
   make install

This configure script builds Global Arrays with the two-sided messaging runtime. This runtime is suitable for small numbers of processors but performs poorly on larger systems, in which case the progress ranks runtime should be used instead. See the page on building on a Linux cluster for more details.
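For reference, a progress-ranks build replaces --with-mpi-ts with --with-mpi-pr. This is a sketch; see the Linux cluster page for the verified configuration:

```shell
# Progress ranks runtime: one rank per node is dedicated to communication,
# so mpirun must be given one extra process per node at run time
./configure --with-mpi-pr --disable-f77 --without-blas --enable-cxx --enable-i4 --prefix="$PREFIX"
make
make install
```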

== Building GridPACK ==

Download GridPACK from the GridPACK release page and untar the release using

   tar xvf gridpack-X.X.tar.gz

The top level GridPACK directory is denoted below by the variable $GRIDPACK.
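Setting this up takes only a few shell commands. The directory name here is illustrative and should match wherever you untarred the release:

```shell
# Point $GRIDPACK at the top of the untarred source tree
# and create the out-of-source build directory under src
GRIDPACK=$PWD/gridpack-X.X     # adjust to the actual untarred directory name
mkdir -p "$GRIDPACK/src/build"
cd "$GRIDPACK/src/build"
```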

It is a good idea to build GridPACK in a separate directory under the GridPACK source tree. The example below assumes that a directory called build has been created under $GRIDPACK/src and that you have cd'd into this directory. Make sure that the environment flags described at the top of this page have been set. You should be able to configure GridPACK with the following two commands. More information on GridPACK configuration options can be found here.

 rm -rf CMake*
 cmake -Wdev \
       -D BOOST_ROOT:STRING=$PREFIX \
       -D PETSC_DIR:STRING=$PREFIX/petsc-3.7.6 \
       -D GA_DIR:STRING=$PREFIX \
       -D USE_PROGRESS_RANKS:BOOL=FALSE \
       -D MPI_CXX_COMPILER:STRING='mpicxx' \
       -D MPI_C_COMPILER:STRING='mpicc' \
       -D MPIEXEC:STRING='mpiexec' \
       -D CMAKE_INSTALL_PREFIX:PATH="$GRIDPACK/src/build/install" \
       -D CMAKE_BUILD_TYPE:STRING='RELWITHDEBINFO' \
       -D MPIEXEC_MAX_NUMPROCS:STRING="2" \
       -D CMAKE_VERBOSE_MAKEFILE:BOOL=TRUE \
       ..

Note the .. at the end of the cmake command. These two periods are pointing to the top level CMakeLists.txt file for the GridPACK build, which is located in the $GRIDPACK/src directory. If you are trying to configure GridPACK from a directory that is not directly underneath $GRIDPACK/src, this location in the cmake command will need to be modified.

The rm -rf CMake* at the beginning of this script removes any CMake files that might be left over from a previous attempt at configuring GridPACK. If the GridPACK configuration fails for any reason, it is a good idea to remove these files before trying to configure again.

If you built ParMETIS separately from PETSc, you will need to add the following line to the GridPACK configuration script

       -D PARMETIS_DIR:STRING=$PREFIX \

If the cmake command is successful, GridPACK can be built and installed by typing

   make
   make install

If compilation is successful, the unit tests and/or example applications can be run.
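Assuming the standard CMake test driver is enabled in the build (an assumption; consult the GridPACK documentation for the authoritative test instructions), the unit tests can be launched from the build directory with:

```shell
# Run the unit test suite through CTest from the build directory
make test
# or, for more detail when a test fails:
ctest --output-on-failure
```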