Building on CentOS6
Latest revision as of 14:28, 31 October 2020
This build was done on a VirtualBox virtual machine with a clean install of CentOS 6.9, but it should work on other Linux systems running the CentOS operating system. The compiler and library packages available for CentOS 6.9 are quite old, so it is necessary to build the GridPACK prerequisites from source, and newer versions of those prerequisites need to be avoided. This build was done with the stock compiler (GNU 4.4), but in at least one other case GridPACK was built using Devtoolset-4 (GNU 5.3). You will need sudo privileges for this installation.
In this description, each of the prerequisite software libraries is stored in its own directory. This is the approach taken in the documentation on installing on a Linux cluster. An alternative is to have all libraries and include files in the same directories. This approach is used in the documentation for building on a workstation using Red Hat Linux. The instructions below can be adapted to use this second approach with relatively little effort. Directories or architecture variables that should be set to reflect local user environments are colored red.
We recommend that users put the configuration commands described below into a script instead of typing them in on the command prompt. This minimizes the chance of errors and makes it easier to fix problems if a mistake is made. Some information on how to set up scripts can be found here, as well as information on downloading and unpacking tar files for the libraries described below.
Requisite System Software
Some needed software can be installed with system packages:
sudo yum install environment-modules openmpi openmpi-devel gcc-c++ cmake bzip2-devel git
This installs the module command. Before continuing, it will be necessary to either open another terminal and proceed there, or log out and log in; this makes the module command available for use. OpenMPI is installed as a "module". In order to use mpiexec and the compiler wrappers, the correct module must be loaded using

module load openmpi-x86_64

This makes the MPI compiler wrappers and mpiexec available on the command line. The OpenMPI module must be loaded each time a terminal session is started.
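Because the OpenMPI module must be loaded in every new terminal session, it is worth checking that the wrappers are actually visible before starting a long build. The sketch below is a hypothetical helper, not part of the original instructions; `require` simply tests whether a command is on the PATH and reports the result:

```shell
# Hypothetical sanity check: confirm a command is on the PATH before
# building anything. On a real CentOS 6 system you would check mpicc,
# mpicxx, and mpiexec after running `module load openmpi-x86_64`.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "MISSING: $1 (did you load the OpenMPI module?)" >&2
    return 1
  fi
}

# `sh` stands in for the MPI wrappers so this sketch runs anywhere.
require sh
```

On the actual machine you would call `require mpicc`, `require mpicxx`, and `require mpiexec` instead of `require sh`.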
If this does not work, then OpenMPI can be built by hand. To do this, download an OpenMPI tarfile from the download site and unzip it into a directory in your home file area. Some additional information on how to do this can be found here. In this example, we store the MPI libraries in a local user directory, but if you have administrative privileges, you can install it in system directories.
After cd'ing into the OpenMPI directory, configure MPI using the script
./configure --enable-shared=no --enable-static=yes \
--prefix="/home/admin/Desktop/software/openmpi-1.10.7/install" CC=gcc CXX=g++ FC=gfortran
make -j 4
make install
You will then need to add the MPI compiler wrappers, libraries and man pages to your environment. This can be done in the c-shell using the commands
setenv PATH /home/admin/Desktop/software/openmpi-1.10.7/install/bin:${PATH}
setenv MANPATH /home/admin/Desktop/software/openmpi-1.10.7/install/share/man:${MANPATH}
setenv LD_LIBRARY_PATH /home/admin/Desktop/software/openmpi-1.10.7/install/lib:${LD_LIBRARY_PATH}
Alternatively, if you are using the Bourne shell, the corresponding commands are
export PATH=/home/admin/Desktop/software/openmpi-1.10.7/install/bin:${PATH}
export MANPATH=/home/admin/Desktop/software/openmpi-1.10.7/install/share/man:${MANPATH}
export LD_LIBRARY_PATH=/home/admin/Desktop/software/openmpi-1.10.7/install/lib:${LD_LIBRARY_PATH}
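If these export lines end up in a startup file such as ~/.bashrc, the same directory can be prepended to PATH every time a shell starts, and the PATH slowly grows. A small guard avoids that. This is an optional sketch; the `prepend_path` helper name and the /tmp prefix are illustrative, not part of the original instructions:

```shell
# Hypothetical helper: prepend a directory to PATH only if it is not
# already there, so sourcing the startup file repeatedly is harmless.
prepend_path() {
  case ":${PATH}:" in
    *:"$1":*) ;;                      # already present: do nothing
    *) PATH="$1:${PATH}"; export PATH ;;
  esac
}

MPI_PREFIX=/tmp/openmpi-install       # stand-in for your real install prefix
prepend_path "${MPI_PREFIX}/bin"
prepend_path "${MPI_PREFIX}/bin"      # second call is a no-op

# Count how many times the directory appears in PATH (should be 1).
echo "$PATH" | tr ':' '\n' | grep -c "^${MPI_PREFIX}/bin\$"
```

The same pattern works for MANPATH and LD_LIBRARY_PATH.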
Boost, version 1.54.0
Modern versions of Boost will probably not work here if you are using the default GNU compilers (4.4.7) on CentOS6 or RHEL6 (especially Serialization and MPI). An older version is needed and we recommend 1.54.0. If you are using a later version of the operating system or have access to more recent versions of the compilers, later versions of Boost will probably work. In general, Boost tends to track new features in the C++ standard fairly closely and using newer versions of Boost with older compilers may lead to problems.
You can download Boost version 1.54.0 here. If you are using the OpenMPI module, make sure to load it before running the following script.
echo "using mpi ;" > ~/user-config.jam
sh ./bootstrap.sh \
--prefix="/home/admin/Desktop/software/boost_1_54_0" \
--without-icu \
--with-toolset=gcc \
--without-libraries=python
./b2 -a -d+2 link=static stage
./b2 -a -d+2 link=static install
rm ~/user-config.jam
If you have any problems building Boost, it is generally a good idea to fix your build script, remove the current boost directory, and obtain a new directory from the Boost tar file. Trying to restart a failed build of Boost does not usually work well.
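The advice above — start from a fresh source tree rather than resuming a failed build — can be scripted. The sketch below is a generic illustration that creates a tiny stand-in tarball in a scratch directory so it is self-contained; on a real system the tar file would be the Boost 1.54.0 download and the directory would be boost_1_54_0:

```shell
# Hypothetical clean-rebuild pattern: delete the old source tree and
# re-extract it from the tar file before retrying a failed build.
WORK=$(mktemp -d)
cd "$WORK"

# Create a tiny stand-in "source tarball" so the sketch runs anywhere.
mkdir boost_demo && touch boost_demo/bootstrap.sh
tar czf boost_demo.tar.gz boost_demo

rm -rf boost_demo                 # discard the possibly corrupted tree
tar xzf boost_demo.tar.gz         # fresh copy from the tar file
ls boost_demo/bootstrap.sh        # the rebuilt tree is ready to configure
```

After re-extracting, rerun the corrected bootstrap/b2 script from the top of the fresh directory.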
Global Arrays, version 5.6.5
Modern versions of Global Arrays work fine. In this example, we use GA 5.6.5, but others will work as well. Downloads of recent versions of Global Arrays are available here. This script should be run in the top-level GA directory. Again, if you are using the OpenMPI module, be sure to load it before running this script.
./configure \
--enable-cxx \
--disable-f77 \
--enable-i4 \
--with-mpi \
--with-mpi-ts \
--enable-autodetect=yes \
--prefix="/home/admin/Desktop/software/ga-5.6.5" \
--without-blas \
--without-lapack \
--without-scalapack \
--enable-shared=no \
--enable-static=yes \
MPICC=mpicc MPICXX=mpicxx MPIF77=mpif90 \
MPIEXEC=mpiexec MPIRUN=mpirun CFLAGS="-pthread" FCFLAGS="-pthread" CXXFLAGS="-pthread"
make
make install
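After make install, it is worth confirming that the static library actually landed in the prefix, since the GridPACK configure step will look for it there. The check below is a hypothetical sketch: it uses a scratch prefix and a dummy libga.a so it runs anywhere, whereas on a real system GA_PREFIX would be the --prefix directory from the configure script above:

```shell
# Hypothetical post-install check for the Global Arrays static library.
GA_PREFIX=$(mktemp -d)             # stand-in for /home/.../ga-5.6.5
mkdir -p "${GA_PREFIX}/lib"
touch "${GA_PREFIX}/lib/libga.a"   # pretend `make install` put it here

for lib in libga.a; do
  if [ -f "${GA_PREFIX}/lib/${lib}" ]; then
    echo "ok: ${lib}"
  else
    echo "missing: ${lib} -- rerun make install" >&2
  fi
done
```

The same one-line test (`[ -f "$PREFIX/lib/..." ]`) is useful after the Boost and PETSc installs as well.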
PETSc, version 3.6.4
For some reason, more recent PETSc versions were not recognized by the GridPACK CMake configuration, possibly because of the older compiler. Version 3.6.4 (get it here) appears to work. The script below should be run in the top level PETSc directory. The PETSc libraries and include files will end up in a directory under the top level directory whose name is the value of the PETSC_ARCH variable. This build assumes that the top level PETSc directory is located at /home/admin/Desktop/software/petsc-3.6.4. You will need to modify this directory to reflect your own system. The PETSC_ARCH variable can be set to whatever string you want.
python ./config/configure.py \
PETSC_ARCH=linux-gnu44-complex-opt \
--with-mpi=1 \
--with-cc="mpicc" \
--with-fc="mpif90" \
--with-cxx="mpicxx" \
--with-c++-support=1 \
--with-c-support=0 \
--with-fortran=1 \
--with-scalar-type=complex \
--with-precision=double \
--with-clanguage=c++ \
--with-fortran-kernels=generic \
--with-valgrind=0 \
--download-superlu_dist \
--download-superlu \
--download-parmetis \
--download-metis \
--download-suitesparse \
--download-f2cblaslapack=1 \
--download-mumps=0 \
--download-scalapack=0 \
--with-shared-libraries=0 \
--with-x=0 \
--with-mpirun=mpiexec \
--with-mpiexec=mpiexec \
--with-debugging=0
After the configure script runs, PETSc will instruct you on what to do next. It should tell you to execute the following two commands. You can cut and paste these into the Linux command prompt.
make PETSC_DIR=/home/admin/Desktop/software/petsc-3.6.4 PETSC_ARCH=linux-gnu44-complex-opt all
make PETSC_DIR=/home/admin/Desktop/software/petsc-3.6.4 PETSC_ARCH=linux-gnu44-complex-opt test
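It is easy for the PETSC_ARCH given to the make commands to drift from the one given to configure when commands are cut and pasted. Setting the two values once as shell variables and reusing them avoids this. The sketch below only echoes the make commands it would run, so it is safe to execute anywhere; the paths are this example's and must be adjusted for your system:

```shell
# Hypothetical wrapper: define PETSC_DIR and PETSC_ARCH once so the
# configure, build, and test steps always agree. Adjust both values
# to reflect your own system.
PETSC_DIR=/home/admin/Desktop/software/petsc-3.6.4
PETSC_ARCH=linux-gnu44-complex-opt
export PETSC_DIR PETSC_ARCH

# The build and test commands then reuse the same values:
echo "make PETSC_DIR=${PETSC_DIR} PETSC_ARCH=${PETSC_ARCH} all"
echo "make PETSC_DIR=${PETSC_DIR} PETSC_ARCH=${PETSC_ARCH} test"
```

The same PETSC_ARCH string must also be passed to the GridPACK cmake configuration later.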
GridPACK
Obtain the GridPACK release or development code and put it in a convenient directory, like $HOME/gridpack/src.
To configure and build GridPACK, you should create a directory under GRIDPACK_HOME/src with a name such as build (in this example, the name is build_ts2). This script assumes that the build directory is located under src. You can put it elsewhere, but note that the .. at the end of the GridPACK configure script below points to GRIDPACK_HOME/src. If you locate the build directory somewhere else, replace the .. with the full path to the src directory.
rm -rf CMake*
cmake -Wdev --debug-trycompile \
    -D GA_DIR:PATH="/home/admin/Desktop/software/ga-5.6.5" \
    -D BOOST_ROOT:PATH="/home/admin/Desktop/software/boost_1_54_0" \
    -D USE_PROGRESS_RANKS:BOOL=OFF \
    -D PETSC_DIR:PATH="/home/admin/Desktop/software/petsc-3.6.4" \
    -D PETSC_ARCH:STRING="linux-gnu44-complex-opt" \
    -D MPI_CXX_COMPILER:STRING="mpicxx" \
    -D MPI_C_COMPILER:STRING="mpicc" \
    -D MPIEXEC:STRING="mpiexec" \
    -D GA_EXTRA_LIBS:STRING="-lrt" \
    -D USE_GLPK:BOOL=OFF \
    -D MPIEXEC_MAX_NUMPROCS:STRING="4" \
    -D GRIDPACK_TEST_TIMEOUT:STRING=10 \
    -D BUILD_SHARED_LIBS:BOOL=OFF \
    -D CMAKE_BUILD_TYPE:STRING=Debug \
    -D CMAKE_VERBOSE_MAKEFILE:BOOL=TRUE \
    -D CMAKE_INSTALL_PREFIX:PATH="/home/admin/Desktop/gridpack/GridPACK3.2/src/build_ts2/install" \
    CFLAGS="-pthread" FCFLAGS="-pthread" CXXFLAGS="-pthread" \
    ..
make
If compilation is successful, the unit tests and/or example applications can be run.