
Install Applications and Libraries in the Home Directory

Please let us know your thoughts about installing utilities and libraries via email: sys-help@loni.org

1. Introduction

Typically, a user has permission to compile and install extra libraries and applications in their home, work or project directory, where there is enough space to store such tools and libraries. However, please first check the blocklist on this page. If a library or application on the blocklist is installed without permission, it may harm the HPC environment, and the user will be warned or even have their account disabled.

Please direct questions to sys-help@loni.org if you are unsure whether an installation is appropriate or need help with the installation.


2. Blocklist

2.1 glibc

We do not allow users to install any version of glibc in their own directories on our HPC supercomputer clusters: glibc is integrated into the operating system, and a self-installed copy will cause security issues. Therefore, we do not support using a self-installed version of glibc. If you need to link your package against glibc, link it to the default system-wide installation; if you have source code, recompile it against the system-wide glibc; if only a binary is available, either tell us what the code is, where it comes from and whether it is licensed so we can troubleshoot, or ask the developer to compile the code against the same glibc version as our system-wide installation.


3. Applications

3.1 FFTW

The latest official version of FFTW can be obtained at www.fftw.org. If you are still considering using FFTW 2.x, please note that FFTW 2.x was last updated in 1999 and is obsolete, so please install FFTW 3.x instead.

To install FFTW3, copy the FFTW3 tarball to the home directory and unpack it. Then rename the new directory to tmp and change to it.

$ tar -xvzf fftw-3.3.4.tar.gz
$ mv fftw-3.3.4 tmp
$ cd tmp

Next, in the configure step, specify the install destination with the --prefix option. The customization of FFTW should also be checked carefully during the configuration step (e.g. whether a serial or parallel version should be installed). For a full description of FFTW, please carefully read the introduction at www.fftw.org/fftw2_doc/fftw.html. For example, the following configure command will install a parallel single-precision version of FFTW with shared libraries:

$ ./configure --prefix=$HOME/fftw-3.3.4 --enable-float --enable-shared --enable-mpi MPICC=mpicc --enable-sse2
      

Here the --enable-float flag produces a single-precision version of FFTW; --enable-shared creates shared libraries; --enable-mpi enables compilation and installation of the FFTW MPI library with the specified compiler (the MPICC=mpicc assignment above; the location of the MPI compiler can be found with the which command, i.e. which mpicc). The last flag, --enable-sse2, compiles FFTW with support for SIMD (single instruction, multiple data) instructions. Some self-installed software, such as GROMACS, requires this for better performance when linking against your self-installed FFTW.

After the configuration, use GNU make and make install to finish the installation.

$ make
$ make install
      

To test FFTW for correctness, make check can be used to put the FFTW test programs through their paces. For instance, the following command merges any error messages into standard output, displays the combined output, and writes it to a log file named check.log. Failures are not necessarily problems, as they might be caused by missing functionality, but a careful look at any reported discrepancies is required.

$ make check 2>&1 | tee check.log
      

Alternatively, examine the binaries in fftw-3.3.4/bin with the ldd command, and check whether the library files in fftw-3.3.4/lib have the expected extensions (.so for shared libraries; .a for static libraries).
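For example, a quick sketch (the single-precision build above installs the fftwf-wisdom utility; a double-precision build installs fftw-wisdom instead):

$ ldd $HOME/fftw-3.3.4/bin/fftwf-wisdom
$ ls $HOME/fftw-3.3.4/lib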

If problems occurred during configuration or compilation, run make distclean before trying again; this ensures that you don't have any stale files left over from previous compilation attempts.

Finally, if the installation looks good, add the relevant paths to the shell initialization script under your home directory. For a bash shell, after opening the .bash_profile, add:

#added path to fftw
export PATH=$HOME/fftw-3.3.4/bin:$PATH
export LD_LIBRARY_PATH=$HOME/fftw-3.3.4/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$HOME/fftw-3.3.4/lib:$LIBRARY_PATH
export PKG_CONFIG_PATH=$HOME/fftw-3.3.4/lib/pkgconfig:$PKG_CONFIG_PATH
#end
      

Once finished, remember to source this file or log in to the cluster again for the change to take effect. The tmp directory may then be deleted.
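For example, assuming the default bash setup and that the source tree was unpacked as ~/tmp as above:

$ source ~/.bash_profile
$ rm -rf ~/tmp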


3.2 R

Introduction

R versions released in 2016 or later have removed several bundled compression libraries required by the installation, such as zlib, bzip2, xz, curl and pcre; the R package now assumes those libraries are all up to date in the operating system. Unfortunately, on SuperMike2, which still runs RedHat 6.x, this is a serious problem, as the required libraries are not up to date. When configuring an R release from 2016 or later, you will see an error message like this:

 checking if zlib version >= 1.2.5... no
 checking whether zlib support suffices... configure: error: zlib library and headers are required
      

From the HPC administrators' point of view, we are unwilling to update the support libraries system-wide very often, because defects (bugs) in the new libraries may be harmful to other existing software. But it is possible for the user to compile and install all support libraries (zlib, bzip2, xz etc.) in the home directory without system-wide intervention. To do so, please refer to the article written by Prof. Johnson here.

Prerequisite library building
Note: this section is for SuperMike2 only. As of June 2021, all other supercomputer clusters at LSU & LONI HPC (QB2, QB3, SMIC etc.) have installed RedHat 7.X or newer, so there is no need to re-install compression libraries on those clusters.

GCC 4.9 was used to compile the following libraries. For conciseness, all libraries are installed in one directory, say $HOME/packages.

Build zlib (must be >= 1.2.5)

wget http://zlib.net/zlib-1.2.11.tar.gz
tar xzvf zlib-1.2.11.tar.gz
cd zlib-1.2.11
./configure --prefix=$HOME/packages
make
make install
cd ..
	  

In .bash_profile add:

export PATH=$HOME/packages/bin:$PATH
export LD_LIBRARY_PATH=$HOME/packages/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$HOME/packages/lib:$LIBRARY_PATH
export CPATH=$HOME/packages/include:$CPATH
	  

Build bzip2 (>= 1.0.6)

wget http://www.hpc.lsu.edu/training/weekly-materials/Downloads/bzip2-1.0.6.tar.gz
tar xzvf bzip2-1.0.6.tar.gz
cd bzip2-1.0.6
make -f Makefile-libbz2_so
make clean
	  

Note: Insert "-fPIC" as a CFLAG in the Makefile to avoid an error in the R make step. As a sketch, the CFLAGS line in the bzip2 1.0.6 Makefile would then look like this:
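CFLAGS=-Wall -Winline -O2 -g $(BIGFILES) -fPIC

Then use the following commands to finish building bzip2.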

make
make -n install PREFIX=$HOME/packages
make install PREFIX=$HOME/packages
cd ..
	  

Build lzma (any version). Try not to install the separate liblzma; instead get the package known as xz from http://tukaani.org/xz .


tar xzvf xz-5.2.2.tar.gz
cd xz-5.2.2
./configure --prefix=$HOME/packages
make
make install
cd ..
	  

Build pcre (>= 8.10, with UTF-8 support)

wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.38.tar.gz
tar xzvf pcre-8.38.tar.gz
cd pcre-8.38
./configure --enable-utf8 --prefix=$HOME/packages
make
make install
cd ..
	  

Build curl (>= 7.28.0). Download curl 7.55.1 from https://curl.haxx.se/download/ .

Note: Any values in the environment variable PYTHONPATH should be removed with the command unset PYTHONPATH before installing curl.
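That is:

unset PYTHONPATH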


tar xvzf curl-7.55.1.tar.gz
cd curl-7.55.1
./configure --prefix=$HOME/packages
make -j3
make install
cd ..

Installation

If the Intel compiler is used, please note that the default compiler on the cluster, Intel 14, does not support the C11 _Alignof, which will fatally terminate the make process. Therefore, a newer version of the Intel compiler is required. Intel 15.0.0 on SuperMIC and Intel 16.0.3 on QB2 have been used to compile R, and the installation tests were successful. Before installing R, unload the default Intel compiler and load a newer one with module:

$ module unload intel/14.0.2
$ module load intel/16.0.3
      

Note: Any values in the environment variable R_LIBS_USER should be removed by the command unset R_LIBS_USER before installing R.

According to an article at R-bloggers and other online resources, it is recommended to use the Intel Math Kernel Library (MKL), which is optimized for Intel processors, offers performance far superior to the traditional libraries, and supports multithreading.

To install R, copy the R tarball to the home directory and unpack it. Then rename the new directory to tmp and change to it.

$ tar -xvzf R-3.5.1.tar.gz
$ mv R-3.5.1 tmp
$ cd tmp
      

Specify the install destination (the home directory) and the use of the Intel compiler and MKL in the configuration:

fast="-ip -O3 -opt-mem-layout-trans=3 -xHost -mavx -fp-model precise"
./configure --prefix="$HOME/r-3.5.1" CC=icc CFLAGS="$fast -wd188" CXX=icpc \ CXXFLAGS="$fast" FC=ifort FCFLAGS="$fast" F77=ifort FFLAGS="$fast" \
 --with-blas='-mkl=parallel' --with-lapack
      

If using the Intel 18 compiler or later, replace "-opt-mem-layout-trans=3" with "-qopt-mem-layout-trans=3".

The last two options are for MKL support. The --with-blas option compiles a special version of the BLAS library backed by MKL. If more precise control than -mkl=parallel is desired, please read the MKL User's Guide or the website of the Intel MKL Link Line Advisor. Some good examples of detailed --with-blas flag options can be found here, here, here and here. The --with-lapack option compiles a special version of the LAPACK library backed by MKL.

Note: --enable-R-shlib is necessary to create libR.so, which is required for building RStudio and several other R packages such as svcm.

Check whether R_LIBS_USER is set; if so, unset it before the installation:
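$ echo $R_LIBS_USER   # should print nothing
$ unset R_LIBS_USER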

Then use GNU make and make install to finish the installation.

# compile in parallel; the -j argument is the number of cores on the node
$ make -j 20
$ make install
      

After make install, you can use the ldd command to check whether the MKL library was dynamically linked into R:

$ ldd $HOME/r-3.5.1/lib64/R/bin/exec/R
linux-vdso.so.1 =>  (0x00007fffd87ff000)
	libmkl_intel_lp64.so => /usr/local/compilers/Intel/cluster_studio_xe_2013.1.046/composer_xe_2013_sp1.2.144/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002ba734fe5000)
......
      

For the R tests, make check can be used to put the R test programs through their paces. Please note that since R 3.6.0, R has added a new test called "exSexpr" that requires LaTeX. Since LaTeX is not available on our clusters, the test gives an error while building the PDF package manuals, making the whole test suite fail. If needed, make check may be replaced by make test-Examples, which runs all the examples from the help files (*.Rd) of all core packages (base, ctest, ..., ts).

$ make check
# for R-3.6.0 or later:
$ cd tests
$ make test-Examples
      

Finally, if the installation looks good, add the relevant paths to the shell initialization script under your home directory. For a bash shell, after opening the .bash_profile, add:

#added path to R
export PATH=$HOME/r-3.5.1/bin:$PATH
#end
      

Once finished, remember to source this file or log in to the cluster again for the change to take effect. The tmp directory may be deleted once the installation is done.


3.3 GCC

The GNU Compiler Collection (GCC), likely best known for its gcc C compiler, is already installed on all of our supercomputer clusters. Details of the system-wide installed gcc can be found here.

If you need your own GCC, since you don't have root privilege, you need to install it from source rather than via yum install. So first of all, download the source code: simply google "gcc source download" and download the tarball (e.g. gcc-5.5.0.tar.gz) from a mirror website. Copy the gcc-5.5.0.tar.gz tarball to your home or project directory and unpack it. Then rename the new directory to tmp and change to it.

$ tar -xvzf gcc-5.5.0.tar.gz
$ mv gcc-5.5.0 tmp
$ cd tmp

Next, run the ./contrib/download_prerequisites script in the GCC source directory. It will download the support libraries and create symlinks, causing them to be built automatically as part of the GCC build process.

$ ./contrib/download_prerequisites

The GCC developers highly recommend building GCC in a separate directory that does not reside within the source tree. A major benefit of running srcdir/configure from outside the source directory (instead of running ./configure) is that the source directory will not be modified in any way, so if your build fails or you want to re-configure and build again, you can simply delete everything in the build directory and start over. Therefore, before the configuration, exit the source directory, make a build directory "gcc-5.5.0" and change to it.

$ cd ..
$ mkdir gcc-5.5.0
$ cd gcc-5.5.0

Next, in the configure step, specify the install destination with the --prefix option and other flags:

$ $PWD/../tmp/configure --prefix=$HOME/gcc-5.5.0 --enable-languages=c,c++,fortran,go \
    CC=gcc CXX=g++ --disable-multilib

Here --enable-languages specifies that only a particular subset of compilers and their runtime libraries should be built. If you do not pass this flag, or specify the option default, then the default languages available in the gcc sub-tree will be configured. CC=gcc and CXX=g++ ask for the system-wide installed GCC to compile the gcc-5.5.0 source code; in this example, gcc 4.9 is used. --disable-multilib is applied because a 64-bit-only GCC compiler will be built. If this flag is not specified, an error message will appear: "configure: error: I suspect your system does not have 32-bit developement libraries (libc and headers). If you have them, rerun configure with --enable-multilib. If you do not have them, and want to build a 64-bit-only compiler, rerun configure with --disable-multilib." And --enable-multilib is not a good choice: the build will fail with "fatal error: gnu/stubs-32.h: No such file or directory", because our system does not have 32-bit development libraries (libc and headers), and we do not allow any users to install their own version of glibc in their own directory on any HPC clusters.

After the configuration, use make and make install to finish the installation. The make command could take several hours.

$ make
$ make install

To test GCC for correctness, the following command shows both the version and the configuration of the newly installed compiler.

$ ./bin/gcc -v
Using built-in specs.
COLLECT_GCC=./bin/gcc
COLLECT_LTO_WRAPPER=/worka/project/ychen64/gcc-5.5.0/bin/../libexec/gcc/x86_64-unknown-linux-gnu/5.5.0/lto-wrapper
Target: x86_64-unknown-linux-gnu
Configured with: /project/ychen64/gcc-5.5.0/../tmp/configure --prefix=/project/ychen64/gcc-5.5.0 --enable-languages=c,c++,fortran,go CC=gcc CXX=g++ --disable-multilib
Thread model: posix
gcc version 5.5.0 (GCC)

Also, you may create a Hello World program to see if it compiles and links properly. Create a file named test.c with the following content:

#include <stdio.h>

int main() {
    printf("Hello, world!\n");
    return 0;
}

Then compile test.c and run it.

$ ./bin/gcc test.c -o test
$ ./test

Finally, if the installation looks good, add the relevant paths to the shell initialization script under your home directory. For a bash shell, after opening the .bash_profile, add:

# added path to gcc
export PATH=~/gcc-5.5.0/bin:$PATH
export LD_LIBRARY_PATH=~/gcc-5.5.0/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=~/gcc-5.5.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=~/gcc-5.5.0/lib/gcc/x86_64-unknown-linux-gnu/5.5.0:$LD_LIBRARY_PATH
export LIBRARY_PATH=~/gcc-5.5.0/lib:$LIBRARY_PATH
export LIBRARY_PATH=~/gcc-5.5.0/lib64:$LIBRARY_PATH
export MANPATH=~/gcc-5.5.0/share/man:$MANPATH

Once finished, remember to source this file or log in to the cluster again for the change to take effect. The tmp directory may be deleted.


3.4 Python

If a specific version of Python is needed for building other software (e.g. qiime2) or a specific Python module in a Conda virtual environment, we recommend Miniconda, a mini version of Anaconda that includes only conda and its dependencies. It does not require administrator permissions to install.

The latest Miniconda can be downloaded as Miniconda2 or Miniconda3; Miniconda2 is Python 2 based and Miniconda3 is Python 3 based. Note that the choice of Miniconda only affects the root environment: regardless of which version you install, you can still create both Python 2.x and Python 3.x environments. The other difference is that the Python 3 version of Miniconda defaults to Python 3 when creating new environments and building packages. So, for instance, the behavior of

$ conda create -n myenv python
      

will be to install Python 2.7 with the Python 2 Miniconda and Python 3.6 with the Python 3 Miniconda. You can override the default by explicitly setting python=2 or python=3 in the conda create command when creating the conda virtual environment. The choice also determines the default value of CONDA_PY when using conda build.
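For example, to pin the Python major version explicitly when creating an environment:

$ conda create -n myenv python=2
$ conda create -n myenv python=3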

First, choose one of the following commands to download the installer script:

$ wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
      

Next, load the system-wide installed Python 3.5 or 3.6 to install Miniconda; Python 2 on the cluster lacks some packages required for installing Miniconda, so only Python 3 works. For example, on SuperMike2:

$ module load python/3.6.4-anaconda
      

Unset the environment variables PYTHONPATH and PYTHONUSERBASE before the Miniconda installation:

$ unset PYTHONPATH
$ unset PYTHONUSERBASE
      

Installing Miniconda2 is straightforward:

$ bash Miniconda2-latest-Linux-x86_64.sh
      

or, for Miniconda3:

$ bash Miniconda3-latest-Linux-x86_64.sh
      

Use the conda list command to verify that Miniconda has been installed successfully. If you see "ImportError: No module named site", typing unset PYTHONHOME will fix the issue. Once Miniconda has been installed, the system-wide installed Python 3 no longer needs to be, and should not be, loaded.
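For example (run the check first; apply the fix only if you hit the import error):

$ conda list
$ unset PYTHONHOME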

If you already have Miniconda or Anaconda installed and want to upgrade it, do not run the installer script again. Instead, just use conda update. For instance:

$ conda update conda
      

4. Libraries

4.1 R

Installation

If the system-wide installed R is used, the default R library path is /home/packages/r/3.x.x/INTEL-14.0.2/lib64/R/library, which is not writable by a user without root privilege. To install an R package, the environment variable R_LIBS_USER needs to point to a desired location in your own directory. For example, when using a bash shell, you would issue a command similar to the following:

$ export R_LIBS_USER=$HOME/packages/R/libraries
      

The above command line can be added to your default shell initialization environment by pasting it into a script such as .bash_profile under your home directory. If R is installed in your home directory, the above step is unnecessary, but it can still be useful if you wish to keep two directories for the default and new libraries respectively.

Type R to enter the console, then run R commands there.

$ R
(startup message)
...
> 

To see the library paths R currently searches, use:

> .libPaths()
      

Use the install.packages("package_name") function (the double quotation marks are mandatory) to install a package. For example, to install the package glmnet:

> install.packages("glmnet", repos="http://cran.r-project.org")
      

If prompted, select a CRAN mirror (e.g. 144 USA (TX)) to install the package glmnet.

Load, update and remove packages

In the R console, the library() function with no arguments lists all of the previously installed packages. The library() function can also be used to load a specific installed package:

> library("glmnet")
      

Packages can be updated or removed within the R session:

> update.packages("glmnet")
> remove.packages("glmnet")
      

4.2 Perl

Installation

Perl modules can be treated as libraries; each module comes with reusable code. The Comprehensive Perl Archive Network (CPAN) hosts tons of modules developed by the Perl community at www.cpan.org. Therefore, before setting out to write something serious, check CPAN first.

Perl modules can be installed manually, which involves: 1. downloading the tarball and extracting the content; 2. creating a Makefile with perl Makefile.PL; 3. running make, make test and make install. Users without root privilege need to pass a local install location to the Makefile step, e.g. perl Makefile.PL PREFIX=$HOME/perl5.

Alternatively, the module installation can be done with either cpanm or cpan.

cpanm is short for cpanminus (App::cpanminus), which supports installing packages into your home directory.

curl -L http://cpanmin.us | perl - App::cpanminus
      

The command above first tries to install into the system Perl directory. Since you don't have write permission there, cpanm will be installed into a local directory, by default ~/perl5/.

Perl modules can then be installed into the user's local directory, again defaulting to ~/perl5/. Below are examples installing Statistics::TTest, Math::CDF and Parallel::ForkManager as dependencies of diffreps (outputs are omitted):

[mforoo1@mike5 ~]$ export PATH=~/perl5/bin/:$PATH
[mforoo1@mike5 ~]$ cpanm Statistics::TTest
[mforoo1@mike5 ~]$ cpanm Math::CDF
[mforoo1@mike5 ~]$ cpanm Parallel::ForkManager
      

On the other hand, cpan provides a console to search for, download and install modules and their dependencies automatically. Note that the cpan module is not provided by the default Perl (5.10.1) on our supercomputer clusters, so a newer version of Perl has to be added via softenv or module to use cpan.

Once the right version of Perl (check our current versions and availability) has been loaded into the user environment, you can install a module; for example, to install the module Inline you may type:

$ perl -MCPAN -e 'install Inline'
      

If this is your first time using cpan to install Perl modules, the command above will start an interactive configuration dialogue:

Would you like to configure as much as possible automatically?
      

It expects you to type yes or no. Type yes to configure automatically, unless you really know how to configure it yourself. The next question is:

To install modules, you need to configure a local Perl library directory or escalate your privileges. What approach do you want?  (Choose 'local::lib', 'sudo' or 'manual')
      

Type local::lib, as you need to configure a local Perl library without root privilege. The next question is:

Would you like me to automatically choose some CPAN mirror sites for you?
      

Type yes. The last question is:

Would you like me to append that to /home/ychen64/.bashrc now?
      

Again, type yes. Several lines setting certain environment variables will then be written to the .bashrc in your home directory, to tell Perl where to install and load modules in the local Perl library. Remember to restart your command line shell (e.g. source your .bashrc if using bash) before installing another Perl module or loading the newly installed module, because your current user environment hasn't picked up those changes yet.

After the first use, no more setup is required for subsequent Perl module installations with cpan. For example, after installing the module Inline, the module Inline::Files can be installed by simply typing:

$ perl -MCPAN -e 'install Inline::Files'
      

If everything goes well, it will be installed automatically in seconds. If you get a permission-denied error, most likely you forgot to source your .bashrc and/or did not log out and back in to apply the new lines written to .bashrc.
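That is, for bash:

$ source ~/.bashrc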

Locate module hiding places

When Perl is installed, it creates a list, @INC, with the names of all the include directories. The contents can be reviewed from the command line:

$ perl -le 'print foreach @INC'
/home/ychen64/perl5/lib/perl5/5.16.3/x86_64-linux-thread-multi
/home/ychen64/perl5/lib/perl5/5.16.3
/home/ychen64/perl5/lib/perl5/x86_64-linux-thread-multi
/home/ychen64/perl5/lib/perl5
/usr/local/packages/perl/5.16.3/INTEL-14.0.2/lib/site_perl/5.16.3/x86_64-linux-thread-multi
/usr/local/packages/perl/5.16.3/INTEL-14.0.2/lib/site_perl/5.16.3
/usr/local/packages/perl/5.16.3/INTEL-14.0.2/lib/5.16.3/x86_64-linux-thread-multi
/usr/local/packages/perl/5.16.3/INTEL-14.0.2/lib/5.16.3
      

Here /home/ychen64/perl5/lib/perl5 is the local directory where the installed modules are saved.

Load module

Users have multiple ways of telling Perl where to search for their locally installed modules. For example, PERL5LIB is an environment variable holding the names of local Perl directories; it is searched before the contents of @INC. Setting PERL5LIB helps Perl find locally installed modules.

If the module was installed by cpan, PERL5LIB has already been set up in .bashrc. If the module was installed manually (i.e. without cpan), use

$ export PERL5LIB=/your_path_to_the_perl_lib
      

to specify the local directory where the installed modules are saved.

Alternatively, in the Perl script, the location can be specified by:

#! /usr/bin/perl
use lib "/your_path_to_the_perl_lib";
      

After letting Perl know the local library path, the installed module can be loaded in script. For example, to load Inline::Files, add the following line into the Perl script:

use Inline::Files;
      
Starting over with cpan

Suppose you now feel confident enough to configure the cpan settings manually by answering "no" to the first question in the cpan dialogue mentioned above (e.g. you wish to choose your preferred installation directory rather than the default /home/your_username/perl5/lib/perl5). You might think of changing the environment variable settings written to your .bashrc, or the MyConfig.pm file in ~/.local/share/.cpan/CPAN, but both changes are limited and require you to remember those variable definitions. In the cpan dialogue, on the other hand, you are told how to set up those environment variables interactively. Thus, I recommend that you make use of the cpan dialogue to configure your settings.

However, the cpan configuration dialogue only appears the first time you use cpan. To make the dialogue appear again, you need to reset cpan by deleting its related files under .local/share in your home directory.
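A minimal sketch, assuming cpan kept its files under ~/.local/share/.cpan as mentioned above (note this discards your existing cpan configuration and build cache):

$ rm -rf ~/.local/share/.cpan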


4.3 Python

Installation

Python modules can be treated as libraries; each module comes with reusable code. On our supercomputer clusters, several well-known Python modules such as NumPy have been installed system-wide (globally). However, if you prefer to use modules that haven't been installed, or wish to use a different version of a module than the system-wide one, you will find the default Python module path is not writable for module installation, as you do not have root privilege.

Therefore, you may consider installing your desired Python modules in your home directory; the installation can be done using a Python module called pip. Note that pip is not provided by the default Python (2.6) on some of our clusters, so a newer version of Python (check our current versions and availability) has to be loaded via softenv or module to use pip. After loading Python, make sure pip is available with a command such as which pip or pip --version.
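That is:

 $ which pip
 $ pip --version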

Once pip is available, it is important to know the directory where your locally installed modules are saved. The default location is explained in the Python documentation for the site.USER_SITE and site.USER_BASE variables. The default value of the former is ~/.local/lib/pythonX.Y/site-packages, which can be adjusted via the PYTHONPATH environment variable; the default value of the latter is ~/.local, which can be modified via the PYTHONUSERBASE environment variable.

Therefore, by updating PYTHONPATH and PYTHONUSERBASE at the same time, you can specify your desired install location in your home directory. For example, if using Python 2.7, to set a module install location of ~/packages/python, the following commands (when using bash) should be used:

 $ export PYTHONPATH=$HOME/packages/python/lib/python2.7/site-packages:$PYTHONPATH
 $ export PYTHONUSERBASE=$HOME/packages/python
      

The above command lines should be added to your default shell initialization environment by pasting them into a script such as .bash_profile under your home directory. Even if you choose to use the default ~/.local/lib/pythonX.Y/site-packages, you should still add export PYTHONPATH=$HOME/.local/lib/python2.7/site-packages:$PYTHONPATH to your .bash_profile. This is because when the location of the globally installed modules is also in PYTHONPATH, it sometimes takes precedence over the local path; Python will then load the globally installed module instead of your locally installed one, even when the local one is the version you want. Thus, please be careful and make sure PYTHONPATH contains your local module installation path, and that it comes before the system-wide path.
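To verify the search order Python actually uses, python -m site prints sys.path (paths earlier in the list win) along with USER_BASE and USER_SITE:

 $ python -m site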

Once pip is set up, the newest version of pip should be installed:

 $ pip install --user --upgrade pip
      

The --user option turns on the site.USER_SITE installation scheme and asks pip to install any new modules (including pip itself here) into your home directory.

The following command gives the exact version and location of the pip installation that your Python sees:

 $ python -m pip --version
      

The following command gives the location of pip in PATH:

 $ which pip
      

If PATH only has the system-wide installed pip, while Python sees a different pip in your home directory, an error message will occur when trying to launch pip:

$ pip list
Traceback (most recent call last):
  File ".../pip", line 7, in <module>
    from pip._internal import main
ImportError: No module named _internal
      

The solution is to add the pip bin directory (e.g. $HOME/packages/python/bin) to PATH.

Once pip works, you can list all currently installed modules, or just one or several modules, with the following commands respectively:

 $ pip list      # list all modules available
 $ pip show module_name # list a module called "module_name"
      

Now you should have a clear view of the modules currently installed. If you cannot find the module you need, or wish to use a different version, use one of the following methods, remembering to include the --user option:

 $ pip install --user module_name           # latest version
 $ pip install --user module_name==x.y.z    # specific version x.y.z
 $ pip install --user 'module_name>=x.y.z'  # minimum version
      

Sometimes after typing pip install module_name, you will get the following message:

 $ pip install --user module_name
Requirement already satisfied: module_name in ./.local/lib/python2.7/site-packages
Requirement already satisfied: python-dateutil in ./.local/lib/python2.7/site-packages
...
...
      

and the installation terminates. This is because Python and pip apply a mechanism to manage module installs; with it, pip sometimes considers the globally installed modules to already satisfy the installation requirements and does nothing, reporting "Requirement already satisfied". If you need more precise control, take a look at the pip user guide about requirements files. If not, just use the second method (pip install module_name==x.y.z) to specify the module version; if the issue persists, consider adding the --upgrade option.
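For example (module_name and x.y.z are placeholders; --upgrade forces a fresh local copy even when a global one exists):

 $ pip install --user --upgrade module_name==x.y.z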

Listing modules

pip list and pip show module_name can be used to list all currently installed modules or one particular module, respectively. For more information, please see the pip list and pip show reference pages.

Upgrade/downgrade modules

Add the --upgrade option to upgrade a module; a module may also be downgraded with the same option. Of the following, the second method is preferred, as it is more precise and can be used to downgrade modules:

 $ pip install --upgrade --user module_name           # latest version
 $ pip install --upgrade --user module_name==x.y.z    # specific version x.y.z
      
Uninstallation

Simply use:

 $ pip uninstall module_name
      

4.4 Conda (TensorFlow)

Conda

Conda is an open-source package management and environment management system. Besides TensorFlow, Conda can install many other pre-built, reviewed and maintained Python packages.

Prerequisites

Python

Details of the system-wide installed Python can be found here.

A note for all SuperMike-II users: you should start using Module in place of Softenv to add/remove software packages in your user environment, and avoid loading Python with Softenv. Click here for more information about how to replace Softenv with Module in your environment.

Alternatively, a Python version installed in the user's directory (/home or /work) can be used to install the user's own version of TensorFlow. This avoids some known issues with the system-wide installed Python, which is not updated frequently, and provides newer features and more up-to-date Python libraries than the system-wide version. However, users should be aware that self-installed newer TensorFlow versions might also have unanticipated problems.

If a specific version of Python is needed, Miniconda is recommended. Please refer to 3.4 Python in this webpage.

Python in conda virtual environment

A Conda virtual environment is an isolated working copy of Python, enabling multiple different Python installations side by side without affecting each other. It is therefore recommended whenever a specific Python library such as TensorFlow needs to be installed or upgraded. conda can create virtual environments for both Python library dependencies and non-Python library dependencies; the latter do not have a setup.py in their source code and do not install files into Python's site-packages directory.

The conda virtual environment can be either Python 2 or Python 3 based, and you can create either a Python 2.x or a Python 3.x conda virtual environment regardless of the Python module version you are currently using; that is, you can use a current Python 2.x version to create a Python 3.x virtual environment and vice versa.

Some combinations of system-wide installed Python and Python in a conda virtual environment do not work well. When using Python 2.7 on QB2 (python/2.7.13-anaconda-tensorflow) or SuperMIC (python/2.7.13-anaconda-tensorflow), PYTHONPATH needs to be unset before loading TensorFlow. Also, avoid creating a Python 2.x conda virtual environment on SuperMIC, as loading TensorFlow with Python 2.x in a conda virtual environment fails with an issue that cannot easily be fixed.

Disk space used by virtual environment

When creating an environment, conda downloads a large number of installation tarballs to the package cache directory. The default is ~/.conda/pkgs in the /home directory; however, on all HPC and LONI clusters, the user's /home directory has a 5 GB quota. Therefore, instead of using this default setting, the downloaded package cache directories need to be redirected to a different location, which can be specified in the .condarc file. The steps below redirect the package cache location:

Edit the conda user config file ~/.condarc to redirect the cache directories. For example, after opening ~/.condarc, add the following lines to change the cache directories to /work:

envs_dirs:
- /work/your_username/test-env/envs
pkgs_dirs:
- /work/your_username/test-env/pkgs
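
Equivalently, a sketch using conda's own config command (same placeholder paths; it writes these entries to ~/.condarc for you):

$ conda config --add envs_dirs /work/your_username/test-env/envs
$ conda config --add pkgs_dirs /work/your_username/test-env/pkgs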
      

Installation

The TensorFlow installation process can be computationally intensive, so make sure you do this on a compute node through an interactive job.

Possible package conflicts might cause various issues during the installation and at runtime of TensorFlow, so before trying to install TensorFlow through a conda environment, do a module purge to clean up the current environment:

$ module purge

Then load a Python module. For example, on SuperMIC:

$ module load python/3.6.2-anaconda-tensorflow

Check conda information. If conda is available, you will see:

$ conda info
Current conda install:

               platform : linux-64
               conda version : 4.3.27
       ...
       ...
       envs directories : /work/your_username/test-env/envs
                          /home/your_username/.conda/envs
                          /usr/local/packages/python/3.6.2-anaconda/envs
          package cache : /work/your_username/test-env/pkgs
      ...

Also make sure the cache directories (envs directories and package cache in the conda info result) have been redirected to another location (/work/your_username/test-env/pkgs in the example above). If not, refer to the method mentioned in the Disk space used by virtual environment section above.

Next, create a conda virtual environment. Here we create a conda environment called "tf":

$ conda create -n tf python=3.6
$ unset PYTHONPATH
$ unset PYTHONHOME
      

The conda environment name is arbitrary; it could just as well be "tf-1.8.0", "tensorflow-1.8.0", etc. The command above creates a Python 3.6 conda environment named tf. If a Python 2.7 conda environment is preferred, change the argument python=3.6 to python=2.7 in the command line. Note that files in the work directory are subject to purging after 60-90 days; if /project is available to you, the conda virtual environment should be created in /project.

The conda environment should then be activated before installing TensorFlow (note that in some newer versions of conda, the command is conda activate tf):

$ source activate tf

Finally TensorFlow can be installed from the anaconda channel:

$ conda install -c anaconda tensorflow

If a specific version of TensorFlow is needed, add the version after the tensorflow keyword. For example, to install TensorFlow 1.8:

$ conda install -c anaconda tensorflow=1.8

If a GPU is needed to run TensorFlow, change the argument tensorflow to tensorflow-gpu in the command line:

$ conda install -c anaconda tensorflow-gpu

Only compute nodes with GPUs installed can be used for tensorflow-gpu (SuperMike-II: shelob queue; SuperMIC: hybrid queue or v100 queue; QB2: checkpt queue or workq queue). The nvidia-smi command can be used to monitor the GPU devices.

Please remember: never run TensorFlow on the head node!

Test if you can load TensorFlow:

$ python
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> tensorflow.VERSION
'1.12.0'

Pay attention to the TensorFlow version you get. If you see a much lower version than expected, the system-wide installed TensorFlow was loaded by mistake; unsetting PYTHONPATH fixes this issue.

Known issues

conda environment missing from list

Found on QB2 and SuperMIC; the issue is described in detail here. It was resolved in conda 4.4. Unfortunately, the system-wide installed Python on QB2 and SuperMIC has conda 4.3, so after typing conda env list the conda environment will not be listed. The conda environment can, however, always be found in the file ~/.conda/environments.txt:

$ cd ~/.conda
$ cat environments.txt
      


"AttributeError: module 'enum' has no attribute 'IntFlag'"

Using the system-wide installed Python 2.7 on QB2 (python/2.7.13-anaconda-tensorflow) or SuperMIC (python/2.7.13-anaconda-tensorflow), no matter which version of Python is created in the conda virtual environment, will give "AttributeError: module 'enum' has no attribute 'IntFlag'" when loading TensorFlow. Unsetting PYTHONPATH fixes this issue.

Failure of CUDA session creation

The TensorFlow builds in the conda anaconda channel were recently upgraded to cudatoolkit version 9. When the default command is used to install TensorFlow, the conda virtual environment picks up cudatoolkit 9, which requires a newer NVIDIA driver than is installed, causing CUDA session creation to fail. This kind of issue will reappear in future TensorFlow versions whenever there is a mismatch between cudatoolkit and the NVIDIA driver.

The solution is to specify the cudatoolkit version during installation; below is an example (you can also try different version combinations, though some might fail).

$ conda install -c anaconda cudatoolkit=8 tensorflow-gpu=1.7 tensorflow-gpu-base=1.7
      


"libstdc++.so.6: version `GLIBCXX_3.4.22' not found"

If gcc 4.9 on SuperMIC or QB2 was loaded during the TensorFlow installation, loading TensorFlow will give "libstdc++.so.6: version `GLIBCXX_3.4.22' not found". You need to unload gcc 4.9 with module when using TensorFlow.
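A sketch of the fix (the exact module name, assumed here to be gcc/4.9.0, may differ; check module list first):

$ module unload gcc/4.9.0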


"libtensorflow_framework.so: symbol GOMP_parallel, version VERSION not defined in file libiomp5.so with link time reference"

Using the system-wide installed Python 2.7 on SuperMIC (python/2.7.13-anaconda-tensorflow) along with Python 2.7 in a conda virtual environment to install TensorFlow 1.12 will give the error above. It is not fixable at this time.


"TypeError: __new__() got an unexpected keyword argument 'file'" (TensorFlow 1.8)
"TypeError: __new__() got an unexpected keyword argument 'serialized_options'" (TensorFlow 1.12)

Using the system-wide installed Python 2.7 or Python 3.6 on SuperMIC along with Python 2.7 in a conda virtual environment to install TensorFlow 1.8 will give the errors above; Python 3.6 on SuperMIC along with Python 2.7 in a conda virtual environment to install TensorFlow 1.12 will give a similar error. There are suggestions online to upgrade or downgrade protobuf, but they did not work, so this issue is not fixable at this time.

Tutorial and test code

Tutorial and test code are available on our website. Check our latest training sessions about TensorFlow and other deep learning tools here.
