mvapich

About

MVAPICH is an implementation of version 1.2 of the MPI (Message Passing Interface) standard. It is produced by the Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University, and is optimized for use over InfiniBand interconnects and related protocols.

Note: The MPI 1.2 library is provided primarily for backwards compatibility. New software should be developed with MVAPICH2.

Version and Availability

Softenv Keys for mvapich on all clusters

Machine Version Softenv Key
eric 0.98 +mvapich-0.98-pgi-6.1
eric 1.1 +mvapich-1.1-gcc-4.3.2
eric 1.1 +mvapich-1.1-intel-11.1
qb 0.98 +mvapich-0.98-pgi-6.1
qb 1.1 +mvapich-1.1-gcc-4.3.2
qb 1.1 +mvapich-1.1-intel-10.1
qb 1.1 +mvapich-1.1-intel-11.1
oliver 0.98 +mvapich-0.98-pgi-6.1
oliver 1.1 +mvapich-1.1-gcc-4.3.2
oliver 1.1 +mvapich-1.1-intel-11.1
louie 0.98 +mvapich-0.98-pgi-6.1
louie 1.1 +mvapich-1.1-gcc-4.3.2
louie 1.1 +mvapich-1.1-intel-11.1
poseidon 0.98 +mvapich-0.98-pgi-6.1
poseidon 1.1 +mvapich-1.1-gcc-4.3.2
poseidon 1.1 +mvapich-1.1-intel-11.1
painter 0.98 +mvapich-0.98-pgi-6.1
painter 1.1 +mvapich-1.1-gcc-4.3.2
painter 1.1 +mvapich-1.1-intel-11.1
Softenv FAQ

The information here is applicable to LSU HPC and LONI systems.

Softenv

SoftEnv is a utility that helps users manage complex environments with potentially conflicting application versions and libraries.

System Default Path

When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/use.softenv.sh to set up the default path via the SoftEnv database.

SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.

Viewing Available Packages

Using the softenv command, a user may view the list of available packages. There is currently no guarantee that every package shown is actually installed and working on a particular machine. Every attempt is made to present an identical environment on all of the LONI clusters, but this is not always the case.

Example:

$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8                       Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5              Ant, Java based XML make system version: 1.6.
+charm-5.9                     Applications: 'Charm++', version: 5.9 Charm++
+default                       this is the default environment...nukes /etc/
+essl-4.2                      Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03                   Applications: 'Gaussian', version: 03 Gaussia
....
Listing of Available Packages

See Packages Available via SoftEnv on LSU HPC and LONI.

For the most accurate, up-to-date list, use the softenv command.
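Since the full listing is long, it is often convenient to filter it. On an LSU HPC or LONI login node one would run `softenv | grep mvapich`; the filtering itself works on any captured listing, and the sketch below demonstrates it on a small sample of the output shown above:

```shell
# On a login node you would run:
#   softenv | grep mvapich
# grep keeps only the matching lines; demonstrated here on a sample of
# the listing shown above:
printf '+amber-8\n+mvapich-1.1-gcc-4.3.2\n+mvapich-1.1-intel-11.1\n' \
  | grep mvapich
```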

Caveats

Currently there are some caveats to using this tool.

  1. Packages might be out of sync between what is listed and what is actually available.
  2. The resoft and soft utilities are not currently functional; to update the environment for now, log out and log back in after modifying the ~/.soft file.
Availability

SoftEnv is available on all LSU HPC and LONI clusters to all users, in both interactive login sessions (i.e., just logging into the machine) and the batch environment created by the PBS job scheduler on Linux clusters and by LoadLeveler on AIX clusters.

Packages Availability

This information can be viewed using the softenv command:

% softenv
Managing Environment with SoftEnv

The file ~/.soft in the user's home directory is where the different packages are managed. Add the +keyword for a package to your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package to their environment, the end of the .soft file should look like this:

+amber-8

@default

To update the environment after modifying this file, one simply uses the resoft command:

% resoft
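The ordering rule matters here: keys take effect in the order they appear, so a package key should be placed before @default, as in the fragment above. A minimal sketch of making that edit non-interactively; SOFTFILE is a temporary stand-in so the example does not touch a real ~/.soft:

```shell
# Temporary stand-in for ~/.soft so this sketch modifies no real file.
SOFTFILE=$(mktemp)
printf '@default\n' > "$SOFTFILE"
# Keys take effect in order, so insert the package key before @default:
sed -i '1i +amber-8' "$SOFTFILE"
cat "$SOFTFILE"
# On a real system you would now run: resoft
rm -f "$SOFTFILE"
```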

Usage

  1. Set up your .soft file to select the library version and the compilers you want to use for building and executing your code. Keep in mind that keys take effect in the order in which they appear. The following shows how to select an MVAPICH library and use it with the GNU gcc compiler. Do not simply copy these keys, as they are subject to change; use the softenv command to verify them before use.
  2. +mvapich-1.1-gcc-4.3.2 
    +gcc-4.3.2
    @default
    
  3. The mpicc compiler wrapper will then use gcc and link against MVAPICH automatically.
  4. Run with: mpirun -machinefile $PBS_NODEFILE -np $NPROCS /path/to/executable
  5. An example PBS script:
    #!/bin/sh
    #
    # No shell commands until PBS setup is completed!
    #
    # Provide your allocation code.
    #PBS -A ALLOCATION_CODE
    #
    # "workq" is the default job queue.
    #PBS -q workq
    #
    # Set to your email address.
    #PBS -M EMAIL_ADDRESS
    #
    # PPN should be 4, 8, or 16, depending on the machine you are using.
    #PBS -l nodes=1:ppn=4
    #
    # Set amount of time job may run in hh:mm:ss
    #PBS -l walltime=00:10:00
    #
    # Have PBS pass all shell variables to the job environment
    #PBS -V
    #
    # Send stdout and stderr to named files.
    #PBS -o MPI_test.out
    #PBS -e MPI_test.err
    #
    # Give the job a name to make tracking it easier
    #PBS -N MPI_test 
    #
    # Shell commands may begin here.
     
    # Your executable should either be in your path, or defined explicitly.
    # Here we'll assume a custom program named "hello" that exists in the
    # work directory:
    
    export EXEC=hello
    export WORK_DIR=/work/uname/path
    export NPROCS=`wc -l < $PBS_NODEFILE`
    cd $WORK_DIR 
     
    # The order in which options are provided is important:
    mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/$EXEC 
    
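The NPROCS line in the script above derives the process count from $PBS_NODEFILE, which lists one hostname per allocated processor slot. The same counting technique can be sketched outside a job, using a hypothetical node file since $PBS_NODEFILE only exists inside a running PBS job:

```shell
# Hypothetical node file standing in for $PBS_NODEFILE (one line per
# processor slot; a 1-node, ppn=4 job would have 4 lines):
NODEFILE=$(mktemp)
printf 'node01\nnode01\nnode01\nnode01\n' > "$NODEFILE"
# Count the lines to obtain the process count:
NPROCS=`wc -l < "$NODEFILE"`
echo "NPROCS=$NPROCS"
rm -f "$NODEFILE"
```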

Resources

  • MVAPICH Home Page
  • User Guides. You may have to use the MVAPICH2 guides; follow the documentation for features common to MPI 1.2 and MPI 2.2, and avoid features implemented only in MPI 2.2.

Last modified: March 08 2013 13:10:23.