
openmp

Versions and Availability

Module names for openmp on all clusters:

Machine           Version   Module
None Available    N/A       N/A

Module FAQ

The information here is applicable to LSU HPC and LONI systems.

Shells

A user may choose between /bin/bash and /bin/tcsh. Details about each shell follow.

/bin/bash

System resource file: /etc/profile

When a user accesses the shell, the following user files are read in, if they exist, in this order:

  1. ~/.bash_profile (anything sent to STDOUT or STDERR here will break non-interactive tools such as rsync)
  2. ~/.bashrc (interactive login only)
  3. ~/.profile

When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.

The default value of the environment variable PATH is set automatically using Modules. See below for more information.
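
Because output from ~/.bash_profile breaks non-interactive tools, a common convention is to guard any messages with an interactivity test. The fragment below is a minimal sketch, not a site requirement; the welcome message and EDITOR setting are illustrative only:

# ~/.bash_profile (sketch) -- "$-" contains "i" only in interactive shells,
# so rsync and scp, which start non-interactive shells, see no stray output.
if [[ $- == *i* ]]; then
    echo "Welcome back, $USER"
fi

# Plain environment settings are safe in all cases:
export EDITOR=vim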

/bin/tcsh

The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.
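
A minimal sketch of such customizations, assuming tcsh syntax (the settings themselves are illustrative only):

# ~/.cshrc (sketch) -- tcsh uses setenv and alias rather than export
setenv EDITOR vim
alias ll 'ls -l'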

Modules

Modules is a utility that helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.

Default Setup

When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
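
For illustration, a hypothetical ~/.modules file might look like the following; the module names are taken from the sample listing below and should be replaced with ones that exist on your cluster:

# ~/.modules (sketch) -- module commands executed at login
module load INTEL/14.0.2
module load INTEL-140-MPICH/3.1.1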

Viewing Available Modules

The command

$ module avail

displays a list of all the modules available. The list will look something like:

--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2
vmatch/2.2.2

---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1       GCC/4.9.0              INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0       INTEL/14.0.2           INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---

The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).

Managing Modules

Besides avail, there are other basic module commands to use for manipulating the environment. These include:

add/load mod1 mod2 ... modn . . . Add modules
rm/unload mod1 mod2 ... modn  . . Remove modules
switch/swap mod . . . . . . . . . Switch or swap one module for another
display/show mod1 ... modn  . . . Show the changes a module makes to the environment
list  . . . . . . . . . . . . . . List modules currently loaded
avail . . . . . . . . . . . . . . List available module names
whatis mod1 mod2 ... modn . . . . Describe listed modules

The -h option to module will list all available commands.
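
For example, a typical session might look like this; the module names come from the sample listing above and may differ on your cluster:

$ module load INTEL/14.0.2              # add the Intel compiler suite
$ module swap INTEL/14.0.2 GCC/4.9.0    # replace Intel with GCC
$ module whatis GCC/4.9.0               # print a one-line description
$ module list                           # show currently loaded modules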

Did not find the version you want to use?

If a software package you would like to use for your research is not available on a cluster, you can request that it be installed. Software requests are evaluated by the HPC staff on a case-by-case basis. Before sending in a request, please review the information below.

Types of request

Depending on how many users need to use the software, software requests are divided into three types, each of which corresponds to the location where the software is installed:

  • The user's home directory
    • Software packages installed here will be accessible only to the user.
    • It is suitable for software packages that will be used by a single user.
    • Python, Perl and R modules should be installed here.
  • /project
    • Software packages installed in /project can be accessed by a group of users.
    • It is suitable for software packages that
      • need to be shared by users from the same research group, or
      • are bigger than the quota on the home file system.
    • This type of request must be sent by the PI of the research group, who may be asked to apply for a storage allocation.
  • /usr/local/packages
    • Software packages installed under /usr/local/packages can be accessed by all users.
    • It is suitable for software packages that will be used by users from multiple research groups.
    • This type of request must be sent by the PI of a research group.

h3

How to request

Please send an email to sys-help@loni.org with the following information:

  • Your user name
  • The name of the cluster where you want to use the requested software
  • The name, version and download link of the software
  • Specific installation instructions, if any (e.g. compiler flags, variants, flavors)
  • Why the software is needed
  • Where the software should be installed (home directory, /project, or /usr/local/packages), with a justification indicating how many users are expected to use it

Please note that, once the software is installed, testing and validation are the user's responsibility.

About the Software

OpenMP is an API for shared-memory parallel programming in C, C++ and Fortran, supported by the compilers on the local clusters.

Usage

OpenMP represents a programming methodology that is supported by the compilers found on the local clusters. It is too large a topic to cover here; please visit http://www.openmp.org/ for details on how to program with OpenMP.
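
As a brief sketch, OpenMP is typically enabled with a compiler flag; the exact flag depends on the compiler, and the file names program.c and program.ex below are illustrative only:

$ gcc -fopenmp program.c -o program.ex    # GNU compilers
$ icc -qopenmp program.c -o program.ex    # Intel compilers (older versions use -openmp)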

Interactive Jobs

This sequence of commands illustrates how an OpenMP job can be run on an interactive compute node. The node is requested via qsub, and 4 processors are assumed to be available (the number depends on the machine being used). OMP_NUM_THREADS specifies how many threads to use when executing.

Example:

$ qsub -I -l nodes=1:ppn=4 -l walltime=00:10:00
$ export OMP_NUM_THREADS=4
$ ./program.ex

Batch Jobs

Running as a PBS job in batch mode requires a script such as the following. Be sure to adjust the PBS options to match the requirements on the machine being used.

Example:

#!/bin/bash
#
# No shell commands until PBS is set up.
#
# Submit to the checkpt queue:
#PBS -q checkpt
#
# Specify your project allocation code
#PBS -A ALLOCATION_CODE
#
# Set number of nodes and number of processors on each 
# node to be used. Use ppn=8 for QB, and ppn=4 for all other x86s.
#PBS -l nodes=1:ppn=4 
#
# Set amount of time job is allowed to run in hh:mm:ss
#PBS -l walltime=00:15:00 
#
# Send stdout messages to a named file:
#PBS -o OUT_NAME 
#
# Merge stderr messages with stdout.
#PBS -j oe 
#
# Give job a name for easier tracking.
#PBS -N JOB_NAME
#
# Shell commands may begin here.

cd /path/to/your/executable
export OMP_NUM_THREADS=4
./program.ex
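
Assuming the script is saved as, say, omp_job.pbs (a name of your choosing), submit it with:

$ qsub omp_job.pbs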

Hybrid Jobs

Since OpenMP threads are limited to a single node, the degree of parallelism is capped at the number of processing cores on that node. However, it is possible to combine OpenMP with MPI and distribute work across multiple nodes: OpenMP handles the work within each node, while MPI exchanges information between nodes. This can get complicated, and execution requires a little more finesse. The important thing to keep in mind is that only one MPI task is run per node, so some steps are necessary to make sure only one copy of the program is started on each node. The following PBS script is an example of how to accomplish this.

Example:

#!/bin/bash
#
# No shell commands until PBS is set up.
#
# Use the default job queue:
#PBS -q workq
#
# Specify the appropriate project allocation code
#PBS -A ALLOCATION_CODE
#
# Set number of nodes and number of processors on each
# node to be used: ppn=8 for QB, and ppn=4 for all other x86s.
#PBS -l nodes=4:ppn=4 
#
# Set time job is allowed to run in hh:mm:ss
#PBS -l walltime=00:15:00 
#
# Send stdout messages to a named file:
#PBS -o OUT_NAME 
#
# Merge stderr messages with stdout.
#PBS -j oe 
#
# Give job a name for easier tracking:
#PBS -N JOBNAME
#
# Shell commands may begin here.

export WORK_DIR=/work/uname/path
cd $WORK_DIR

# Reduce the node file to one entry per node, so that only
# one MPI task is started on each node:
cat $PBS_NODEFILE | uniq > ./mpd_nodefile_$USER

# Get the number of MPI processes (one per node):
export NPROCS=$(wc -l < mpd_nodefile_$USER)
# Set the number of OpenMP threads (8 for QB, 4 otherwise):
export OMP_NUM_THREADS=4
ulimit -s hard

# launch your hybrid applications 
mpirun_rsh -np $NPROCS -hostfile mpd_nodefile_$USER \
       OMP_NUM_THREADS=$OMP_NUM_THREADS ./program.ex

Resources

  • OpenMP home page (http://www.openmp.org/), with links to the OpenMP specification and related information.

Last modified: September 10 2020 11:58:50.