
OpenFOAM


About the Software

OpenFOAM is an open-source C++ CFD toolbox released under the GPL. This offering is supported by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com and owner of the OPENFOAM trademark. OpenCFD Ltd has been developing and releasing OpenFOAM since its debut in 2004.

Usage

Running OpenFOAM through a Singularity container

OpenFOAM support on the LSU and LONI clusters will primarily be provided through Singularity images going forward. OpenFOAM Singularity images are currently built under the /home/admin/singularity directory. Example scripts showing how to run OpenFOAM through Singularity with MPI support are posted at this GitHub link: https://github.com/lsuhpchelp/singularity/tree/main/recipes/openfoam/10/cavity.of10 (this example uses OpenFOAM 10).
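For a quick interactive test, solvers inside the image can be invoked directly with singularity exec. A minimal sketch, assuming an image file named openfoam-10.sif under /home/admin/singularity (the actual file name may differ; check that directory or the GitHub example above) and a copy of the cavity tutorial case as the working directory:

 $ cd /path/to/your/cavity/case
 $ singularity exec /home/admin/singularity/openfoam-10.sif blockMesh   # generate the mesh
 $ singularity exec /home/admin/singularity/openfoam-10.sif icoFoam     # run the cavity solver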

Set up your environment to run OpenFOAM

To load OpenFOAM into your environment on clusters using softenv, follow these two steps:

  1. Add the corresponding softenv key to your .soft file and run resoft. You can use the "softenv -k OpenFOAM" command to find out what the keys are.
  2. The openfoam key changes some default paths and will override the path set by the @default key, so make sure you put the openfoam key before the @default key in your .soft file (see the example after this list).
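A minimal sketch of the resulting .soft file, assuming the key reported by "softenv -k OpenFOAM" is +openfoam (the real key name will include a version string):

 +openfoam
 @default

After saving the file, run resoft (or log in again) for the change to take effect.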

To load OpenFOAM into your environment on clusters using Environment Modules, follow these two steps:

  1. Use the module load openfoam/2.3.0/INTEL-140-MVAPICH2-2.0 command to load OpenFOAM into your environment; you can also add this command to your ~/.modules file. Use the "module av openfoam" command to find out what module keys are available, and module disp openfoam/2.3.0/INTEL-140-MVAPICH2-2.0 to query a key's details.
  2. Source the OpenFOAM bashrc file with the source $FOAM_BASH command; a complete session is sketched below.
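Putting the two steps together, a typical session looks like this (module names vary by cluster, so substitute whatever module av openfoam reports):

 $ module av openfoam                                  # list available OpenFOAM modules
 $ module load openfoam/2.3.0/INTEL-140-MVAPICH2-2.0   # load the toolbox
 $ source $FOAM_BASH                                   # source the OpenFOAM bashrc file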

Sample script

#!/bin/bash
#PBS -A your_allocation
#PBS -q checkpt
#PBS -l nodes=1:ppn=16
#PBS -l walltime=12:00:00
#PBS -V
#PBS -j oe
#PBS -N openfoam_test

# Count the processes PBS assigned so mpirun matches the nodes/ppn request.
NPROCS=`wc -l < $PBS_NODEFILE`
cd /path/to/your/openfoam/case/dir
# Replace {openfoamSolverName} with your solver, e.g. icoFoam or simpleFoam.
mpirun -np $NPROCS -machinefile $PBS_NODEFILE {openfoamSolverName} -parallel
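Note that a solver can only be run with the -parallel flag after the case has been decomposed. A minimal sketch of the full sequence, assuming system/decomposeParDict in your case directory sets numberOfSubdomains to match the 16 processes requested above:

 decomposePar                    # split mesh and fields into processor* directories
 mpirun -np $NPROCS -machinefile $PBS_NODEFILE {openfoamSolverName} -parallel
 reconstructPar                  # merge per-processor results after the run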

The script is then submitted using qsub:

$ qsub job_script

where job_script is the name you gave the script file.
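After submission, the job can be tracked with qstat; the name set with #PBS -N makes it easy to spot:

 $ qstat -u $USER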

QSub FAQ

Portable Batch System: qsub

All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is essentially a shell script that also contains directives for PBS.

Usage
$ qsub job_script

Where job_script is the name of the file containing the script.

PBS Directives

PBS directives take the form:

#PBS -X value

Where X is one of many single-letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.

Example Job Script
 #!/bin/bash
 #
 # Use "workq" as the job queue, and specify the allocation code.
 #
 #PBS -q workq
 #PBS -A your_allocation_code
 # 
 # Assuming you want to run 16 processes, and each node supports 4 processes, 
 # you need to ask for a total of 4 nodes. The number of processes per node 
 # will vary from machine to machine, so double-check that you have the right 
 # values before submitting the job.
 #
 #PBS -l nodes=4:ppn=4
 # 
 # Set the maximum wall-clock time. In this case, 10 minutes.
 #
 #PBS -l walltime=00:10:00
 # 
 # Specify the name of a file which will receive all standard output,
 # and merge standard error with standard output.
 #
 #PBS -o /scratch/myName/parallel/output
 #PBS -j oe
 # 
 # Give the job a name so it can be easily tracked with qstat.
 #
 #PBS -N MyParJob
 #
 # That is it for PBS instructions. The rest of the file is a shell script.
 # 
 # PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
 #
 #   1. Copy the necessary files from your home directory to your scratch directory.
 #   2. Execute in your scratch directory.
 #   3. Copy any necessary files back to your home directory.

 # Let's mark the time things get started.

 date

 # Set some handy environment variables.

 export HOME_DIR=/home/$USER/parallel
 export WORK_DIR=/scratch/myName/parallel
 
 # Set a variable that will be used to tell MPI how many processes will be run.
 # This makes sure MPI gets the same information provided to PBS above.

 export NPROCS=`wc -l < $PBS_NODEFILE`

 # Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".

 cp $HOME_DIR/hydro $WORK_DIR
 cd $WORK_DIR
 mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

 # Mark the time processing ends.

 date
 
 # And we're out'a here!

 exit 0


Last modified: November 30 2023 22:50:15.