
NAMD

Versions and Availability

About the Software

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 200,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

Homepage: http://www.ks.uiuc.edu/Research/namd/

Usage

Depending on which cluster it is installed on, NAMD may or may not require MPI to run.
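
On clusters that use Environment Modules, a quick way to check which NAMD builds are installed (the exact module names differ from cluster to cluster) is:

module avail namd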

Non-MPI

On SuperMIC, use "charmrun" to run NAMD. Below is a sample script that runs NAMD on 4 nodes (80 CPU cores and 8 Xeon Phi co-processors):

#!/bin/bash

#PBS -A hpc_smictest3
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10/INTEL-14.0.2-ibverbs-mic

# Build a charmrun nodelist file ("host <name>", one line per node) from the PBS node list
for node in `cat $PBS_NODEFILE | uniq`; do echo host $node; done > hostfile

charmrun ++p 80 ++nodelist ./hostfile ++remote-shell ssh `which namd2` apoa1.namd

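Save the script to a file (the file name below is only an example) and submit it with qsub:

qsub namd_supermic.pbs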

MPI

Use "mpirun" to run NAMD on clusters where it is built with MPI (e.g. QB2). Below is a sample script that runs NAMD on 4 nodes (80 CPU cores):

#!/bin/bash

#PBS -A your_allocation_name
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0

mpirun -n 80 -f $PBS_NODEFILE `which namd2` apoa1.namd
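
If you change the number of nodes, the process count passed to mpirun must change with it (nodes x ppn). One way to avoid hard-coding it is to derive the count from the PBS node file, as the GPU example further below does:

nprocs=`wc -l $PBS_NODEFILE | awk '{print $1}'`
mpirun -n $nprocs -f $PBS_NODEFILE `which namd2` apoa1.namd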

On Super Mike 2, first make sure that the proper keys are present in your .soft file:

+fftw-3.3.3-Intel-13.0.0-openmpi-1.6.2
+NAMD-2.9-Intel-13.0.0-openmpi-1.6.2
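
After editing .soft, the new keys take effect at your next login; on systems managed by SoftEnv you can usually apply them to the current session with:

resoft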

Then run NAMD using a script similar to this one:

#!/bin/bash

#PBS -A hpc_your_allocation
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=16
#PBS -q checkpt

cd $PBS_O_WORKDIR

mpirun -n 64 -hostfile $PBS_NODEFILE `which namd2` apoa1.namd

GPU

PBS

To run NAMD with GPU support on clusters using PBS (e.g. QB2), please use the script below as a reference. The example data and detailed instructions can be downloaded from the NAMD tutorial titled "GPU Accelerated Molecular Dynamics Simulation, Visualization, and Analysis".

#!/bin/bash

#PBS -A your_allocation_name
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0

nprocs=`wc -l $PBS_NODEFILE | awk '{print $1}'`
mpirun -n $nprocs -f $PBS_NODEFILE /usr/local/packages/namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0/namd2 apoa1.namd
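
Depending on the NAMD build, CUDA-specific runtime options such as "+devices" (to select which GPUs to use) and "+idlepoll" (to keep CPU threads polling for GPU results) can also be passed after the namd2 binary; check the documentation of the installed version before relying on them. For example, assuming two GPUs per node:

mpirun -n $nprocs -f $PBS_NODEFILE /usr/local/packages/namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0/namd2 +idlepoll +devices 0,1 apoa1.namd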

Slurm

On QB3, which uses Slurm, please use the script template below as a reference:

#!/bin/bash
#SBATCH -p gpu
#SBATCH -N 2
#SBATCH -n 96
#SBATCH -t 00:10:00
#SBATCH -A your_allocation_name

module load namd/2.14b2/intel-19.0.5-cuda
echo "SLURM_NTASKS=$SLURM_NTASKS"
SECONDS=0    # reset bash's built-in SECONDS counter to time the run
srun -n $SLURM_NTASKS $(which namd2) apoa1.namd
echo "took:$SECONDS sec."
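
Save the script (the file name here is only an example) and submit it with sbatch:

sbatch namd_qb3_gpu.sh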

Resources

Last modified: October 16 2020 14:18:17.