"Amber" refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos.
Amber is distributed in two parts: AmberTools and Amber. You can use AmberTools without Amber, but not vice versa.
Amber 18 is compiled with AmberTools 18
When citing Amber18 or AmberTools18 please use the following: D.A. Case, I.Y. Ben-Shalom, S.R. Brozell, D.S. Cerutti, T.E. Cheatham, III, V.W.D. Cruzeiro, T.A. Darden, R.E. Duke, D. Ghoreishi, M.K. Gilson, H. Gohlke, A.W. Goetz, D. Greene, R Harris, N. Homeyer, S. Izadi, A. Kovalenko, T. Kurtzman, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D.J. Mermelstein, K.M. Merz, Y. Miao, G. Monard, C. Nguyen, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, S. Schott-Verdugo, J. Shen, C.L. Simmerling, J. Smith, R. Salomon-Ferrer, J. Swails, R.C. Walker, J. Wang, H. Wei, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2018), AMBER 2018, University of California, San Francisco.
Amber 16 is compiled with AmberTools 17
When citing Amber16 or AmberTools17 please use the following: D.A. Case, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, D. Greene, N. Homeyer, S. Izadi, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, W.M. Botello-Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2017), AMBER 2017, University of California, San Francisco.
Versions and Availability
Module Names for amber on qb
Module FAQ
The information here is applicable to LSU HPC and LONI systems.
A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.
System resource file: /etc/profile
When one accesses the shell, the following user files are read, if they exist, in order:
- ~/.bash_profile (anything sent to STDOUT or STDERR will cause things like rsync to break)
- ~/.bashrc (interactive login only)
When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.
The default value of the environment variable PATH is set automatically using SoftEnv. See below for more information.
The file ~/.cshrc is used to customize the user's environment if the login shell is /bin/tcsh.
Modules is a utility which helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.
When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
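For example, a ~/.modules file that loads Amber at every login might look like the following (a sketch; the module name is illustrative and must match a key shown by module avail on your system):

# ~/.modules -- module commands run at every login (illustrative example)
module load amber/18/INTEL-170-MVAPICH2-2.2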
Viewing Available Modules
$ module avail
displays a list of all the modules available. The list will look something like:
--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2
vmatch/2.2.2
---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1        GCC/4.9.0            INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0        INTEL/14.0.2         INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---
The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).
Besides avail, there are other basic module commands to use for manipulating the environment. These include:
add/load mod1 mod2 ... modn . . . Add modules
rm/unload mod1 mod2 ... modn  . . Remove modules
switch/swap mod . . . . . . . . . Switch or swap one module for another
display/show  . . . . . . . . . . List modules loaded in the environment
avail . . . . . . . . . . . . . . List available module names
whatis mod1 mod2 ... modn . . . . Describe listed modules
The -h option to module will list all available commands.
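For example, a typical interactive sequence might look like the following (the version strings are illustrative):

$ module avail amber                             # list the Amber modules installed on this cluster
$ module load amber/18/INTEL-170-MVAPICH2-2.2    # load Amber 18 and its compiler/MPI dependencies
$ module list                                    # show everything currently loaded
$ module unload amber/18/INTEL-170-MVAPICH2-2.2  # remove it again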
Module is currently available only on SuperMIC.
Make sure Module keys are matched with the corresponding versions of the compiler and MPI library. For instance, on SuperMike2, SuperMIC, or Queenbee2:
module load amber/18/INTEL-170-MVAPICH2-2.2
Note: the usual executable name used is pmemd (serial, not recommended) or pmemd.MPI (parallel).
pmemd and pmemd.MPI in Amber 18 were built with Intel 17.0.0 and mvapich2 2.2. The Module key "amber/18/INTEL-170-MVAPICH2-2.2" will load the corresponding versions of the compiler and MPI library as dependencies. Other versions of the Intel compiler and MPI library should be removed from the Module list before loading the Amber 18 module key, as in the sketch below.
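A minimal sketch of setting up a clean Amber 18 environment in an interactive session (note that module purge removes everything currently loaded, including any site defaults, so you may prefer to unload only the conflicting compiler and MPI keys):

$ module purge                                   # clear all loaded modules, including conflicting compilers
$ module load amber/18/INTEL-170-MVAPICH2-2.2    # pulls in Intel 17.0.0 and mvapich2 2.2 as dependencies
$ module list                                    # verify that only the expected modules are loaded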
On SuperMike2, SuperMIC and QB2, use "pmemd.MPI" to run Amber. Below is a sample script which runs Amber with 2 nodes (40 CPU cores):
#!/bin/bash
#PBS -A my_allocation
#PBS -q checkpt
#PBS -l nodes=2:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V

cd $PBS_O_WORKDIR
mpirun -np 40 $AMBERHOME/bin/pmemd.MPI -O -i mdin.CPU -o mdout -p prmtop -c inpcrd
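Assuming the script above is saved as amber_cpu.pbs (the file name is arbitrary), submit and monitor it with the standard PBS commands:

$ qsub amber_cpu.pbs     # submit the job; qsub prints the job ID
$ qstat -u $USER         # check the status of your queued and running jobs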
Note: the usual executable name used for Amber 16 and Amber 18 GPU acceleration is pmemd.cuda (serial) or pmemd.cuda.MPI (parallel).
pmemd.cuda and pmemd.cuda.MPI in Amber 16 were built with Intel 15.0.0 and CUDA 7.5. Please load the Intel 15.0.0 compiler and CUDA 7.5 into your user environment in order to run pmemd.cuda.
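A minimal sketch of setting that up with Modules (the key names below are illustrative, not the exact keys on any particular cluster; run module avail to find the Intel 15.0.0, CUDA 7.5, and Amber 16 keys installed on your system):

$ module avail                 # find the exact Intel, CUDA, and Amber 16 key names on your cluster
$ module load INTEL/15.0.0     # illustrative name for the Intel 15.0.0 compiler key
$ module load cuda/7.5         # illustrative name for the CUDA 7.5 key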
pmemd.cuda and pmemd.cuda.MPI in Amber 18 were built with Intel 17.0.0, mvapich2 2.2, and CUDA 9. The Module key "amber/18/INTEL-170-MVAPICH2-2.2" will load these dependencies. Other versions of the Intel compiler, MPI library, and CUDA should be removed from the Module list before loading the Amber 18 module key.
Please do not attempt to run regular GPU MD runs across multiple nodes: the InfiniBand interconnect cannot keep up with the computation speed of current GPUs.
Using the hybrid or v100 queue is required when running GPU simulations with Amber 18 on SuperMIC.
On SuperMIC and QB2, use "pmemd.cuda" to run Amber 16 with GPU acceleration in serial. Below is a sample script which runs Amber 16 on 1 node:
#!/bin/bash
#PBS -A my_allocation
#PBS -q hybrid
#PBS -l nodes=1:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V

cd $PBS_O_WORKDIR
$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd
GPU acceleration on SuperMIC must use a hybrid or v100 node. Note that pmemd.cuda is a serial program, so no parallel launcher such as mpirun is required (or use mpirun -np 1).
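Once the job is running, an optional sanity check is to confirm that the GPU is actually being used; on most PBS/Torque systems you may ssh to a compute node while you have a job running on it (if your site permits it):

$ qstat -n <jobid>     # show which compute node(s) the job was assigned
$ ssh <nodename>       # log in to that node while your job is running
$ nvidia-smi           # the pmemd.cuda process should appear with high GPU utilization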
On QB2, as each compute node has two GPUs, "pmemd.cuda.MPI" can be used to run Amber 16 with GPU acceleration in parallel. Below is a sample script which runs Amber 16 on 1 node (2 GPUs) on QB2:
#!/bin/bash
#PBS -A my_allocation
#PBS -q hybrid
#PBS -l nodes=1:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V

cd $PBS_O_WORKDIR
mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin.GPU -o mdout_2gpu -p prmtop -c inpcrd -ref inpcrd
Use -np # where # is the number of GPUs you are requesting, NOT the number of CPUs. Note that pmemd.cuda.MPI is significantly faster than pmemd.cuda only for production runs of large systems.
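If instead you want to use the two GPUs on a QB2 node for two independent serial simulations, the usual approach is to pin each pmemd.cuda run to one device with CUDA_VISIBLE_DEVICES. A sketch of the job-script body (the input and output file names are illustrative):

# run two independent serial GPU simulations, one per device, inside the same job
export CUDA_VISIBLE_DEVICES=0
$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu0 -p prmtop -c inpcrd -r restrt_gpu0 -x mdcrd_gpu0 &
export CUDA_VISIBLE_DEVICES=1
$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu1 -p prmtop -c inpcrd -r restrt_gpu1 -x mdcrd_gpu1 &
wait    # do not let the job script exit before both runs finish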
- The Amber Home Page has a variety of on-line resources available, including manuals and tutorials.
Last modified: February 01 2019 15:17:39.