
Amber

About

"Amber" refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos.

Amber is distributed in two parts: AmberTools and Amber. You can use AmberTools without Amber, but not vice versa.

Amber 18 is compiled with AmberTools 18

When citing Amber18 or AmberTools18 please use the following: D.A. Case, I.Y. Ben-Shalom, S.R. Brozell, D.S. Cerutti, T.E. Cheatham, III, V.W.D. Cruzeiro, T.A. Darden, R.E. Duke, D. Ghoreishi, M.K. Gilson, H. Gohlke, A.W. Goetz, D. Greene, R. Harris, N. Homeyer, S. Izadi, A. Kovalenko, T. Kurtzman, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D.J. Mermelstein, K.M. Merz, Y. Miao, G. Monard, C. Nguyen, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, S. Schott-Verdugo, J. Shen, C.L. Simmerling, J. Smith, R. Salomon-Ferrer, J. Swails, R.C. Walker, J. Wang, H. Wei, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2018), AMBER 2018, University of California, San Francisco.

Amber 16 is compiled with AmberTools 17

When citing Amber16 or AmberTools17 please use the following: D.A. Case, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, D. Greene, N. Homeyer, S. Izadi, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, W.M. Botello-Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2017), AMBER 2017, University of California, San Francisco.

Versions and Availability

Softenv Keys for amber on supermike2
Machine      Version   Softenv Key
supermike2   12.0      +amber-12-Intel-13.0.0-openmpi-1.6.2-CUDA-4.2.9
supermike2   14.0      +amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
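
For example, to use Amber 14 on SuperMike2 one would add the corresponding key to ~/.soft (the SoftEnv workflow is described in detail below):

+amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0

@default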
Softenv FAQ

The information here is applicable to LSU HPC and LONI systems.

Shells

A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.

/bin/bash

System resource file: /etc/profile

When one accesses the shell, the following user files are read in if they exist (in order):

  1. ~/.bash_profile (anything sent to STDOUT or STDERR will cause things like rsync to break; see the sketch after this list)
  2. ~/.bashrc (interactive login only)
  3. ~/.profile
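
Because of the STDOUT/STDERR caveat above, one common pattern is to guard any output in ~/.bash_profile behind an interactive-shell test. A minimal sketch, with a purely illustrative greeting line:

	# In ~/.bash_profile: print only in interactive shells so that
	# non-interactive tools such as rsync and scp never receive stray output.
	case $- in
	    *i*) echo "Welcome to $(hostname)" ;;   # interactive shell: safe to print
	esac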

When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.

The default value of the PATH environment variable is set automatically using SoftEnv. See below for more information.

/bin/tcsh

The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.

Softenv

SoftEnv is a utility that helps users manage complex environments with potentially conflicting application versions and libraries.

System Default Path

When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/use.softenv.sh to set up the default path via the SoftEnv database.

SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.

Viewing Available Packages

The command softenv will provide a list of available packages. The listing will look something like:

$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8                       Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5              Ant, Java based XML make system version: 1.6.
+charm-5.9                     Applications: 'Charm++', version: 5.9 Charm++
+default                       this is the default environment...nukes /etc/
+essl-4.2                      Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03                   Applications: 'Gaussian', version: 03 Gaussia
... some stuff deleted ...

Managing SoftEnv

The file ~/.soft in the user's home directory is where the different packages are managed. Add the desired +keyword to your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package to their environment, the end of the .soft file should look like this:

+amber-8

@default

To update the environment after modifying this file, one simply uses the resoft command:

% resoft

The command soft can be used to manipulate the environment from the command line. It takes the form:

$ soft add/delete +keyword

Using this method of adding or removing keywords requires the user to pay attention to possible order dependencies: for best results, remove keywords in the reverse order in which they were added. This method is handy for testing individual keys, but can lead to trouble when changing multiple keys. Editing the .soft file and issuing the resoft command is the recommended way of dealing with multiple changes.
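
For example, a session that respects this reverse-order rule, using two keys from the listing above, might look like:

$ soft add +amber-8
$ soft add +gaussian-03
$ soft delete +gaussian-03
$ soft delete +amber-8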

Usage

Make sure module keys are matched with the corresponding versions of the compiler and MPI library. For instance, on SuperMike2, SuperMIC or Queenbee2:

module load amber/18/INTEL-170-MVAPICH2-2.2

MPI

Note: the usual executable name used is pmemd (serial, not recommended) or pmemd.MPI (parallel).

pmemd and pmemd.MPI in Amber 18 were built with Intel 17.0.0 and MVAPICH2 2.2. The module key "amber/18/INTEL-170-MVAPICH2-2.2" will load the corresponding versions of the compiler and MPI library as dependencies. Other versions of the Intel compiler and MPI libraries should be removed from the module list before loading the Amber 18 module key.
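
One simple way to satisfy this is to clear all loaded modules before loading the Amber key; the sketch below assumes the job needs no modules other than those pulled in as dependencies:

$ module purge
$ module load amber/18/INTEL-170-MVAPICH2-2.2
$ module list

The module list command can then be used to verify that the Intel 17.0.0 and MVAPICH2 2.2 dependencies have been loaded.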

On SuperMike2, SuperMIC and QB2, use "pmemd.MPI" to run Amber. Below is a sample script which runs Amber with 2 nodes (40 CPU cores):

	#!/bin/bash
	#PBS -A my_allocation
	#PBS -q checkpt
	#PBS -l nodes=2:ppn=20
	#PBS -l walltime=HH:MM:SS
	#PBS -j oe
	#PBS -N JOB_NAME
	#PBS -V

	cd $PBS_O_WORKDIR
	mpirun -np 40 $AMBERHOME/bin/pmemd.MPI -O -i mdin.CPU -o mdout -p prmtop -c inpcrd
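
Assuming the script above is saved as, for example, amber_cpu.pbs (any filename will do), it is submitted and monitored with the standard PBS commands:

$ qsub amber_cpu.pbs
$ qstat -u $USER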
    

GPU acceleration

Note: the usual executable name used for Amber 16 and Amber 18 GPU acceleration is pmemd.cuda (serial) or pmemd.cuda.MPI (parallel).

pmemd.cuda and pmemd.cuda.MPI in Amber 16 were built with Intel 15.0.0 and CUDA 7.5. Please load the Intel 15.0.0 compiler and CUDA 7.5 into your user environment in order to run pmemd.cuda.
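
The exact module key names for these dependencies vary by cluster, so the names below are illustrative only; use module avail to find the keys actually installed:

$ module avail                  # locate the Intel 15.0.0 and CUDA 7.5 keys on your cluster
$ module load intel/15.0.0      # illustrative key name
$ module load cuda/7.5          # illustrative key name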

pmemd.cuda and pmemd.cuda.MPI in Amber 18 were built with Intel 17.0.0, MVAPICH2 2.2 and CUDA 9. The module key "amber/18/INTEL-170-MVAPICH2-2.2" will load these dependencies. Other versions of the Intel compiler, MPI libraries and CUDA should be removed from the module list before loading the Amber 18 module key.

Please do not attempt to run regular GPU MD runs across multiple nodes: the InfiniBand interconnect cannot keep up with the computation speed of current GPUs, so multi-node GPU runs scale poorly.

Use of the hybrid or v100 queue is required when running GPU simulations with Amber 18 on SuperMIC.

On SuperMIC and QB2, use "pmemd.cuda" to run Amber 16 with GPU acceleration in serial. Below is a sample script which runs Amber 16 on 1 node:

		#!/bin/bash
		#PBS -A my_allocation
		#PBS -q hybrid
		#PBS -l nodes=1:ppn=20
		#PBS -l walltime=HH:MM:SS
		#PBS -j oe
		#PBS -N JOB_NAME
		#PBS -V

		cd $PBS_O_WORKDIR
		$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd
    

GPU acceleration on SuperMIC must use a hybrid or v100 node. Note that pmemd.cuda is a serial program, so a parallel launcher such as mpirun is not required; if one is used anyway, set mpirun -np 1.

On QB2, as each compute node has two GPUs, "pmemd.cuda.MPI" can be used to run Amber 16 with GPU acceleration in parallel. Below is a sample script which runs Amber 16 on 1 node (2 GPUs) on QB2:

		#!/bin/bash
		#PBS -A my_allocation
		#PBS -q hybrid
		#PBS -l nodes=1:ppn=20
		#PBS -l walltime=HH:MM:SS
		#PBS -j oe
		#PBS -N JOB_NAME
		#PBS -V

		cd $PBS_O_WORKDIR
		mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin.GPU -o mdout_2gpu -p prmtop -c inpcrd -ref inpcrd
      

Use -np # where # is the number of GPUs you are requesting, NOT the number of CPU cores. Note that pmemd.cuda.MPI is significantly faster than pmemd.cuda only for production runs of large systems.

Resources

  • The Amber Home Page has a variety of on-line resources available, including manuals and tutorials.
