"Amber" refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos.
Amber is distributed in two parts: AmberTools and Amber. You can use AmberTools without Amber, but not vice versa.
Amber 16 is compiled with AmberTools 17.
When citing Amber16 or AmberTools17 please use the following: D.A. Case, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, D. Greene, N. Homeyer, S. Izadi, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, J. Liu, T. Luchko, R. Luo, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, I. Omelyan, A. Onufriev, F. Pan, R. Qi, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, W.M. Botello-Smith, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, D.M. York and P.A. Kollman (2017), AMBER 2017, University of California, San Francisco.
Versions and Availability
Softenv Keys for amber on eric
The information here is applicable to LSU HPC and LONI systems.
A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.
System resource file: /etc/profile
When one accesses the shell, the following user files are read in, if they exist, in this order:
- ~/.bash_profile (anything sent to STDOUT or STDERR here will break tools such as rsync)
- ~/.bashrc (interactive login only)
When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.
The default value of the environmental variable, PATH, is set automatically using SoftEnv. See below for more information.
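Because anything written to STDOUT or STDERR from ~/.bash_profile can break non-interactive tools such as rsync and scp, a common pattern is to keep that file silent and confine any output to interactive shells. A minimal sketch (illustrative only; it leaves the SoftEnv-managed PATH untouched):

```shell
# ~/.bash_profile -- minimal sketch; keep it silent so rsync and scp keep working
# Pull in interactive settings from ~/.bashrc if present
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi

# Print messages only when the shell is interactive ($- contains "i")
case "$-" in
    *i*) echo "Logged in as $USER" ;;
esac
```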
The file ~/.cshrc is used to customize the user's environment when the login shell is /bin/tcsh.
SoftEnv is a utility that helps users manage complex environments with potentially conflicting application versions and libraries.
System Default Path
When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/use.softenv.sh to set up the default path via the SoftEnv database.
SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.
Viewing Available Packages
The command softenv will provide a list of available packages. The listing will look something like:
$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8              Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5     Ant, Java based XML make system version: 1.6.
+charm-5.9            Applications: 'Charm++', version: 5.9 Charm++
+default              this is the default environment...nukes /etc/
+essl-4.2             Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03          Applications: 'Gaussian', version: 03 Gaussia
... some stuff deleted ...
The file ~/.soft in the user's home directory is where the different packages are managed. Add the desired +keyword to your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package to one's environment, the end of the .soft file should look like this:
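The exact key name depends on the machine (see the softenv listing above). Assuming the +amber-8 key from the sample listing, and the usual convention of adding keys before the @default macro, the end of a .soft file might look like:

```
+amber-8
@default
```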
To update the environment after modifying this file, simply use the resoft command:
$ resoft
The command soft can be used to manipulate the environment from the command line. It takes the form:
$ soft add/delete +keyword
Using this method of adding or removing keywords requires the user to pay attention to possible order dependencies: best results come from removing keywords in the reverse order in which they were added. This is handy for testing out individual keys, but can lead to trouble when changing multiple keys. Editing the .soft file and issuing resoft is the recommended way to make multiple changes.
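As a sketch of the reverse-order rule (the key names here are illustrative; substitute the keys listed by softenv on your machine):

```
$ soft add +openmpi-1.6.2-Intel-13.0.0
$ soft add +amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
  ... run jobs ...
$ soft delete +amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
$ soft delete +openmpi-1.6.2-Intel-13.0.0
```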
Make sure softenv keys are matched with the corresponding versions of the compiler and MPI library. For instance, on SuperMike:
+amber-14-Intel-13.0.0-openmpi-1.6.2-CUDA-5.0
+openmpi-1.6.2-Intel-13.0.0
+cuda-5.0
Amber is normally run via a PBS job script. To run Amber 16 in batch, remember to include either #PBS -V (when the Amber 16 module key has already been loaded in your login environment) or module load amber/16/INTEL-140-MVAPICH2-2.0 in the PBS script.
Note: the usual executable name used is pmemd (serial, not recommended) or pmemd.MPI (parallel).
On SuperMIC and QB2, use "pmemd.MPI" to run Amber. Below is a sample script which runs Amber with 2 nodes (40 CPU cores):
#!/bin/bash
#PBS -A my_allocation
#PBS -q checkpt
#PBS -l nodes=2:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V
cd $PBS_O_WORKDIR
mpirun -np 40 $AMBERHOME/bin/pmemd.MPI -O -i mdin.CPU -o mdout -p prmtop -c inpcrd
Note: the usual executable name used for Amber 16 GPU acceleration is pmemd.cuda (serial) or pmemd.cuda.MPI (parallel).
pmemd.cuda and pmemd.cuda.MPI in Amber 16 were built with the Intel 15.0.0 compiler and CUDA 7.5, both of which are required to run the GPU executables. Please load the Intel 15.0.0 compiler and CUDA 7.5 into your user environment before running pmemd.cuda.
Only pmemd.cuda is recommended for GPU acceleration on SuperMIC, as each compute node on SuperMIC has only one GPU. Do not attempt to run regular GPU MD runs across multiple nodes: the InfiniBand interconnect cannot keep up with the computation speed of the GPUs. Running on SuperMIC requires the hybrid queue.
On SuperMIC and QB2, use "pmemd.cuda" to run Amber 16 with GPU acceleration in serial. Below is a sample script which runs Amber 16 on 1 node:
#!/bin/bash
#PBS -A my_allocation
#PBS -q hybrid
#PBS -l nodes=1:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V
cd $PBS_O_WORKDIR
$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd
GPU acceleration must use a hybrid node on SuperMIC. Note that pmemd.cuda is a serial program, so no parallel launcher such as mpirun is required (or set mpirun -np 1).
On QB2, each compute node has two GPUs, so "pmemd.cuda.MPI" can be used to run Amber 16 with GPU acceleration in parallel. Below is a sample script which runs Amber 16 on 1 node (2 GPUs) on QB2:
#!/bin/bash
#PBS -A my_allocation
#PBS -q hybrid
#PBS -l nodes=1:ppn=20
#PBS -l walltime=HH:MM:SS
#PBS -j oe
#PBS -N JOB_NAME
#PBS -V
cd $PBS_O_WORKDIR
mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin.GPU -o mdout_2gpu -p prmtop -c inpcrd -ref inpcrd
Use -np # where # is the number of GPUs you are requesting, NOT the number of CPU cores. Note that pmemd.cuda.MPI is significantly faster than pmemd.cuda only for production runs of large systems.
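The process count passed to mpirun should therefore equal the total number of GPUs, which on QB2 is two per node. A small sketch of that arithmetic (the variable names are illustrative, not PBS-provided):

```shell
#!/bin/bash
# Compute the mpirun process count for pmemd.cuda.MPI:
# one MPI rank per GPU, with 2 GPUs per QB2 node.
nodes=1
gpus_per_node=2
np=$((nodes * gpus_per_node))
echo "mpirun -np ${np} \$AMBERHOME/bin/pmemd.cuda.MPI ..."
```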
- The Amber Home Page has a variety of on-line resources available, including manuals and tutorials.
Last modified: September 18 2017 11:59:33.