autodock
Versions and Availability
▶ Display Module Names for autodock on all clusters.
Machine | Version | Module |
---|---|---|
None Available | N/A | N/A |
▶ Module FAQ?
The information here is applicable to LSU HPC and LONI systems.
Shells
A user may choose between /bin/bash and /bin/tcsh. Details about each shell follow.
/bin/bash
System resource file: /etc/profile
When a user accesses the shell, the following user files are read in if they exist (in order):
- ~/.bash_profile (note: anything this file sends to STDOUT or STDERR will cause tools like rsync to break)
- ~/.bashrc (interactive login only)
- ~/.profile
When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.
The default value of the PATH environment variable is set automatically using Modules; see below for more information.
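Because output from startup files breaks non-interactive tools, a common safeguard is to make any output in ~/.bash_profile conditional on the shell being interactive. A minimal sketch (the greeting is illustrative):

# In ~/.bash_profile: only produce output when the shell is interactive,
# since output from startup files breaks tools like rsync and scp.
if [[ $- == *i* ]]; then
    echo "Welcome to the cluster"
fi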
/bin/tcsh
The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.
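For example, a ~/.cshrc might contain lines like the following (both entries are illustrative):

# Example ~/.cshrc customizations
setenv EDITOR vim
alias ll 'ls -l'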
Modules
Modules is a utility which helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.
Default Setup
When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
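For example, a .modules file might contain lines like these (the module names are taken from the sample listing below and are illustrative; pick names from `module avail` on your cluster):

# ~/.modules: modules to load at login
module load INTEL/14.0.2
module load INTEL-140-MPICH/3.1.1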
Viewing Available Modules
The command
$ module avail
displays a list of all the modules available. The list will look something like:
--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2          vmatch/2.2.2
---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1    GCC/4.9.0       INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0    INTEL/14.0.2    INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---
The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).
Managing Modules
Besides avail, there are other basic module commands to use for manipulating the environment. These include:
add/load mod1 mod2 ... modn . . . Add modules
rm/unload mod1 mod2 ... modn  . . Remove modules
switch/swap mod . . . . . . . . . Switch or swap one module for another
display/show mod  . . . . . . . . Show the changes a module makes to the environment
list  . . . . . . . . . . . . . . List modules loaded in the environment
avail . . . . . . . . . . . . . . List available module names
whatis mod1 mod2 ... modn . . . . Describe listed modules
The -h option to module will list all available commands.
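For example, a typical session using names from the sample listing above might look like this:

$ module load GCC/4.9.0                  # add GCC to the environment
$ module swap GCC/4.9.0 INTEL/14.0.2     # replace GCC with the Intel compiler
$ module whatis INTEL/14.0.2             # print a short description
$ module list                            # show currently loaded modules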
▶ Didn't find the version you want to use?
If a software package you would like to use for your research is not available on a cluster, you can request it to be installed. The software requests are evaluated by the HPC staff on a case-by-case basis. Before you send in a software request, please go through the information below.
Types of request
Depending on how many users need to use the software, software requests are divided into three types, each of which corresponds to the location where the software is installed:
- The user's home directory
- Software packages installed here will be accessible only to the user.
- It is suitable for software packages that will be used by a single user.
- Python, Perl and R modules should be installed here (see the sketch after this list).
- /project
- Software packages installed in /project can be accessed by a group of users.
- It is suitable for software packages that
- need to be shared by users from the same research group, or
- are bigger than the quota on the home file system.
- This type of request must be sent by the PI of the research group, who may be asked to apply for a storage allocation.
- /usr/local/packages
- Software packages installed under /usr/local/packages can be accessed by all users.
- It is suitable for software packages that will be used by users from multiple research groups.
- This type of request must be sent by the PI of a research group.
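For the first type of request, a Python package can usually be installed into your home directory with pip's --user flag (a sketch; the package name is illustrative, and Perl and R use their own mechanisms):

# Installs into ~/.local under your home directory
$ pip install --user biopython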
How to request
Please send an email to sys-help@loni.org with the following information:
- Your user name
- The name of the cluster where you want to use the requested software
- The name, version and download link of the software
- Specific installation instructions, if any (e.g. compiler flags, variants and flavors, etc.)
- Why the software is needed
- Where the software should be installed (locally, /project, or /usr/local/packages) and a justification explaining how many users are expected to use it
Please note that, once the software is installed, testing and validation are the user's responsibility.
About the Software
AutoDock is a suite of automated docking tools. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.
- Homepage: http://autodock.scripps.edu/
Usage
Please be aware that AutoDock and AutoGrid are serial (non-parallel) codes. They should be run in the single queue, which uses one processor core; running in any other queue will leave cores idle while the job is still charged for all cores.
AutoGrid must be executed prior to AutoDock:
usage: AutoGrid4 -p parameter_filename
                 -l log_filename
                 -d (increment debug level)
                 -h (display this message)
                 --version (print autogrid version)

usage: AutoDock4 -p parameter_filename
                 -l log_filename
                 -k (keep original residue numbers)
                 -i (ignore header-checking)
                 -t (parse PDBQT file for torsions, then stop)
                 -d (increment debug level)
                 -C (print copyright notice)
                 --version (print autodock version)
                 --help (display this message)
To successfully run an AutoDock simulation, first run an AutoGrid calculation to generate the grid maps of potentials around the receptor atoms.
▶ Open Example?
#!/bin/bash
#PBS -A your_allocation
#PBS -q single
# Note: a single queue is not present on Queen Bee;
#       use workq or checkpt, but you will be charged for all cores.
#PBS -M your_email
# Change ppn to match cluster (4, 8 or 16) if no single queue.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=06:00:00
#PBS -V
#PBS -o AutoGrid_test.out
#PBS -e AutoGrid_test.err
#PBS -N autogridtest

export EXEC=autogrid4
export INPUT=hsg1.gpf
export OUTPUT=hsg1.glg
export WORK_DIR=$PBS_O_WORKDIR

cd $WORK_DIR
$EXEC -p $INPUT -l $OUTPUT
Submit your script using qsub.
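For example, assuming the script above was saved as autogrid.pbs (a filename chosen here for illustration):

$ qsub autogrid.pbs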
▶ QSub FAQ?
Portable Batch System: qsub
All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is basically a shell script which also contains directives for PBS.
Usage
$ qsub job_script
Where job_script is the name of the file containing the script.
PBS Directives
PBS directives take the form:
#PBS -X value
Where X is one of many single-letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.
Example Job Script
#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started.
date

# Set some handy environment variables.
export HOME_DIR=/home/$USER/parallel
export WORK_DIR=/scratch/myName/parallel

# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=`wc -l $PBS_NODEFILE | gawk '//{print $1}'`

# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

# Mark the time processing ends.
date

# And we're out'a here!
exit 0
If the autogrid simulation has run to completion without errors, you can run the autodock simulation.
▶ Open Example?
#!/bin/bash
#PBS -A your_allocation
#PBS -q single
# Note: a single queue is not present on Queen Bee;
#       use workq or checkpt, but you will be charged for all cores.
#PBS -M your_email
# Change ppn to match cluster (4, 8 or 16) if no single queue.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=06:00:00
#PBS -V
#PBS -o AutoDock_test.out
#PBS -e AutoDock_test.err
#PBS -N autodocktest

export EXEC=autodock4
export INPUT=ind.dpf
export OUTPUT=ind.dlg
export WORK_DIR=$PBS_O_WORKDIR

cd $WORK_DIR
$EXEC -p $INPUT -l $OUTPUT
Submit your script using qsub.
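For example, again with an illustrative filename:

$ qsub autodock.pbs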
Note: To run an autodock calculation successfully, your autogrid job must complete without errors. If you are comfortable with submit scripts, you can submit both the autogrid and autodock jobs at once using PBS job chains and dependencies, as described below.
▶ PBS Job Chains and Dependencies FAQ?
PBS Job Chains
Quite often, a single simulation requires multiple long runs which must be processed in sequence. One method for creating a sequence of batch jobs is to have each job script execute qsub to submit its successor. We strongly discourage such recursive, or "self-submitting," scripts, because chaining this way can fail: when a job hits its time limit, the batch system kills it, and the command to submit the subsequent job is never processed.
PBS allows users to move the logic for chaining from the script and into the scheduler. This is done with a command line option:
$ qsub -W depend=afterok:<jobid> <job_script>
This tells the job scheduler that the script being submitted should not start until the job <jobid> completes successfully. The following conditions are supported:
- afterok:<jobid>
- Job is scheduled if the job <jobid> exits without errors or is successfully completed.
- afternotok:<jobid>
- Job is scheduled if job <jobid> exited with errors.
- afterany:<jobid>
- Job is scheduled if the job <jobid> exits with or without errors.
One method to simplify this process is to write multiple batch scripts (job1.pbs, job2.pbs, job3.pbs, etc.) and submit them using the following script:
#!/bin/bash
FIRST=$(qsub job1.pbs)
echo $FIRST
SECOND=$(qsub -W depend=afterany:$FIRST job2.pbs)
echo $SECOND
THIRD=$(qsub -W depend=afterany:$SECOND job3.pbs)
echo $THIRD
Modify the script according to the number of chained jobs required. Job <$FIRST> will be placed in the queue, while jobs <$SECOND> and <$THIRD> will be held with the "Not Queued" (NQ) flag in Batch Hold. When <$FIRST> completes, the NQ flag on <$SECOND> will be replaced with the "Queued" (Q) flag, and the job will be moved to the active queue.
A few words of caution: if you list the dependency as "afterok" and your job exits with errors (or as "afternotok" and it exits without errors), your subsequent jobs will be killed due to "dependency not met".
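Applied to the AutoDock workflow above, a minimal sketch might look like this, assuming the AutoGrid and AutoDock scripts from the earlier examples were saved as autogrid.pbs and autodock.pbs (hypothetical filenames):

#!/bin/bash
# Submit the AutoGrid job, then chain the AutoDock job so that it
# starts only if the AutoGrid job finishes without errors (afterok).
GRID=$(qsub autogrid.pbs)
echo $GRID
qsub -W depend=afterok:$GRID autodock.pbs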
Resources
- The AutoDock home page has a variety of resources that may be of interest.
Last modified: September 10 2020 11:58:50.