piny
Versions and Availability
About the Software
PINY_MD(c) is a multipurpose, object-oriented molecular simulation package developed as a collaborative effort between Indiana University, New York University, and the University of Pennsylvania.
Usage
To use PINY_MD, invoke the executable and provide it with the name of an input file as an argument. Generically, the command looks like:
$ piny_md_machine sim_input
To find the name assigned to the executable, use the soft-dbq command to examine the SoftEnv key used for PINY_MD. Its output includes the path to the executable. For instance:
$ soft-dbq +piny-md-intel-11.1-mvapich-1.1
This is all the information associated with
the key or macro +piny-md-intel-11.1-mvapich-1.1.
-------------------------------------------
Name: +piny-md-intel-11.1-mvapich-1.1
Description: @types: Applications @name: piny-md @version: Aug 30,
2005 @build: mvapich-1.1-intel-10.1 @about: PINY_MD(c) is a
multipurpose, object-oriented molecular simulation package developed
as a collaborative effort between Indiana University, New York
University and the University of Pennsylvania.
Flags: none
Groups: none
Exists on: Linux
-------------------------------------------
On the Linux architecture,
the following will be done to the environment:
The following environment changes will be made:
PATH = ${PATH}:/usr/local/packages/piny-md/intel-11.1-mvapich-1.1/bin
$ ls /usr/local/packages/piny-md/intel-11.1-mvapich-1.1/bin
piny_md_par
This shows the path to PINY_MD and the existence of a single executable named piny_md_par. The _par suffix indicates it is a parallel build, and the key name shows it was built with mvapich 1.1.
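To make the executable available in your own shell, the usual SoftEnv workflow is to add the key to your ~/.soft file and refresh the environment. A minimal sketch, assuming SoftEnv manages your login environment on the cluster:
$ echo "+piny-md-intel-11.1-mvapich-1.1" >> ~/.soft
$ resoft
$ which piny_md_par
/usr/local/packages/piny-md/intel-11.1-mvapich-1.1/bin/piny_md_par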
Setting up a simulation is the hard part of using the program. For guidance, please refer to the Resources section below.
QSub FAQ
Portable Batch System: qsub
All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is basically a shell script which also contains directives for PBS.
Usage
$ qsub job_script
Where job_script is the name of the file containing the script.
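On success, qsub prints the job identifier, which can then be used to track the job. A brief illustration (the script name and job ID shown here are hypothetical, and the ID format varies by site):
$ qsub piny_job.pbs
123456.machine1
$ qstat -u $USER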
PBS Directives
PBS directives take the form:
#PBS -X value
Where X is one of many single-letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.
Example Job Script
#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
# 1. Copy the necessary files from your home directory to your scratch directory.
# 2. Execute in your scratch directory.
# 3. Copy any necessary files back to your home directory.
# Let's mark the time things get started.
date
# Set some handy environment variables.
export HOME_DIR=/home/$USER/parallel
export WORK_DIR=/scratch/myName/parallel
# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=$(wc -l < $PBS_NODEFILE)
# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro
# Mark the time processing ends.
date
# And we're out'a here!
exit 0
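To adapt the scheme above to PINY_MD, replace the hydro program with the piny_md_par executable and stage your simulation input alongside it. A sketch of the execution portion, where sim_input stands in for your own input file; with the SoftEnv key active, piny_md_par can also be invoked by name alone:
# Copy the input file to the scratch directory and run the parallel build.
cp $HOME_DIR/sim_input $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
  /usr/local/packages/piny-md/intel-11.1-mvapich-1.1/bin/piny_md_par sim_input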
Resources
Last modified: September 10 2020 11:58:50.