
wien2k

Versions and Availability

About the Software

WIEN2k performs electronic structure calculations of solids using density functional theory (DFT), based on the full-potential (linearized) augmented plane-wave ((L)APW) + local orbitals method.

Usage

WIEN2k is normally run via a PBS batch job.


Sample PBS Script

The example assumes that a wien2k softenv key built against MVAPICH has been set. The following script was provided by one of our users and should be treated as a template.

#!/bin/tcsh
#PBS -A your_allocation
# Specify the allocation; change it to your own.
#PBS -q checkpt
# The queue to be used.
#PBS -l nodes=1:ppn=4
# Number of nodes and processes per node; on QB2, use ppn=20.
#PBS -l walltime=1:00:00
# Requested wall-clock time.
#PBS -o wien2k_output
# Name of the standard output file ("wien2k_output").
#PBS -j oe
# Merge standard error into the standard output file.
#PBS -N wien2k_test
# Name of the job (as it will appear in qstat output).
#
# cd to the directory containing your input files
cd $PBS_O_WORKDIR

# Start creating .machines.
# Modify the commands below to match the cluster you are running on;
# the following example is for Eric, where the compute nodes
# have 7-character hostnames such as eric001.
cat $PBS_NODEFILE >.machines_current
set aa=`wc .machines_current`
echo '#' > .machines

# Example for an MPI-parallel lapw0
echo -n 'lapw0:' >> .machines
set i=1
while ($i < $aa[1] )
  echo -n `cat $PBS_NODEFILE |head -$i | tail -1` ' ' >>.machines
  @ i ++
end
echo  `cat $PBS_NODEFILE |head -$i|tail -1` ' ' >>.machines

# Example for k-point parallel lapw1/lapw2
set i=1
while ($i <= $aa[1] )
  echo -n '1:' >>.machines
  head -$i .machines_current |tail -1 >> .machines
  @ i ++
end
echo 'granularity:1' >>.machines
echo 'extrafine:1' >>.machines

# Define your WIEN2k command here

runsp_lapw -p -i 40 -cc 0.0001 -I
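Outside PBS, the .machines construction above can be exercised with a hand-made node file. The following is a minimal bash rendering of the same loops; the hostname eric001 and the 4-slot allocation are illustrative stand-ins for what $PBS_NODEFILE would contain:

```shell
#!/bin/bash
# Stand-in for the node file PBS provides for nodes=1:ppn=4 on Eric.
PBS_NODEFILE=$(mktemp)
printf 'eric001\neric001\neric001\neric001\n' > "$PBS_NODEFILE"
nprocs=$(wc -l < "$PBS_NODEFILE")

echo '#' > .machines

# One lapw0 line listing every allocated slot (MPI-parallel lapw0).
printf 'lapw0:' >> .machines
i=1
while [ "$i" -le "$nprocs" ]; do
  printf '%s ' "$(sed -n "${i}p" "$PBS_NODEFILE")" >> .machines
  i=$((i + 1))
done
printf '\n' >> .machines

# One "1:host" line per slot (k-point parallel lapw1/lapw2).
i=1
while [ "$i" -le "$nprocs" ]; do
  printf '1:%s\n' "$(sed -n "${i}p" "$PBS_NODEFILE")" >> .machines
  i=$((i + 1))
done
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines

cat .machines
```

For this input the resulting .machines holds one lapw0 line naming all four slots, followed by four "1:eric001" k-point lines and the granularity/extrafine settings.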

The script is submitted using qsub:

$ qsub script_name


Portable Batch System: qsub


All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS with the qsub command. A PBS job file is essentially a shell script that also contains directives for PBS.

Usage
$ qsub job_script

Where job_script is the name of the file containing the script.

PBS Directives

PBS directives take the form:

#PBS -X value

Where X is one of many single-letter options and value is the desired setting. All PBS directives must appear before the first active shell statement.
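As an illustration of the placement rule (the job name and settings here are made up), PBS stops scanning for directives at the first active shell statement:

```shell
#!/bin/bash
#PBS -N my_job              # honored: appears before any active statement
#PBS -l walltime=00:10:00   # honored
echo "setup begins"
#PBS -q workq               # IGNORED: follows an active shell statement
```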

Example Job Script
 #!/bin/bash
 #
 # Use "workq" as the job queue, and specify the allocation code.
 #
 #PBS -q workq
 #PBS -A your_allocation_code
 # 
 # Assuming you want to run 16 processes, and each node supports 4 processes,
 # you need to ask for a total of 4 nodes. The number of processes per node
 # varies from machine to machine, so double-check that you have the right
 # values before submitting the job.
 #
 #PBS -l nodes=4:ppn=4
 # 
 # Set the maximum wall-clock time. In this case, 10 minutes.
 #
 #PBS -l walltime=00:10:00
 # 
 # Specify the name of a file which will receive all standard output,
 # and merge standard error with standard output.
 #
 #PBS -o /scratch/myName/parallel/output
 #PBS -j oe
 # 
 # Give the job a name so it can be easily tracked with qstat.
 #
 #PBS -N MyParJob
 #
 # That is it for PBS instructions. The rest of the file is a shell script.
 # 
 # PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
 #
 #   1. Copy the necessary files from your home directory to your scratch directory.
 #   2. Execute in your scratch directory.
 #   3. Copy any necessary files back to your home directory.

 # Let's mark the time things get started.

 date

 # Set some handy environment variables.

 export HOME_DIR=/home/$USER/parallel
 export WORK_DIR=/scratch/myName/parallel
 
 # Set a variable that will be used to tell MPI how many processes will be run.
 # This makes sure MPI gets the same information provided to PBS above.

 export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`

 # Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".

 cp $HOME_DIR/hydro $WORK_DIR
 cd $WORK_DIR
 mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

 # Mark the time processing ends.

 date
 
 # And we're out'a here!

 exit 0
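The NPROCS computation in the script can be checked in isolation. The sketch below builds a stand-in node file in place of $PBS_NODEFILE (the node names are made up, and awk is used in place of gawk in case gawk is not installed):

```shell
#!/bin/bash
# Build a stand-in node file for nodes=4:ppn=4 (16 entries, one per process).
nodefile=$(mktemp)
for n in node01 node02 node03 node04; do
  printf '%s\n%s\n%s\n%s\n' "$n" "$n" "$n" "$n"
done > "$nodefile"

# Same idea as the job script: count the lines, keep only the count field.
NPROCS=$(wc -l "$nodefile" | awk '{print $1}')
echo "$NPROCS"
```

With 4 nodes at 4 processes each, NPROCS comes out to 16, matching the nodes=4:ppn=4 request.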


Last modified: September 10 2020 11:58:50.