
fluent

About the Software

ANSYS Fluent is a general-purpose computational fluid dynamics (CFD) package used to simulate fluid flow, turbulence, heat transfer, and related phenomena.

Usage

Set up your environment to run Fluent

To run ANSYS Fluent, you need to set up your environment properly, which entails two steps:

  1. Add the corresponding softenv key to your .soft file and run resoft; you can use the "softenv -k ansys" command to find out what the keys are (see the sketch after this list).
  2. Set the license preference: this can be done by running the anslic_admin command and then selecting "Set license preferences for User xxx" -> "v14.0" -> "Use Academic Licenses" in the pop-up window.
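
A minimal sketch of step 1, assuming a hypothetical key name "+ansys-14.0" (use softenv -k ansys to find the actual key on your cluster):

 # Append the SoftEnv key (the key name here is an assumption) to .soft
 $ echo "+ansys-14.0" >> ~/.soft
 # Re-read .soft so the change takes effect in the current session
 $ resoft
 # Verify that fluent is now on the PATH
 $ which fluent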

Sample script

 #!/bin/bash
 #PBS -A your_allocation
 #PBS -q checkpt
 #PBS -l nodes=1:ppn=16
 #PBS -l walltime=12:00:00
 #PBS -V
 #PBS -j oe
 #PBS -N fluent_test

 # Run from the directory the job was submitted from, where the case file lives.
 cd $PBS_O_WORKDIR

 # Start Fluent in batch mode (-g, no GUI) with 16 processes (-t16), feeding
 # TUI commands through a here-document. The terminating delimiter must begin
 # at the start of the line or the shell will never find it.
 fluent -v3ddp -g -t16 << EOFluentInput > output.dat
 file/read-case Innerwall.cas
 parallel/partition/auto/use-case-file-method yes
 parallel/partition/print
 solve/initialize/initialize-flow
 solve/iterate 50
 file/write-data Innerwall_2.dat
 file/auto-save/data-frequency 10
 file/auto-save/overwrite-existing-files
 file/confirm-overwrite y
 exit
 yes
 EOFluentInput
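
If you would rather keep the TUI commands out of the script, the same run can be driven from a journal file via Fluent's -i option (a sketch; "fluent.jou" is an assumed file name holding the commands above):

 # Read TUI commands from a journal file instead of a here-document
 fluent -v3ddp -g -t16 -i fluent.jou > output.dat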

The script is then submitted using qsub:

$ qsub job_script

where job_script is the name you gave the script file.
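
After submission, you can track the job with the standard PBS status command:

 $ qstat -u $USER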

For a list of available command line options, use

$ fluent -help


Portable Batch System: qsub

All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is essentially a shell script that also contains directives for PBS.

Usage
$ qsub job_script

where job_script is the name of the file containing the script.

PBS Directives

PBS directives take the form:

#PBS -X value

where X is one of many single-letter options and value is the desired setting. All PBS directives must appear before any active shell statement, as the sketch below illustrates.
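
For example (a minimal sketch): PBS stops scanning for directives at the first executable statement, so a directive placed after that point is treated as an ordinary comment and ignored:

 #!/bin/bash
 #PBS -N good_name             # parsed: appears before any shell statement
 echo "starting"               # the first active shell statement
 #PBS -l walltime=01:00:00     # ignored: appears after it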

Example Job Script
 #!/bin/bash
 #
 # Use "workq" as the job queue, and specify the allocation code.
 #
 #PBS -q workq
 #PBS -A your_allocation_code
 # 
 # Assuming you want to run 16 processes, and each node supports 4 processes, 
 # you need to ask for a total of 4 nodes. The number of processes per node 
 # will vary from machine to machine, so double-check that you have the right 
 # values before submitting the job.
 #
 #PBS -l nodes=4:ppn=4
 # 
 # Set the maximum wall-clock time. In this case, 10 minutes.
 #
 #PBS -l walltime=00:10:00
 # 
 # Specify the name of a file which will receive all standard output,
 # and merge standard error with standard output.
 #
 #PBS -o /scratch/myName/parallel/output
 #PBS -j oe
 # 
 # Give the job a name so it can be easily tracked with qstat.
 #
 #PBS -N MyParJob
 #
 # That is it for PBS instructions. The rest of the file is a shell script.
 # 
 # PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
 #
 #   1. Copy the necessary files from your home directory to your scratch directory.
 #   2. Execute in your scratch directory.
 #   3. Copy any necessary files back to your home directory.

 # Let's mark the time things get started.

 date

 # Set some handy environment variables.

 export HOME_DIR=/home/$USER/parallel
 export WORK_DIR=/scratch/myName/parallel
 
 # Set a variable that will be used to tell MPI how many processes will be run.
 # This makes sure MPI gets the same information provided to PBS above.

 export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`

 # Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".

 cp $HOME_DIR/hydro $WORK_DIR
 cd $WORK_DIR
 mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

 # Mark the time processing ends.

 date
 
 # And we're out'a here!

 exit 0

Last modified: September 10 2020 11:58:50.