OpenFOAM

OpenFOAM is a suite of computational fluid dynamics (CFD) applications. It contains a wide range of solvers, both compressible and incompressible, as well as many utilities and libraries.

Availability and Restrictions

Versions

The following versions of OpenFOAM are available on OSC clusters:

Version  Oakley  Ruby  Owens
2.3.0    X       X
2.4.0    X       X
3.0.0    X       X
4.1                    X

The location of OpenFOAM may depend on the compiler/MPI software stack. In that case, use both of the following commands (adjusting the version number) to learn how to load the appropriate modules:

module spider openfoam
module spider openfoam/3.0.0

Feel free to contact OSC Help if you need other versions for your work.

Access 

OpenFOAM is available to all OSC users without restriction.

Basic Structure for an OpenFOAM Case

The basic directory structure for an OpenFOAM case is:

/home/yourusername/OpenFOAM_case
|-- 0
|   |-- U
|   |-- epsilon
|   |-- k
|   |-- p
|   `-- nut
|-- constant
|   |-- RASProperties
|   |-- polyMesh
|   |   |-- blockMeshDict
|   |   `-- boundary
|   |-- transportProperties
|   `-- turbulenceProperties
`-- system
    |-- controlDict
    |-- fvSchemes
    |-- fvSolution
    `-- snappyHexMeshDict

IMPORTANT: To run in parallel, you need to also create the decomposeParDict file in the system directory. If you do not create this file, the decomposePar command will fail.
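
As a sketch of what this file contains, a minimal decomposeParDict can be created as shown below. The subdomain count (16) and the scotch decomposition method are example values only, not OSC defaults; set numberOfSubdomains to the total number of cores your job requests.

cat > system/decomposeParDict << 'EOF'
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 16;     // must equal the total number of MPI ranks
method             scotch; // other methods include simple and hierarchical
EOF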

Usage

Usage on Oakley

Setup on Oakley

To configure the Oakley cluster for the use of OpenFOAM 1.7.x, use the following command:

. /usr/local/OpenFOAM/OpenFOAM-1.7.x/etc/bashrc

To configure the Oakley cluster for the use of OpenFOAM 2.1.0, use the following command:

. /usr/local/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc

For OpenFOAM 2.3.0, use the following command:

module load openfoam/2.3.0 # permissible with default Intel and MVAPICH2 compilers

For OpenFOAM 3.0.0, use the following commands:

module load openmpi/1.10 # currently only 3.0.0 is installed using OpenMPI libraries
module load openfoam/3.0.0

Batch Usage on Oakley

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

On Oakley, refer to Queues and Reservations for Oakley and Scheduling Policies and Limits for more info. 

Interactive Batch Session

For an interactive batch session on Oakley, one can run the following command:

qsub -I -l nodes=1:ppn=12 -l walltime=1:00:00

which requests 12 cores ( -l nodes=1:ppn=12 ) for 1 hour of walltime ( -l walltime=1:00:00 ). You may adjust these numbers to suit your needs.
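
Once the interactive session starts, you can load the OpenFOAM module and run a case directly at the prompt. As a sketch (the case directory name here is just an example):

module load openfoam
cd ~/OpenFOAM_case   # your case directory containing 0, constant and system
blockMesh            # mesh the geometry
icoFoam              # run the solver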

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Below is an example batch script ( job.txt ) for a serial run; note that it uses the default installation of OpenFOAM 2.3.0 on Oakley:

#PBS -N serial_OpenFOAM 
#PBS -l nodes=1:ppn=1 
#PBS -l walltime=24:00:00 
#PBS -j oe 
#PBS -S /bin/bash 
# Initialize OpenFOAM on Oakley Cluster
# This only works if you are using default modules
module load openfoam
# Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
# Copy files to $TMPDIR and move there to execute the program
cp -r * $TMPDIR
cd $TMPDIR
# Mesh the geometry
blockMesh
# Run the solver
icoFoam
# Finally, copy results back to the directory the job was submitted from
cp -r * $PBS_O_WORKDIR

To run it via the batch system, submit the job.txt file with the following command:

qsub job.txt
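
After submission, you can monitor the job with the standard batch tools; for example:

qstat -u $USER

When the job completes, its standard output and error (joined by #PBS -j oe) appear in a file named after the job, e.g. serial_OpenFOAM.o<jobid>, in the directory the job was submitted from.
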
Non-interactive Batch Job (Parallel Run)

Below is the example batch script ( job.txt ) for a parallel run:

#PBS -N parallel_OpenFOAM
#PBS -l nodes=2:ppn=8
#PBS -l walltime=6:00:00
#PBS -j oe
#PBS -S /bin/bash 
# Initialize OpenFOAM on Oakley Cluster
# This only works if you are using default modules
module load openfoam
#Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
#Mesh the geometry
blockMesh
#Decompose the mesh for parallel run
decomposePar
#Run the solver
mpiexec simpleFoam -parallel 
#Reconstruct the parallel results
reconstructPar
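
Note that decomposePar reads system/decomposeParDict, and the number of subdomains defined there must match the cores requested above (nodes=2:ppn=8, i.e. 16 MPI ranks). A quick sanity check before submitting:

grep numberOfSubdomains system/decomposeParDict
# expected for this script: numberOfSubdomains 16;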

Usage on Ruby

Setup on Ruby

To configure the Ruby cluster for the use of OpenFOAM 2.3.0, use the following command:

module load openfoam/2.3.0 # permissible with default Intel and MVAPICH2 compilers

For OpenFOAM 3.0.0, use the following commands:

module load openmpi/1.10 # currently only 3.0.0 is installed using OpenMPI libraries
module load openfoam/3.0.0

Batch Usage on Ruby

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

On Ruby, refer to Queues and Reservations for Ruby and Scheduling Policies and Limits for more info. 

Interactive Batch Session

For an interactive batch session on Ruby, one can run the following command:

qsub -I -l nodes=1:ppn=20 -l walltime=1:00:00

which requests 20 cores ( -l nodes=1:ppn=20 ) for 1 hour of walltime ( -l walltime=1:00:00 ). You may adjust these numbers to suit your needs.

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script ( job.txt ) for a serial run:

#PBS -N serial_OpenFOAM 
#PBS -l nodes=1:ppn=1 
#PBS -l walltime=24:00:00 
#PBS -j oe 
#PBS -S /bin/bash 
# Initialize OpenFOAM on Ruby Cluster
# This only works if you are using default modules
module load openfoam
# Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
# Copy files to $TMPDIR and move there to execute the program
cp -r * $TMPDIR
cd $TMPDIR
# Mesh the geometry
blockMesh
# Run the solver
icoFoam
# Finally, copy results back to the directory the job was submitted from
cp -r * $PBS_O_WORKDIR

To run it via the batch system, submit the job.txt file with the following command:

qsub job.txt

Non-interactive Batch Job (Parallel Run)

Below is the example batch script ( job.txt ) for a parallel run:

#PBS -N parallel_OpenFOAM
#PBS -l nodes=2:ppn=8
#PBS -l walltime=6:00:00
#PBS -j oe
#PBS -S /bin/bash 
# Initialize OpenFOAM on Ruby Cluster
# This only works if you are using default modules
module load openfoam
# Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
# Mesh the geometry
blockMesh
# Decompose the mesh for parallel run
decomposePar
# Run the solver
mpiexec simpleFoam -parallel 
# Reconstruct the parallel results
reconstructPar

Usage on Owens

Setup on Owens

To configure the Owens cluster for the use of OpenFOAM 4.1, use the following commands:

module load openmpi/1.10-hpcx # currently only 4.1 is installed using OpenMPI libraries
module load openfoam/4.1
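
To verify that the environment is set up after loading the modules, you can check that the OpenFOAM executables are on your path; for example:

which blockMesh simpleFoam
echo $WM_PROJECT_VERSION   # should report 4.1 if the module sets the usual OpenFOAM environment variables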

Batch Usage on Owens

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

On Owens, refer to Queues and Reservations for Owens and Scheduling Policies and Limits for more info. 

Interactive Batch Session

For an interactive batch session on Owens, one can run the following command:

qsub -I -l nodes=1:ppn=28 -l walltime=1:00:00

which requests 28 cores ( -l nodes=1:ppn=28 ) for 1 hour of walltime ( -l walltime=1:00:00 ). You may adjust these numbers to suit your needs.

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script ( job.txt ) for a serial run:

#PBS -N serial_OpenFOAM 
#PBS -l nodes=1:ppn=1
#PBS -l walltime=24:00:00 
#PBS -j oe 
#PBS -S /bin/bash 
# Initialize OpenFOAM on Owens Cluster
module load openmpi/1.10-hpcx
module load openfoam/4.1
# Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
# Copy files to $TMPDIR and move there to execute the program
cp -r * $TMPDIR
cd $TMPDIR
# Mesh the geometry
blockMesh
# Run the solver
icoFoam
# Finally, copy results back to the directory the job was submitted from
cp -r * $PBS_O_WORKDIR

To run it via the batch system, submit the job.txt file with the following command:

qsub job.txt

Non-interactive Batch Job (Parallel Run)

Below is the example batch script ( job.txt ) for a parallel run:

#PBS -N parallel_OpenFOAM
#PBS -l nodes=2:ppn=28
#PBS -l walltime=6:00:00
#PBS -j oe
#PBS -S /bin/bash 
# Initialize OpenFOAM on Owens Cluster
module load openmpi/1.10-hpcx
module load openfoam/4.1
# Move to the case directory, where the 0, constant and system directories reside
cd $PBS_O_WORKDIR
# Mesh the geometry
blockMesh
# Decompose the mesh for parallel run
decomposePar
# Run the solver
mpiexec simpleFoam -parallel 
# Reconstruct the parallel results
reconstructPar
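
After reconstructPar merges the per-processor results, a common next step is converting them for visualization. The foamToVTK utility shipped with OpenFOAM writes VTK files that ParaView can read; run it in the case directory once the job has finished:

foamToVTK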

Further Reading

The OpenFOAM home page
