OpenFOAM

OpenFOAM is a suite of computational fluid dynamics (CFD) applications. It contains many solvers, for both compressible and incompressible flows, as well as numerous utilities and libraries.

Availability and Restrictions

Versions

The following versions of OpenFOAM are available on OSC clusters:

Version   Owens   Pitzer
4.1       X
5.0       X       X
7.0       X*      X*
1906              X
1912              X
2306      X       X

* Current default version

The available OpenFOAM installations may depend on the compiler/MPI software stack. Use one or both of the following commands (adjusting the version number) to learn how to load the appropriate modules:

module spider openfoam
module spider openfoam/2306

Feel free to contact OSC Help if you need other versions for your work.

Access 

OpenFOAM is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

OpenFOAM Foundation, Open source

Basic Structure for an OpenFOAM Case

The basic directory structure for an OpenFOAM case is:

/home/yourusername/OpenFOAM_case
|-- 0
|   |-- U
|   |-- epsilon
|   |-- k
|   |-- p
|   `-- nut
|-- constant
|   |-- RASProperties
|   |-- polyMesh
|   |   |-- blockMeshDict
|   |   `-- boundary
|   |-- transportProperties
|   `-- turbulenceProperties
`-- system
    |-- controlDict
    |-- fvSchemes
    |-- fvSolution
    `-- snappyHexMeshDict
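
Each file under system is a plain-text OpenFOAM dictionary. As an illustration, a minimal controlDict for the icoFoam solver used in the examples below might look like the following sketch (all numeric values are placeholders to adjust for your case):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      controlDict;
}

application     icoFoam;   // solver to run
startFrom       startTime;
startTime       0;
stopAt          endTime;
endTime         0.5;
deltaT          0.005;     // time step size
writeControl    timeStep;
writeInterval   20;        // write results every 20 time steps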

IMPORTANT: To run in parallel, you also need to create a decomposeParDict file in the system directory. If you do not create this file, the decomposePar command will fail.
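
A minimal decomposeParDict might look like the following sketch; the numberOfSubdomains value is illustrative and must equal the total number of MPI ranks the job requests:

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  56;      // must match total MPI ranks (e.g., 2 nodes x 28 tasks)

method              scotch;  // automatic decomposition; no coefficients needed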

Usage

Usage on Owens

Setup on Owens

To configure the Owens cluster for the use of OpenFOAM 4.1, use the following commands:
module load openmpi/1.10-hpcx # currently only 4.1 is installed using OpenMPI libraries
module load openfoam/4.1

Batch Usage on Owens

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

On Owens, refer to Queues and Reservations for Owens and Scheduling Policies and Limits for more info. 

Interactive Batch Session

For an interactive batch session on Owens, one can run the following command:

sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00

which gives you 1 node (-N 1), 28 cores (-n 28), and 1 hour of walltime (-t 1:00:00). You may adjust the numbers per your need. 
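
Inside the interactive session, load the modules from the Setup section above and run a case directly. As a quick test, the standard lid-driven cavity tutorial can be copied and run (the tutorial path below is typical of OpenFOAM 4.x/5.x and may differ in other versions):

module load openmpi/1.10-hpcx
module load openfoam/4.1
# Copy the lid-driven cavity tutorial into the current directory
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity .
cd cavity
# Mesh the geometry, then run the solver
blockMesh
icoFoam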

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

#!/bin/bash
#SBATCH --job-name serial_OpenFOAM 
#SBATCH --nodes=1 --ntasks-per-node=1 
#SBATCH --time 24:00:00
#SBATCH --account <project-account>

# Initialize OpenFOAM on Owens Cluster
module load openmpi/1.10-hpcx
module load openfoam/4.1

# Copy files to $TMPDIR and move there to execute the program
cp -r * $TMPDIR
cd $TMPDIR
# Mesh the geometry
blockMesh
# Run the solver
icoFoam
# Finally, copy the results back to the directory the job was submitted from
cp -r * $SLURM_SUBMIT_DIR

To run it via the batch system, submit the job.txt file with the following command:

sbatch job.txt
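
Once submitted, the job can be monitored with standard Slurm commands, for example:

squeue -u $USER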

Non-interactive Batch Job (Parallel Run)

Below is the example batch script (job.txt) for a parallel run:

#!/bin/bash
#SBATCH --job-name parallel_OpenFOAM 
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --time=6:00:00
#SBATCH --account <project-account>

# Initialize OpenFOAM on Owens Cluster
# This only works if you are using default modules
module load openmpi/1.10-hpcx 
module load openfoam/4.1

# Mesh the geometry
blockMesh
# Decompose the mesh for parallel run
decomposePar
# Run the solver
mpiexec simpleFoam -parallel 
# Reconstruct the parallel results
reconstructPar
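
Note that numberOfSubdomains in system/decomposeParDict must equal the total number of MPI ranks requested by the job (2 nodes x 28 tasks = 56 here). One way to keep the mpiexec rank count consistent with the job request is to use Slurm's SLURM_NTASKS environment variable, as in this sketch:

# SLURM_NTASKS is set by Slurm to nodes x ntasks-per-node (56 in this example)
mpiexec -n $SLURM_NTASKS simpleFoam -parallel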

Usage on Pitzer

Setup on Pitzer

To configure the Pitzer cluster for the use of OpenFOAM 5.0, use the following commands:
module load openmpi/3.1.0-hpcx # currently only 5.0 is installed using OpenMPI libraries
module load openfoam/5.0

Batch Usage on Pitzer

Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

On Pitzer, refer to Queues and Reservations for Pitzer and Scheduling Policies and Limits for more info. 

Interactive Batch Session

For an interactive batch session on Pitzer, one can run the following command:

sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00

which gives you 1 node (-N 1), 40 cores (-n 40) with 1 hour (-t 1:00:00). You may adjust the numbers per your need. 

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

#!/bin/bash
#SBATCH --job-name serial_OpenFOAM 
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH --time 24:00:00 
#SBATCH --account <project-account>

# Initialize OpenFOAM on Pitzer Cluster
module load openmpi/3.1.0-hpcx
module load openfoam/5.0

# Copy files to $TMPDIR and move there to execute the program
cp -r * $TMPDIR
cd $TMPDIR
# Mesh the geometry
blockMesh
# Run the solver
icoFoam
# Finally, copy the results back to the directory the job was submitted from
cp -r * $SLURM_SUBMIT_DIR

To run it via the batch system, submit the job.txt file with the following command:

sbatch job.txt

Non-interactive Batch Job (Parallel Run)

Below is the example batch script (job.txt) for a parallel run:

#!/bin/bash
#SBATCH --job-name parallel_OpenFOAM
#SBATCH --nodes=2 --ntasks-per-node=40
#SBATCH --time=6:00:00
#SBATCH --account <project-account>

# Initialize OpenFOAM on Pitzer Cluster
# This only works if you are using default modules
module load openmpi/3.1.0-hpcx 
module load openfoam/5.0

# Mesh the geometry
blockMesh
# Decompose the mesh for parallel run
decomposePar
# Run the solver
mpiexec simpleFoam -parallel 
# Reconstruct the parallel results
reconstructPar
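
After reconstructPar has merged the per-processor results, they can be converted for visualization; for example, OpenFOAM's foamToVTK utility writes the case in VTK format readable by ParaView:

# Run in the case directory after reconstructPar completes
foamToVTK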

Further Reading

OpenFOAM Foundation: https://openfoam.org
OpenCFD (ESI) releases: https://www.openfoam.com