CFX

CFX is a computational fluid dynamics (CFD) program for modeling fluid flow and heat transfer in a variety of applications.

Availability & Restrictions

CFX is available on both the Glenn and Oakley clusters.  The following versions are available:

VERSION   GLENN   OAKLEY
14.5      X
15.0              X

Academic License Limitations

Currently, there are a total of 25 ANSYS CFD base tokens and 68 HPC tokens available to academic users. The base tokens are shared between FLUENT and CFX. The HPC tokens are shared among all ANSYS products we have at OSC.

A base license token allows the use of up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need one additional "HPC" token per core beyond the first 4. A job using only a base license token can be submitted to either the Glenn or Oakley cluster. A parallel job using HPC tokens (with the "ansyspar" flag), however, can only be submitted to the Glenn cluster due to a scheduler issue. For instance, a serial CFX job with 1 core needs 1 base license token, while a parallel CFX job with 8 cores needs 1 base license token and 4 HPC tokens.
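The token rule above can be sketched as a small shell helper (a hypothetical illustration, not an OSC-provided tool):

```shell
# Hypothetical helper illustrating the token rule described above:
# 1 base token covers up to 4 cores; each core beyond 4 needs 1 HPC token.
tokens_needed() {
    cores=$1
    hpc=$(( cores > 4 ? cores - 4 : 0 ))
    echo "base=1 hpc=$hpc"
}

tokens_needed 1   # serial job:          base=1 hpc=0
tokens_needed 8   # 8-core parallel job: base=1 hpc=4
```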

Commercial License Limitations

For commercial users, there are a total of 20 base license tokens and 512 HPC tokens. The base license tokens are shared between FLUENT and CFX. The HPC tokens are shared among the available ANSYS products (FLUENT, CFX, ICEMCFD, ANSYS Mechanical, etc.).

Usage

Access

Use of CFX requires validation. Please contact OSC Help for more information.

Set-up

CFX can only be run on the compute nodes of the Oakley and Glenn clusters, so all CFX jobs must be run via the batch scheduling system, either as interactive or unattended jobs. In either case, the CFX module can be loaded only after the batch job has started. For example, to load CFX version 14.5 on Glenn, type:

module load fluent

cfx5

Batch Usage

Sample Usage (interactive execution)

Using the CFX GUI interactively can be done with the following steps:

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to either the Oakley or Glenn system
  3. Request an interactive job. The command below will request a one-core, one-hour job. Modify as per your own needs:
    qsub -I -X -l walltime=1:00:00 -l software=fluent+1
  4. Once the interactive job has started, run the following commands to setup and start the CFX GUI:

    module load fluent
    cfx5

Sample Batch Script (serial execution using 1 base token)

An example batch script for running a one-hour serial CFX job with an input file named "test.def" on Glenn is provided below:

#PBS -N serialjob_cfx
#PBS -l walltime=1:00:00
#PBS -l software=fluent+1
#PBS -l nodes=1:ppn=1
#PBS -j oe
#PBS -S /bin/bash

#Set up CFX environment.
module load fluent

#'cd' directly to your working directory
cd $PBS_O_WORKDIR

#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

#Run CFX in serial with test.def as input file
cfx5solve -batch -def test.def 

#Finally, copy files back to your home directory
cp * $PBS_O_WORKDIR

Sample Batch Script (parallel execution using HPC token)

CFX can be run in parallel, but it is very important that you read the documentation in the CFX Manual on the details of how this works. You can find the CFX manuals on-line by following the "Further Reading" link at the bottom of this page.

In addition to requesting the CFX base license token (-l software=fluent+1), you need to request copies of the ansyspar license, i.e., HPC tokens. However the scheduler cannot handle two "software" flags simultaneously, so the syntax changes. The new option is -W x=GRES:fluent+1%ansyspar+[n], where [n] is equal to the number of cores you requested minus 4.
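As a quick check of that arithmetic, the ansyspar count can be derived from the core request in shell (a sketch assuming total cores = nodes × ppn; the nodes=2:ppn=8 request here is just an example):

```shell
# Derive the ansyspar (HPC token) count for a hypothetical nodes=2:ppn=8 request.
nodes=2
ppn=8
cores=$(( nodes * ppn ))   # 16 cores total
hpc=$(( cores - 4 ))       # cores minus 4 -> 12 HPC tokens
echo "-W x=GRES:fluent+1%ansyspar+${hpc}"
# prints: -W x=GRES:fluent+1%ansyspar+12
```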

An example of the batch script follows:

#PBS -N paralleljob_cfx
#PBS -l walltime=10:00:00
#PBS -l nodes=2:ppn=8
#PBS -W x=GRES:fluent+1%ansyspar+12
#PBS -j oe
#PBS -S /bin/bash

#Set up CFX environment.
module load fluent

#'cd' directly to your working directory
cd $PBS_O_WORKDIR

#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

#Convert PBS_NODEFILE information into format for CFX host list
nodes=`cat $PBS_NODEFILE`
nodes=`echo $nodes | sed -e 's/ /,/g'`

#Run CFX in parallel with test.def as input file
#if multiple nodes
cfx5solve -batch -def test.def  -par-dist $nodes -start-method "Platform MPI Distributed Parallel"
#if one node
#cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Local Parallel"

#Finally, copy files back to your home directory
cp * $PBS_O_WORKDIR
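The node-list conversion used in the script can be checked on its own. Given a file in the same one-hostname-per-line format as $PBS_NODEFILE (the hostnames here are made up), the unquoted echo collapses the newlines to spaces and sed then replaces the spaces with commas:

```shell
# Simulate a PBS_NODEFILE with hypothetical hostnames, then apply the
# same conversion the batch script uses.
printf 'n0001\nn0001\nn0002\nn0002\n' > nodefile.test

nodes=`cat nodefile.test`              # newline-separated hostnames
nodes=`echo $nodes | sed -e 's/ /,/g'` # word-split, then join with commas
echo "$nodes"                          # n0001,n0001,n0002,n0002

rm nodefile.test
```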

Further Reading
