ANSYS CFX (called CFX hereafter) is a computational fluid dynamics (CFD) program for modeling fluid flow and heat transfer in a variety of applications.
CFX is available on the Oakley Cluster. The versions currently available at OSC are:
You can use
module spider fluent to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.
Currently, there are in total 25 ANSYS CFD base license tokens and 68 HPC tokens for academic users. These base tokens are shared by available ANSYS CFD related products (see "Academic Research -> ANSYS Academic Research CFD" in this table for details). These HPC tokens are shared with all ANSYS products we have at OSC. A base license token will allow CFX to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token per core. For instance, a serial CFX job with 1 core will need 1 base license token, while a parallel CFX job with 12 cores will need 1 base license token and 8 HPC tokens.
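The token arithmetic above can be sketched as a short shell snippet (the 12-core count is just the example from the paragraph; the 1-base-token, 4-core rule is as described above):

```shell
# Tokens needed for an n-core CFX job:
# 1 base token covers up to 4 cores; each core beyond 4 needs 1 HPC token.
cores=12
base=1
hpc=$(( cores > 4 ? cores - 4 : 0 ))
echo "A ${cores}-core job needs ${base} base token and ${hpc} HPC tokens."
# → A 12-core job needs 1 base token and 8 HPC tokens.
```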
Contact OSC Help to get access to CFX if you are a commercial user.
module load fluent. To select a particular software version, use
module load fluent/version. For example, use
module load fluent/16.0 to load CFX version 16.0 on Oakley.
When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. Batch mode is desirable for big problems since more resources can be used.
Interactive mode is similar to running CFX on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run CFX interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.
To run the interactive CFX GUI, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. Please follow the steps below to use the CFX GUI interactively:
The example below requests one whole node with 12 cores (
-l nodes=1:ppn=12), for a walltime of one hour (
-l walltime=1:00:00), with one ANSYS CFD license (modify as per your own needs):
qsub -I -X -l nodes=1:ppn=12 -l walltime=1:00:00 -l software=fluent+1
Once the interactive job has started, run the following commands to set up and start the CFX GUI:
module load fluent
cfx5
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.
Below is an example batch script (
job.txt ) for a serial run with an input file (
test.def ) on Oakley:
#PBS -N serialjob_cfx
#PBS -l walltime=1:00:00
#PBS -l software=fluent+1
#PBS -l nodes=1:ppn=1
#PBS -j oe
#PBS -S /bin/bash

# Set up CFX environment.
module load fluent

# 'cd' directly to your working directory
cd $PBS_O_WORKDIR

# Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

# Run CFX in serial with test.def as input file
cfx5solve -batch -def test.def

# Finally, copy files back to your working directory
cp * $PBS_O_WORKDIR
In order to run it via the batch system, submit the
job.txt file with the command:
qsub job.txt
CFX can be run in parallel, but it is very important that you read the documentation in the CFX Manual on the details of how this works.
In addition to requesting the base license token (
-l software=fluent+1 ), you need to request copies of the ansyspar license, i.e., HPC tokens. However, the scheduler cannot handle two "software" flags simultaneously, so the syntax changes. The new option is
-W x=GRES:fluent+1%ansyspar+[n], where [n] is equal to the number of cores you requested minus 4.
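As a quick sanity check, the GRES request for a given core count can be computed like this (a sketch; 24 cores corresponds to the 2-node, 12-cores-per-node case, and cores > 4 is assumed):

```shell
# Build the GRES request for a parallel CFX job:
# 1 base token plus (cores - 4) HPC tokens, per the rule above.
cores=24   # e.g. nodes=2:ppn=12
echo "#PBS -W x=GRES:fluent+1%ansyspar+$(( cores - 4 ))"
# → #PBS -W x=GRES:fluent+1%ansyspar+20
```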
Parallel jobs have to be submitted on Oakley via the batch system. An example of the batch script follows:
#PBS -N paralleljob_cfx
#PBS -l walltime=10:00:00
#PBS -l nodes=2:ppn=12
#PBS -W x=GRES:fluent+1%ansyspar+20
#PBS -j oe
#PBS -S /bin/bash

# Set up CFX environment.
module load fluent

# 'cd' directly to your working directory
cd $PBS_O_WORKDIR

# Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR

# Convert PBS_NODEFILE information into format for CFX host list
nodes=`cat $PBS_NODEFILE`
nodes=`echo $nodes | sed -e 's/ /,/g'`

# Run CFX in parallel with test.def as input file
# If using multiple nodes:
cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Distributed Parallel"
# If using one node:
#cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Local Parallel"

# Finally, copy files back to your working directory
cp * $PBS_O_WORKDIR
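To see what the PBS_NODEFILE conversion in the script produces, here is a standalone illustration with hypothetical node names (the real file is generated by the scheduler, with one line per allocated core):

```shell
# Stand-in for $PBS_NODEFILE: 2 nodes x 2 cores (hypothetical node names)
printf 'n0001\nn0001\nn0002\nn0002\n' > nodefile
nodes=`cat nodefile`                      # variable still holds newlines
nodes=`echo $nodes | sed -e 's/ /,/g'`    # unquoted expansion splits on
                                          # whitespace; sed joins with commas
echo $nodes
# → n0001,n0001,n0002,n0002
rm nodefile
```

This comma-separated host list is the format cfx5solve expects for its -par-dist option.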