ANSYS

ANSYS offers a comprehensive software suite that spans the entire range of physics, providing access to virtually any field of engineering simulation that a design process requires. Support is provided by ANSYS, Inc.

Availability and Restrictions

OSC has an academic license for the structural-fluid dynamics academic products, which offer structural mechanics, explicit dynamics, fluid dynamics, and thermal simulation capabilities. These bundles also include ANSYS Workbench, relevant CAD import tools, solid modeling and meshing, and High Performance Computing (HPC) capability. See "Academic Research -> ANSYS Academic Research Mechanical and CFD" in this table for all available products at OSC.

Access for Academic Users

OSC has an "Academic Research " license for ANSYS. This allows for academic use of the software by Ohio faculty and students, with some restrictions. To view current ANSYS node restrictions, please see ANSYS's Terms of Use.

Use of ANSYS products at OSC for academic purposes requires validation. Please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help to get access to ANSYS if you are a commercial user.

Usage

For more information on how to use each ANSYS product on OSC systems, refer to its documentation page listed at the end of this page.

Note

Due to the way our FLUENT and ANSYS modules are configured, loading more than one of either module at the same time will cause a cryptic error. The most common cause is multiple jobs from the same user starting at the same time and all loading the module at once. For this error to manifest, the modules have to be loaded at precisely the same time; this is rare, but probable over the long term.

If you encounter this error, you are not at fault. Please resubmit the failed job(s).

If you frequently submit large numbers of FLUENT or ANSYS jobs, we recommend staggering your job submission times to lower the chance of two jobs starting, and hence loading the module, at the same time. Another solution is to establish dependencies between jobs, so that each job starts only after another job has started. To do this, add the PBS directive:

#PBS -W depend=after:jobid

to each job that you want to start only after another job has started, replacing jobid with the job ID of the job to wait for. If you have additional questions, please contact OSC Help.
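
For example, a minimal sketch of chaining two jobs from the command line (job1.txt and job2.txt are placeholder script names):

# Submit the first job and capture the job ID that qsub prints
JOBID=$(qsub job1.txt)
# Submit the second job so that it starts only after the first job has started
qsub -W depend=after:$JOBID job2.txt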

Further Reading

See Also


ANSYS Mechanical

ANSYS Mechanical is a finite element analysis (FEA) tool that enables you to analyze complex product architectures and solve difficult mechanical problems. You can use ANSYS Mechanical to simulate real world behavior of components and sub-systems, and customize it to test design variations quickly and accurately.

Availability and Restrictions

ANSYS Mechanical is available on the Oakley Cluster. The versions currently available at OSC are:

Version Oakley Notes
14.0 X  
14.5.7 X Default version on Oakley prior to 09/15/2015
16.0 X*  
*: Current default version

You can use module avail ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help to get access to ANSYS if you are a commercial user.

Usage

Usage on Oakley

Set-up on Oakley

To load the default version of the ANSYS module, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/14.5.7 to load ANSYS version 14.5.7 on Oakley.

Using ANSYS Mechanical

Following a successful loading of the ANSYS module, you can access the ANSYS Mechanical commands and utility programs located in your execution path:

ansys <switch options> <file>

The ANSYS Mechanical command takes a number of Unix-style switches and parameters.

The -j Switch

The command accepts a -j switch. It specifies the "job id," which determines the naming of output files. The default is the name of the input file.

The -d Switch

The command accepts a -d switch. It specifies the device type. The value can be X11, x11, X11C, x11c, or 3D.

The -m Switch

The command accepts a -m switch. It specifies the amount of working storage obtained from the system. The units are megawords.

The memory requirement for the entire execution will be approximately 5300000 words more than the -m specification. This is calculated for you if you use ansnqs to construct an NQS request.

The -b [nolist] Switch

The command accepts a -b switch. It specifies that no user input is expected (batch execution).

The -s [noread] Switch

The command accepts a -s switch. By default, the start-up file is read during an interactive session and not read during batch execution. These defaults may be changed with the -s command line argument. The noread option of the -s argument specifies that the start-up file is not to be read, even during an interactive session. Conversely, the -s argument with the -b batch argument forces the reading of the start-up file during batch execution.

The -g [off] Switch

The command accepts a -g switch. It specifies that the ANSYS graphical user interface should be started automatically.
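
As a brief sketch combining the switches above (frame.in and frame.out are hypothetical file names), a batch-style run that names its output files after the job id "frame" and expects no user input might look like this:

ansys -j frame -b < frame.in > frame.out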

ANSYS Mechanical parameters

ANSYS Mechanical parameters may be assigned values on the command line. The parameter must be at least two characters long and must be a legal parameter name. The ANSYS Mechanical parameter that is to be assigned a value should be given on the command line with a preceding dash (-), a space immediately after, and the value immediately after the space:

module load ansys
ansys -pval1 -10.2 -EEE .1e6    # sets pval1 to -10.2 and EEE to 100000

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your ANSYS Mechanical analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running ANSYS Mechanical on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster by turning on X11 forwarding. The intention is that users can run ANSYS Mechanical interactively to build their model and prepare their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run ANSYS Mechanical interactively, a batch job needs to be submitted from the login node to request the necessary compute resources, with X11 forwarding enabled. For example, the following command requests one whole node with 12 cores (-l nodes=1:ppn=12), for a walltime of 1 hour (-l walltime=1:00:00), with one ANSYS license:

qsub -I -X -l nodes=1:ppn=12 -l walltime=1:00:00 -l software=ansys+1

You may adjust the numbers per your needs. This job will queue until resources become available. Once the job has started, you are automatically logged in to the compute node, and you can launch ANSYS Mechanical and start the graphical interface with the following commands:

module load ansys
ansys -g

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. For a given model, prepare the input file with ANSYS Mechanical commands (named ansys.in, for example) for the batch run. Below is an example batch script (job.txt) for a serial run:

#PBS -N ansys_test
#PBS -l walltime=30:00:00
#PBS -l nodes=1:ppn=1
#PBS -l software=ansys+1
#PBS -j oe
# Run from the fast local scratch space
cd $TMPDIR
# Copy the input file from the directory the job was submitted from
cp $PBS_O_WORKDIR/ansys.in .
module load ansys
# Run ANSYS Mechanical with ansys.in as the input file
ansys < ansys.in
# Copy the results back to the directory the job was submitted from
cp <output files> $PBS_O_WORKDIR

In order to run it via the batch system, submit the  job.txt  file with the command:  qsub job.txt .
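
Once submitted, you can check on the job with the standard batch commands, for example:

qstat -u $USER     # check the status of your queued and running jobs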

Non-interactive Batch Job (Parallel Run)

To take advantage of the powerful compute resources at OSC, you may choose to run distributed ANSYS Mechanical for large problems. Multiple nodes and cores can be requested to accelerate the solution time. Note that you'll need to change your batch script slightly for distributed runs.

Starting from September 15, 2015, jobs using HPC tokens (with the "ansyspar" flag) should be submitted to the Oakley cluster due to a scheduler issue.

For distributed ANSYS Mechanical jobs using one node (nodes=1), the number of processors needs to be specified in the command line with options '-dis -np':

#PBS -N ansys_test 
#PBS -l walltime=3:00:00 
#PBS -l nodes=1:ppn=12
#PBS -W x=GRES:ansys+1%ansyspar+8
...
ansys -dis -np 12 < ansys.in  
...

Notice that in the script above, the ANSYS parallel license (ansyspar) is requested as well as the ANSYS base license, in the format

#PBS -W x=GRES:ansys+1%ansyspar+n

where n = m - 4, with m being the total number of cores requested for the job. This line is necessary when the total number of cores requested is greater than 4 (m > 4), which also applies to the parallel example below.

For distributed jobs requesting multiple nodes, you need to specify the number of processors for each node in the command line. This information can be obtained from $PBS_NODEFILE. The following shows changes in the batch script if 2 nodes on Oakley are requested for a parallel ANSYS Mechanical job:

#PBS -N ansys_test 
#PBS -l walltime=3:00:00 
#PBS -l nodes=2:ppn=12
#PBS -W x=GRES:ansys+1%ansyspar+20
...
export MPI_WORKDIR=$PWD
machines=`uniq -c ${PBS_NODEFILE} | awk '{print $2 ":" $1}' | paste -s -d ':'`
# $machines has the form host1:cores1:host2:cores2, as expected by the -machines option
ansys -dis -machines $machines < ansys.in
...
pbsdcp -g '<output files>' $PBS_O_WORKDIR

The 'pbsdcp -g' command in the last line of the script above gathers all result files generated on the different compute nodes and copies them back to the work directory.
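
As an illustration only, the '<output files>' placeholder could be replaced with the result-file patterns your analysis actually produces; the patterns below are hypothetical examples, not a complete list:

pbsdcp -g '*.rst *.db *.out' $PBS_O_WORKDIR    # hypothetical result, database, and output file patterns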

Further Reading

See Also


CFX

ANSYS CFX (called CFX hereafter) is a computational fluid dynamics (CFD) program for modeling fluid flow and heat transfer in a variety of applications.

Availability and Restrictions

CFX is available on the Oakley Cluster. The versions currently available at OSC are:

Version Oakley Notes
14.5.7 X  
15.0.7 X*  
16.0 X  
*: Current default version

You can use module avail fluent  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Currently, there are in total 25 ANSYS CFD base license tokens and 68 HPC tokens for academic users. The base tokens are shared by the available ANSYS CFD related products (see "Academic Research -> ANSYS Academic Research CFD" in this table for details), and the HPC tokens are shared among all ANSYS products we have at OSC. A base license token allows CFX to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional HPC token per core. For instance, a serial CFX job with 1 core needs 1 base license token, while a parallel CFX job with 12 cores needs 1 base license token and 8 HPC tokens.
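
For reference, the 12-core example above would be requested in a batch script as 1 base token plus 12 - 4 = 8 HPC tokens, using the same request format as the parallel batch scripts later on this page:

#PBS -l nodes=1:ppn=12
#PBS -W x=GRES:fluent+1%ansyspar+8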

Access for Commercial Users

Contact OSC Help to get access to CFX if you are a commercial user.

Usage

Usage on Oakley

Set-up on Oakley

To load the default version, use  module load fluent . To select a particular software version, use   module load fluent/version . For example, use  module load fluent/16.0  to load CFX version 16.0 on Oakley. 

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running CFX on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster by turning on X11 forwarding. The intention is that users can run CFX interactively to build their model and prepare their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run the CFX GUI interactively, a batch job needs to be submitted from the login node to request the necessary compute resources, with X11 forwarding enabled. Please follow the steps below to use the CFX GUI interactively:

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to Oakley system
  3. Request an interactive job. The command below will request one whole node with 12 cores (  -l nodes=1:ppn=12 ), for a walltime of one hour ( -l walltime=1:00:00 ), with one ANSYS CFD license (modify as per your own needs):
    qsub -I -X -l nodes=1:ppn=12 -l walltime=1:00:00 -l software=fluent+1
    
  4. Once the interactive job has started, run the following commands to setup and start the CFX GUI:

    module load fluent
    cfx5 
    
Non-interactive Batch Job (Serial Run Using 1 Base Token)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is an example batch script (job.txt) for a serial run with an input file (test.def) on Oakley:

#PBS -N serialjob_cfx
#PBS -l walltime=1:00:00
#PBS -l software=fluent+1
#PBS -l nodes=1:ppn=1
#PBS -j oe
#PBS -S /bin/bash
#Set up CFX environment.
module load fluent
#'cd' directly to your working directory
cd $PBS_O_WORKDIR
#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR
#Run CFX in serial with test.def as input file
cfx5solve -batch -def test.def 
#Finally, copy output files back to the directory the job was submitted from
cp * $PBS_O_WORKDIR

In order to run it via the batch system, submit the job.txt  file with the command: qsub job.txt  

Non-interactive Batch Job (Parallel Execution using HPC token)

CFX can be run in parallel, but it is very important that you read the documentation in the CFX Manual on the details of how this works.

In addition to requesting the base license token ( -l software=fluent+1 ), you need to request copies of the ansyspar license, i.e., HPC tokens. However the scheduler cannot handle two "software" flags simultaneously, so the syntax changes. The new option is  -W x=GRES:fluent+1%ansyspar+[n] , where [n] is equal to the number of cores you requested minus 4.

Parallel jobs have to be submitted on Oakley via the batch system. An example of the batch script follows:

#PBS -N paralleljob_cfx
#PBS -l walltime=10:00:00
#PBS -l nodes=2:ppn=12
#PBS -W x=GRES:fluent+1%ansyspar+20
#PBS -j oe
#PBS -S /bin/bash
#Set up CFX environment.
module load fluent
#'cd' directly to your working directory
cd $PBS_O_WORKDIR
#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR
#Convert PBS_NODEFILE information into format for CFX host list
nodes=`cat $PBS_NODEFILE`
nodes=`echo $nodes | sed -e 's/ /,/g'`
#Run CFX in parallel with test.def as input file
#if multiple nodes
cfx5solve -batch -def test.def  -par-dist $nodes -start-method "Platform MPI Distributed Parallel"
#if one node
#cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Local Parallel"
#Finally, copy output files back to the directory the job was submitted from
cp * $PBS_O_WORKDIR

Further Reading


FLUENT

ANSYS FLUENT (called FLUENT hereafter) is a state-of-the-art computer program for modeling fluid flow and heat transfer in complex geometries.

Availability and Restrictions

FLUENT is available on the Oakley Cluster. The versions currently available at OSC are:

Version Oakley Notes
14 X  
14.5.7 X  
15.0.7 X*  
16.0 X  
*: Current default version

You can use module avail fluent  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Currently, there are in total 25 ANSYS CFD base license tokens and 68 HPC tokens for academic users. The base tokens are shared by the available ANSYS CFD related products (see "Academic Research -> ANSYS Academic Research CFD" in this table for details), and the HPC tokens are shared among all ANSYS products we have at OSC. A base license token allows FLUENT to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional HPC token per core. For instance, a serial FLUENT job with 1 core needs 1 base license token, while a parallel FLUENT job with 12 cores needs 1 base license token and 8 HPC tokens.

Access for Commercial Users

Contact OSC Help to get access to FLUENT if you are a commercial user.

Usage

Usage on Oakley

Set-up on Oakley

To load the default version of FLUENT module, use  module load fluent . To select a particular software version, use   module load fluent/version . For example, use  module load fluent/16.0  to load FLUENT version 16.0 on Oakley. 

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your FLUENT analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running FLUENT on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster by turning on X11 forwarding. The intention is that users can run FLUENT interactively to build their model and prepare their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run the FLUENT GUI interactively, a batch job needs to be submitted from the login node to request the necessary compute resources, with X11 forwarding enabled. Please follow the steps below to use the FLUENT GUI interactively:

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to Oakley system
  3. Request an interactive job. The command below will request one whole node with 12 cores (  -l nodes=1:ppn=12 ), for a walltime of one hour ( -l walltime=1:00:00 ), with one FLUENT license (modify as per your own needs):
    qsub -I -X -l nodes=1:ppn=12 -l walltime=1:00:00 -l software=fluent+1
    
  4. Once the interactive job has started, run the following commands to setup and start the FLUENT GUI:

    module load fluent
    fluent 
    
Non-interactive Batch Job (Serial Run Using 1 Base Token)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is an example batch script (job.txt) for a serial run with an input file (run.input) on Oakley:

#PBS -N serial_fluent
#PBS -l walltime=5:00:00 
#PBS -l nodes=1:ppn=1
#PBS -l software=fluent+1
#PBS -j oe
#
# The following lines set up the FLUENT environment
#
module load fluent
#
# Move to the directory where the job was submitted from
# You could also 'cd' directly to your working directory
cd $PBS_O_WORKDIR
#
# Copy files to $TMPDIR and move there to execute the program
#
cp test_input_file.cas test_input_file.dat run.input $TMPDIR
cd $TMPDIR
#
# Run fluent
fluent 3d -g < run.input  
#
# Where the file 'run.input' contains the commands you would normally
# type in at the Fluent command prompt.
# Finally, copy output files back to the directory the job was submitted from
cp * $PBS_O_WORKDIR

As an example, your run.input file might contain:

file/read-case-data test_input_file.cas 
solve/iterate 100
file/write-case-data test_result.cas
file/confirm-overwrite yes    
exit  
yes  

In order to run it via the batch system, submit the job.txt  file with the command: qsub job.txt  

Non-interactive Batch Job (Parallel Execution using HPC token)

FLUENT can be run in parallel, but it is very important that you read the documentation in the FLUENT Manual on the details of how this works.

In addition to requesting the FLUENT base license token ( -l software=fluent+1 ), you need to request copies of the ansyspar license, i.e., HPC tokens. However the scheduler cannot handle two "software" flags simultaneously, so the syntax changes. The new option is  -W x=GRES:fluent+1%ansyspar+[n] , where [n] is equal to the number of cores you requested minus 4.

Parallel jobs have to be submitted on Oakley via the batch system. An example of the batch script follows:

#PBS -N parallel_fluent   
#PBS -l walltime=1:00:00   
#PBS -l nodes=2:ppn=12
#PBS -j oe
#PBS -W x=GRES:fluent+1%ansyspar+20
#PBS -S /bin/bash
set -x   # echo each command as it is executed
hostname   
#   
# The following lines set up the FLUENT environment   
#   
module load fluent
#   
# Move to the directory where the job was submitted from and   
# create the config file for socket communication library   
#   
cd $PBS_O_WORKDIR   
#   
# Create list of nodes to launch job on   
rm -f pnodes   
cat  $PBS_NODEFILE | sort > pnodes   
export ncpus=`cat pnodes | wc -l`   
#   
#   Run fluent   
fluent 3d -t$ncpus -pinfiniband.ofed -cnf=pnodes -g < run.input 

Further Reading

See Also


Workbench Platform

ANSYS Workbench platform is the backbone for delivering a comprehensive and integrated simulation system to users. See ANSYS Workbench platform for more information. 

Availability and Restrictions

ANSYS Workbench is available on the Oakley Cluster. The versions currently available at OSC are:

Version       Oakley  Notes
14.5.7   SF   X
         CFD  X
15.0.7   SF   X
         CFD  X*
16.0     SF   X*
         CFD  X
*: Current default version

Note:

  • SF: Structural-Fluid dynamics related applications. See "Academic Research -> ANSYS Academic Research Mechanical and CFD" in this table for all available products
  • CFD: CFD related applications. See "Academic Research -> ANSYS Academic Research CFD" in this table for all available products

You can use module avail ansys  to view available modules for a given machine if you want to use structural-fluid dynamics related applications or module avail fluent  to view available modules for a given machine if you want to use CFD related applications. Feel free to contact OSC Help if you need other versions for your work.
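
In short:

module avail ansys     # modules for structural-fluid dynamics related applications
module avail fluent    # modules for CFD related applications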

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help to get access to ANSYS if you are a commercial user.

Usage

Usage on Oakley

Set-up for Structural-Fluid dynamics related applications

To load the default version, use module load ansys. To select a particular software version, use module load ansys/version. For example, use module load ansys/15.0.7 to load version 15.0.7 on Oakley. After the module is loaded, use the following command to open the Workbench GUI:

runwb2
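
Since runwb2 opens a graphical interface, it is normally launched from within an interactive batch job with X11 forwarding, following the same pattern as the ANSYS Mechanical interactive example earlier on this page (a sketch; adjust the resource and license requests to your needs):

qsub -I -X -l nodes=1:ppn=12 -l walltime=1:00:00 -l software=ansys+1
# once the interactive job has started:
module load ansys
runwb2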

Set-up for CFD related applications

To load the default version, use module load fluent. To select a particular software version, use module load fluent/version. For example, use module load fluent/15.0.7 to load version 15.0.7 on Oakley. After the module is loaded, use the following command to open the Workbench GUI:

runwb2

Further Reading

See Also
