LS-DYNA

LS-DYNA is a general-purpose finite element code for simulating complex structural problems, specializing in nonlinear, transient dynamic problems solved with explicit time integration. LS-DYNA is developed by Livermore Software Technology Corporation (LSTC).

Availability and Restrictions

LS-DYNA is available on the Ruby, Oakley, and Glenn clusters in both serial (smp solver, for single-node jobs) and parallel (mpp solver, for multi-node jobs) versions. The versions currently available at OSC are:

Version     Solver  Oakley  Ruby   Notes
971-R4.2.1  smp
            mpp
971-R5      smp
            mpp
971-R5.1.1  smp     X              Default version on Oakley prior to 09/15/2015
            mpp     X              Default version on Oakley prior to 09/15/2015
971-R7.0.0  smp     X       X*
            mpp     X       X*
971-R7.1.1  smp     X*
            mpp     X*
971-R9.0.1  smp     X
            mpp     X

*: Current default version

Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

OSC does not provide an LS-DYNA license directly; however, users with their own academic departmental license can use it on the OSC clusters. Please contact OSC Help for further instructions.

Access for Commercial Users

Contact OSC Help to get access to LS-DYNA if you are a commercial user.

Usage

Usage on Oakley

Set-up on Oakley

To view available modules installed on Oakley, use  module avail ls-dyna  for smp solvers, and use  module spider mpp  for mpp solvers. In the module name, '_s' indicates single precision and '_d' indicates double precision. For example, mpp-dyna/971_d_R7.0.0 is the mpp solver with double precision on Oakley. Use  module load name  to load LS-DYNA with a particular software version. For example, use  module load mpp-dyna/971_d_R7.0.0   to load LS-DYNA mpp solver version 7.0.0 with double precision on Oakley.
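The module commands described above can be combined into a short session; this is a sketch using the double-precision mpp solver version listed in the table above (these commands only work on the OSC clusters).

```shell
# List available smp solvers on Oakley
module avail ls-dyna
# List available mpp solvers
module spider mpp
# Load the mpp solver, version R7.0.0, double precision
module load mpp-dyna/971_d_R7.0.0
```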

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux system referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for big problems since more resources can be used.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=12 -l walltime=00:20:00 
which requests one whole node with 12 cores ( -l nodes=1:ppn=12 ) for a walltime of 20 minutes ( -l walltime=00:20:00 ). You may adjust these values as needed.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Please follow the steps below to use LS-DYNA via the batch system:

1) copy your input files ( explorer.k  in the example below) to your work directory at OSC

2) create a batch script, similar to the following file, saved as job.txt . It uses the smp solver for a serial job (nodes=1) on Oakley:

#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=1:ppn=12
#PBS -j oe
# The following lines set up the LSDYNA environment
module load ls-dyna/971_d_7.0.0
#
# Move to the directory where the input files are located
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA (number of cpus > 1)
#
lsdyna I=explorer.k NCPU=12

 3) submit the script to the batch queue with the command:  qsub job.txt

When the job is finished, all the result files will be found in the directory where you submitted your job ($PBS_O_WORKDIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster for the system to access and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the submitted job and will be cleared when the job ends, so you need to copy your results back to your work directory at the end of your script.
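For the $TMPDIR approach described above, a serial (smp) script might look like the following sketch; the module name, input file, and job size are taken from the example script above, and since the job runs on a single node a plain cp suffices to copy results back.

```shell
#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=1:ppn=12
#PBS -j oe
# Set up the LS-DYNA environment
module load ls-dyna/971_d_7.0.0
# Stage the input in the job's fast temporary directory
cd $TMPDIR
cp $PBS_O_WORKDIR/explorer.k .
# Run the smp solver
lsdyna I=explorer.k NCPU=12
# Copy results back before the job ends ($TMPDIR is cleared afterwards)
cp -p * $PBS_O_WORKDIR
```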

Non-interactive Batch Job (Parallel Run)
Please follow the steps below to use LS-DYNA via the batch system:

1) copy your input files ( explorer.k  in the example below) to your work directory at OSC

2) create a batch script, similar to the following file, saved as job.txt. It uses the mpp solver for a parallel job (nodes>1) on Oakley:

#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=2:ppn=12
#PBS -j oe
# The following lines set up the LSDYNA environment
module swap mvapich2/1.7 intelmpi
module load mpp-dyna/971_d_R7.0.0
#
# Move to the directory where the input files are located
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA (number of cpus > 1)
#
mpiexec mpp971 I=explorer.k NCPU=24

 3) submit the script to the batch queue with the command:  qsub job.txt

When the job is finished, all the result files will be found in the directory where you submitted your job ($PBS_O_WORKDIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster for the system to access and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the submitted job and will be cleared when the job ends, so you need to copy your results back to your work directory at the end of your script. An example script should include the following lines:

...
cd $TMPDIR
cp $PBS_O_WORKDIR/explorer.k .
... #launch the solver and execute
pbsdcp -g '*' $PBS_O_WORKDIR
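Filling in the elided lines with the commands from the parallel example script above, a complete $TMPDIR-based script might look like this sketch (same module, input file, and job size as before):

```shell
#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=2:ppn=12
#PBS -j oe
# Set up the LS-DYNA environment
module swap mvapich2/1.7 intelmpi
module load mpp-dyna/971_d_R7.0.0
# Stage the input in the job's temporary directory
cd $TMPDIR
cp $PBS_O_WORKDIR/explorer.k .
# Run the mpp solver
mpiexec mpp971 I=explorer.k NCPU=24
# Gather result files from all nodes back to the submission directory
pbsdcp -g '*' $PBS_O_WORKDIR
```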

Usage on Ruby

Set-up on Ruby

To view available modules installed on Ruby, use  module avail ls-dyna  for smp solvers, and use  module spider mpp  for mpp solvers. In the module name, '_s' indicates single precision and '_d' indicates double precision. For example, mpp-dyna/971_d_R7.0.0 is the mpp solver with double precision on Ruby. Use  module load name  to load LS-DYNA with a particular software version. For example, use  module load mpp-dyna/971_d_R7.0.0   to load LS-DYNA mpp solver version 7.0.0 with double precision on Ruby.

Batch Usage on Ruby

When you log into ruby.osc.edu you are actually logged into a Linux system referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=20 -l walltime=00:20:00 

which requests one whole node with 20 cores ( -l nodes=1:ppn=20 ) for a walltime of 20 minutes ( -l walltime=00:20:00 ). You may adjust these values as needed.

Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Please follow the steps below to use LS-DYNA via the batch system:

1) copy your input files ( explorer.k  in the example below) to your work directory at OSC

2) create a batch script, similar to the following file, saved as job.txt . It uses the smp solver for a serial job (nodes=1) on Ruby:

#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=1:ppn=20
#PBS -j oe
# The following lines set up the LSDYNA environment
module load ls-dyna/971_d_7.0.0
#
# Move to the directory where the input files are located
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA (number of cpus > 1)
#
lsdyna I=explorer.k NCPU=20

 3) submit the script to the batch queue with the command:  qsub job.txt

When the job is finished, all the result files will be found in the directory where you submitted your job ($PBS_O_WORKDIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster for the system to access and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the submitted job and will be cleared when the job ends, so you need to copy your results back to your work directory at the end of your script.

Non-interactive Batch Job (Parallel Run)

Please follow the steps below to use LS-DYNA via the batch system:

1) copy your input files ( explorer.k  in the example below) to your work directory at OSC

2) create a batch script, similar to the following file, saved as job.txt. It uses the mpp solver for a parallel job (nodes>1) on Ruby:

#PBS -N plate_test
#PBS -l walltime=5:00:00
#PBS -l nodes=2:ppn=20
#PBS -j oe
# The following lines set up the LSDYNA environment
module swap mvapich2/1.7 intelmpi/5.0.1
module load mpp-dyna/971_d_R7.0.0
#
# Move to the directory where the input files are located
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA (number of cpus > 1)
#
mpiexec mpp971 I=explorer.k NCPU=40

 3) submit the script to the batch queue with the command:  qsub job.txt

When the job is finished, all the result files will be found in the directory where you submitted your job ($PBS_O_WORKDIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster for the system to access and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the submitted job and will be cleared when the job ends, so you need to copy your results back to your work directory at the end of your script. An example script should include the following lines:

...
cd $TMPDIR
cp $PBS_O_WORKDIR/explorer.k .
... #launch the solver and execute
pbsdcp -g '*' $PBS_O_WORKDIR


LS-PrePost

Introduction

LS-PrePost is an advanced pre- and post-processor that is delivered free with LS-DYNA. The user interface is designed to be both efficient and intuitive. LS-PrePost runs on Windows, Linux, and Unix, utilizing OpenGL graphics to achieve fast rendering and XY plotting. The latest builds can be downloaded from LSTC's FTP Site.

 

The preferred way of accessing LS-PrePost is through OnDemand's Glenn desktop application. This gives you a preconfigured environment with GPU acceleration enabled.

Availability

LS-PrePost is available on both the Glenn and Oakley clusters. A module was created for LS-PrePost 4.0 on Oakley and can be loaded with 'module load lspp'. All other versions can be loaded with the corresponding dyna modules.

Version  Glenn  Oakley
v2.3     X
v3.0            X
v3.2            X
v4.0            X

Usage

Running LS-PrePost on Oakley through OnDemand's Glenn Desktop

Below are instructions for running LS-PrePost on Oakley through Glenn's OnDemand desktop interface with GPU acceleration enabled. To run LS-PrePost with the slower X11-tunneling procedure, see the instructions further below.

 

1) Log in to OnDemand with your HPC username and password.

2) Launch the Glenn Desktop from "Apps" menu. 

3) Open a Terminal window (Applications > Accessories > Terminal)

4) Type the following command to connect to Oakley:

    ssh -X username@oakley.osc.edu       

          * Where "username" is your username.

5)  Once logged in to Oakley, submit an interactive job with the following command:

    qsub -X -I -l nodes=1:ppn=12:gpus=2:vis -l walltime=hh:mm:ss

          * pick a walltime that is close to the amount of time you will need for working in the GUI application.

6) Once your job starts, make a note of the hostname for the compute node your job is running on.  You can find this information by typing the following command:

    hostname

7) Open another Terminal window, and type the following commands:

     module load VirtualGL
     vglconnect username@job-hostname

           * job-hostname is the information you found in step 6; your command might look something like this, for example:  

     vglconnect ahahn@n0656.ten.osc.edu

           You will be asked for a password to connect, which should be your HPC password.

8) Now, you should be connected to your running job's GPU node.  Run the following commands to launch LS-PrePost version 4.0:

     export LD_LIBRARY_PATH=/usr/local/MATLAB/R2013a/sys/opengl/lib/glnxa64
     /usr/local/lstc/ls-prepost/lsprepost4.0_centos6/lspp4

At startup LS-PrePost displays a graphical interface for model generation and results post-processing.
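The steps above can be condensed into the session below; username, the job's node name, and the walltime are placeholders to replace with your own values, while the library and executable paths are the ones given in step 8.

```shell
# Steps 4-5: from a terminal on the Glenn desktop, connect to Oakley
# and request an interactive GPU node
ssh -X username@oakley.osc.edu
qsub -X -I -l nodes=1:ppn=12:gpus=2:vis -l walltime=01:00:00

# Step 6: once the job starts, note the compute node's hostname
hostname

# Step 7: in a second terminal on the Glenn desktop, connect to that node
module load VirtualGL
vglconnect username@n0656.ten.osc.edu   # replace with the hostname from step 6

# Step 8: on the GPU node, launch LS-PrePost 4.0
export LD_LIBRARY_PATH=/usr/local/MATLAB/R2013a/sys/opengl/lib/glnxa64
/usr/local/lstc/ls-prepost/lsprepost4.0_centos6/lspp4
```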

 

Running LS-PrePost on Oakley or Glenn with X11 forwarding

The following procedure results in a much slower GUI, but may be useful if the instructions above are not working. It can be done entirely from the command line, with no need to log in to OnDemand. You may need to edit your terminal settings to enable X11 forwarding.

1) Login to Oakley or Glenn with X11 forwarding enabled

ssh -X username@oakley.osc.edu

or

ssh -X username@glenn.osc.edu

 

2) Submit an interactive job

qsub -I -X -l nodes=1:ppn=12 -l walltime=hh:mm:ss

and wait for it to start. Use ppn=8 for Glenn.

 

3) Load the LS-Dyna module and start up LS-PrePost application

module load ls-dyna
lspp3

 

   For LS-Prepost 4.0 on Oakley, use the following commands instead:

module load lspp
lspp4

 

An X11 window should pop up. If you get an error along the lines of:

Error: Unable to initialize gtk, is DISPLAY set properly?

 

Double check that you:

1) logged in with X11 forwarding enabled

2) have configured your X11 settings for your terminal

3) included the -X flag with the qsub command

4) have an X11 server running on your computer (Xming, Xorg, XQuartz, etc.).

Documentation

Documentation for LS-PrePost may be obtained at: http://www.lstc.com/lspp/


User-Defined Material for LS-DYNA

This page describes how to specify user-defined material to use within LS-DYNA. The user-defined subroutines in LS-DYNA allow the program to be customized for particular applications. In order to define user material, LS-DYNA must be recompiled.

Usage

The first step in running a simulation with user-defined material is to build a new executable. The following is an example using solver version mpp971_s_R7.1.1.

When you log into the Oakley system, load mpp971_s_R7.1.1 with the command:

module load mpp-dyna/R7.1.1

Next, copy the mpp971_s_R7.1.1 object files and Makefile to your current directory:

cp /usr/local/lstc/mpp-dyna/R7.1.1/usermat/* $PWD

Next, update the dyn21.f file with your user defined material model subroutine. Please see the LS-DYNA User's Manual (Keyword version) for details regarding the format and structure of this file.

Once your user defined model is setup correctly in dyn21.f, build the new mpp971 executable with the command:

make
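The build steps above can be summarized as follows; run this on an Oakley login node (the module name and usermat path are the ones given above).

```shell
# Load the solver version that matches the usermat object files
module load mpp-dyna/R7.1.1
# Copy the usermat object files and Makefile into the current directory
cp /usr/local/lstc/mpp-dyna/R7.1.1/usermat/* $PWD
# ...edit dyn21.f here to add your material model subroutine...
# Build the new mpp971 executable in this directory
make
```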

To execute a multi processor (ppn > 1) run with your new executable, execute the following steps:

1) move your input file to a directory on Oakley (pipe.k in the example below)

2) copy your newly created mpp971 executable to this directory as well

3) create a batch script (lstc_umat.job) like the following:

#PBS -N LSDYNA_umat
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=8
#PBS -j oe
#PBS -S /bin/csh

# This is the template batch script for running a pre-compiled
# MPP 971 v7600 LS-DYNA.  
# Total number of processors is ( nodes x ppn )
#
# The following lines set up the LSDYNA environment
module load mpp-dyna/R7.1.1
#
# Move to the directory where the job was submitted from
# (i.e. PBS_O_WORKDIR = directory where you typed qsub)
#
cd $PBS_O_WORKDIR
#
# Run LSDYNA 
# NOTE: you have to put in your input file name
#
mpiexec mpp971 I=pipe.k NCPU=16

4) submit this job to the batch queue with the command:  qsub lstc_umat.job

The output result files will be saved to the directory you ran the qsub command from (known as $PBS_O_WORKDIR).
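Rather than hard-coding NCPU=16 in the script above, the processor count can be derived from the node file that PBS creates for each job; this is a general PBS idiom, not something specific to LS-DYNA. The stand-in node file below only simulates what PBS provides at runtime.

```shell
# In a real job, $PBS_NODEFILE points to a file with one line per
# allocated processor slot, so its line count equals nodes x ppn.
# Simulate it here with a stand-in file for illustration:
PBS_NODEFILE=$(mktemp)
for i in $(seq 16); do echo n0001; done > $PBS_NODEFILE   # as if nodes=2:ppn=8

NCPU=$(wc -l < $PBS_NODEFILE)
echo $NCPU   # 16

rm -f $PBS_NODEFILE
```

In a job script, `mpiexec mpp971 I=pipe.k NCPU=$NCPU` then stays correct if the nodes or ppn request changes.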

Documentation

Online documentation is available on the LSTC website.
