On September 22nd, OSC switched to Slurm for job scheduling and resource management on the Pitzer Cluster, along with the deployment of the new Pitzer hardware. We are in the process of updating the example job scripts for each software package. If a Slurm example is not yet available, please consult our general Slurm information page or contact OSC Help.


TURBOMOLE is an ab initio computational chemistry program that implements various quantum chemistry algorithms. It is focused on efficiency, notably using the resolution of the identity (RI) approximation.

Availability and Restrictions


These versions are currently available (S means serial executables, O means OpenMP executables, and P means parallel MPI executables):

Version   Owens   Pitzer
7.1       SOP
7.2.1     SOP*
7.3               SOP*
* Current default version

You can use module spider turbomole to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of TURBOMOLE for academic purposes requires validation. To obtain validation, please contact OSC Help for further instructions.

Publisher/Vendor/Repository and License Type

COSMOlogic, Commercial


Usage on Owens and Pitzer

Set-up on Owens

To load the default version of the TURBOMOLE module on Owens, use module load turbomole; this works for both serial and parallel programs. To select a particular software version, use module load turbomole/version. For example, use module load turbomole/7.1 to load TURBOMOLE version 7.1 on Owens.

Using Turbomole on Owens

To execute a TURBOMOLE program:

module load turbomole
<turbomole command>
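As a concrete illustration, a serial single-point calculation might look like the following sketch. The working-directory name is a placeholder, and the directory is assumed to already contain TURBOMOLE input (a control file, typically generated with TURBOMOLE's interactive define tool):

```shell
# Load the default TURBOMOLE module
module load turbomole

# Change to a directory that already contains TURBOMOLE input
# ("my_turbomole_case" is a hypothetical directory name)
cd my_turbomole_case

# Run a single-point SCF calculation, capturing the output
dscf > dscf.out
```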

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system, not on the login node, which makes them the right choice for large problems that need more resources.
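For example, a minimal non-interactive submission on Owens might look like this (PBS/Torque syntax, matching the qsub examples on this page; the script name job.pbs is a placeholder):

```shell
# Submit a job script to the Owens batch system
qsub job.pbs

# Check the status of your queued and running jobs
qstat -u $USER
```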

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=28 -l walltime=00:20:00

which requests one node with 28 cores (-l nodes=1:ppn=28) for a walltime of 20 minutes (-l walltime=00:20:00). Adjust these values to suit your needs.
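On Pitzer, which uses Slurm, a comparable interactive session can be requested with salloc. The core count below (40, for the original Pitzer nodes) is an assumption; adjust it to the node type you target:

```shell
# Request 1 node with 40 tasks for 20 minutes (Slurm syntax, for Pitzer)
salloc --nodes=1 --ntasks-per-node=40 --time=00:20:00
```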

Sample batch scripts and input files are available here:


Note for Slurm job script

Upon Slurm migration, the presets for parallel jobs are not compatible with the Slurm environment on Pitzer. Users must set up the parallel environment explicitly in their job scripts to get the correct TURBOMOLE binaries.

To set up an MPI case, add the following to a job script:

export PATH=$TURBODIR/bin/`sysname`:$PATH

TURBOMOLE's sysname utility prints the architecture string for the current setup, so this prepends the matching TURBOMOLE binary directory to your PATH. (sysname selects the MPI or SMP binary directory when the PARA_ARCH environment variable is set to MPI or SMP, respectively.)

An example script:

#!/bin/bash
#SBATCH --job-name="turbomole_mpi_job"
#SBATCH --nodes=2
#SBATCH --time=0:10:0

module load intel
module load turbomole/7.3

# Prepend the architecture-specific TURBOMOLE binary directory to PATH
export PATH=$TURBODIR/bin/`sysname`:$PATH

# ...run a parallel TURBOMOLE command here, e.g. jobex or ridft


To set up an SMP (OpenMP) case, add the following to a job script:

export PATH=$TURBODIR/bin/`sysname`:$PATH

An example script to run an SMP job on an exclusive node:

#!/bin/bash
#SBATCH --job-name="turbomole_smp_job"
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --time=0:10:0

module load intel
module load turbomole/7.3

# Prepend the architecture-specific TURBOMOLE binary directory to PATH
export PATH=$TURBODIR/bin/`sysname`:$PATH

# ...run a TURBOMOLE command here, e.g. jobex or ridft


Further Reading