ORCA is an ab initio quantum chemistry program package that contains modern electronic structure methods, including density functional theory, many-body perturbation theory, coupled cluster, multireference, and semi-empirical quantum chemistry methods. Its main fields of application are larger molecules, transition metal complexes, and their spectroscopic properties. ORCA is developed in the research group of Frank Neese. Visit the ORCA Forum for additional information.
Availability and Restrictions
Versions
ORCA is available on the OSC clusters. These are the versions currently available:
Version | Owens | Pitzer | Notes (required MPI module) |
---|---|---|---|
4.0.1.2 | X | X | openmpi/2.1.6-hpcx |
4.1.0 | X | X | openmpi/3.1.4-hpcx |
4.1.1 | X | X | openmpi/3.1.4-hpcx |
4.1.2 | X | X | openmpi/3.1.4-hpcx |
4.2.1 | X* | X* | openmpi/3.1.6-hpcx |
5.0.0 | X | X | openmpi/4.1.2-hpcx |
5.0.2 | X | X | openmpi/4.1.2-hpcx |
5.0.3 | X | X | openmpi/4.1.2-hpcx |
5.0.4 | X | X | openmpi/4.1.2-hpcx |
* Current default version
You can use module spider orca to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
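To confirm what a specific version needs before loading it (for example, the MPI module listed in the Notes column above), the version can be queried directly; orca/5.0.4 below is just one of the versions from the table:
# list the modules that must be loaded before orca/5.0.4 is available
module spider orca/5.0.4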
Access
ORCA is available to OSC academic users; users need to sign up at the ORCA Forum. You will receive a registration confirmation email from the ORCA management. Please contact OSC Help with the confirmation email for access.
Publisher/Vendor/Repository and License Type
ORCA, Academic (Computer Center)
Usage
Usage on Owens
Set-up
ORCA usage is controlled via modules. Load one of the ORCA modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of the ORCA module, use module load orca. To select a particular software version, use module load orca/version. For example, use module load orca/4.1.0 to load ORCA version 4.1.0 on Owens. You can use module spider orca/{version} to view details for a particular version, including any other modules that must be loaded first.
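As a concrete example, loading ORCA 4.2.1 together with the MPI module listed for it in the table above (the same pair used in the batch script below) can be done with:
module reset
module load openmpi/3.1.6-hpcx
module load orca/4.2.1
module list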
Batch Usage
When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is preferable for large problems since more resources can be used.
Interactive Batch Session
For an interactive batch session one can run the following command:
sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00
which requests one node with one core (-N 1 -n 1) for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.
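Once the interactive session starts on a compute node, ORCA can be run directly; h2o.in below is just a placeholder for your own input file:
# load an MPI module and a matching ORCA version (see the table above)
module load openmpi/3.1.6-hpcx orca/4.2.1
# run ORCA on a small test input; with a single core requested, the input
# should not ask for parallel processes (%pal)
$ORCA/orca h2o.in > h2o.out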
Non-interactive Batch Job
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is an example batch script for a parallel run:
#!/bin/bash
#SBATCH --job-name orca_mpi_test
#SBATCH --time=0:10:0
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account=<project-account>
#SBATCH --gres=pfsdir

module reset
module load openmpi/3.1.6-hpcx
module load orca/4.2.1
module list

cp h2o_b3lyp_mpi.inp $PFSDIR/h2o_b3lyp_mpi.inp
cd $PFSDIR

$ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out

ls
cp h2o_b3lyp_mpi.out $SLURM_SUBMIT_DIR/h2o_b3lyp_mpi.out
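The script above expects an ORCA input file named h2o_b3lyp_mpi.inp in the submission directory. As a rough sketch only (the method, basis set, and geometry here are illustrative, not OSC's actual example input), such an input could look like the following; the nprocs value in the %pal block should not exceed the total number of SLURM tasks requested (2 x 28 = 56 above):
! B3LYP def2-SVP
%pal
  nprocs 56   # number of MPI processes; keep this <= total SLURM ntasks
end
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.587000
H   0.000000  -0.757000   0.587000
*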
Usage on Pitzer
Set-up
ORCA usage is controlled via modules. Load one of the ORCA modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of the ORCA module, use module load orca. To select a particular software version, use module load orca/version. For example, use module load orca/4.1.0 to load ORCA version 4.1.0 on Pitzer. You can use module spider orca/{version} to view details for a particular version, including any other modules that must be loaded first.
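For example, to load ORCA 5.0.4 together with the MPI module listed for it in the version table above:
module reset
module load openmpi/4.1.2-hpcx
module load orca/5.0.4
module list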
Batch Usage
When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is preferable for large problems since more resources can be used.
Interactive Batch Session
For an interactive batch session one can run the following command:
sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00
which requests one node with one core (-N 1 -n 1) for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.
Non-interactive Batch Job
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is an example batch script for a parallel run:
#!/bin/bash
#SBATCH --job-name orca_mpi_test
#SBATCH --time=0:10:0
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --account=<project-account>
#SBATCH --gres=pfsdir

module reset
module load openmpi/3.1.6-hpcx
module load orca/4.2.1
module list

cp h2o_b3lyp_mpi.inp $PFSDIR/h2o_b3lyp_mpi.inp
cd $PFSDIR

$ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out

ls
cp $PFSDIR/h2o_b3lyp_mpi.out $SLURM_SUBMIT_DIR/h2o_b3lyp_mpi.out
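Assuming the script above is saved as, for example, orca_mpi_test.sh (the file name is arbitrary), it can be submitted and monitored with the standard SLURM commands:
# submit the job script to the batch system
sbatch orca_mpi_test.sh
# check the status of your queued and running jobs
squeue -u $USER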
Known Issues
Resolution: Resolved (workaround)
Update: 4/27/2023
Version: At least through 5.0.4
The default CPU binding for ORCA jobs can fail sporadically. The failure is almost immediate and produces a cryptic error message, e.g.
$ORCA/orca h2o.in
.
.
.
Three workarounds are known. First, invoke ORCA without CPU binding:
$ORCA/orca h2o.in "--bind-to none"
Second, use a non-hpcx MPI module with ORCA:
module load openmpi/4.1.2-tcp orca/5.0.4
$ORCA/orca h2o.in
Third, use more SLURM ntasks relative to ORCA nprocs; this does not prevent the failure but merely reduces its likelihood:
#SBATCH --ntasks=10

cat << EOF > h2o.in
%pal nprocs 5 end
.
.
.
EOF

$ORCA/orca h2o.in
Note that each workaround can have performance side effects, and the last workaround can have direct charging consequences. We recommend that users benchmark their jobs to gauge the most desirable approach.
Update: 10/24/2022
Version: 4.1.2, 4.2.1, 5.0.0 and above
If you find that your MPI job fails immediately, please remove all extra mpirun parameters from the command line, e.g. change
$ORCA/orca h2o_b3lyp_mpi.inp "--machinefile $PBS_NODEFILE" > h2o_b3lyp_mpi.out
to
$ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out
We found a bug involving ORCA and OpenMPI with a recent SLURM update, which causes multi-node MPI jobs to fail immediately. Since the OpenMPI community does not keep up with SLURM, we have made a permanent change that replaces the mpirun used in ORCA with srun.
Version: 4.1.0
For an MPI job that requests multiple nodes, the job can be run from a globally accessible working directory, e.g., the home or scratch directories. This is useful if one needs more space for temporary files. However, ORCA 4.1.0 CANNOT run a job on our scratch filesystem. The issue has been reported on the ORCA forum and is resolved in ORCA 4.1.2. In the examples listed above, scratch storage was used (--gres=pfsdir and $PFSDIR).
Further Reading
User manual is available from the ORCA Forum
Job submission information is available from the Batch Submission Guide
Scratch storage information is available from the Storage Documentation