HOWTO: Use Docker and Singularity Containers at OSC

It is now possible to run Docker and Singularity containers on the Owens cluster at OSC. Single-node jobs are currently supported, including GPU jobs; MPI jobs are planned for the future.

From the Docker website:  "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."

This document will describe how to run Docker and Singularity containers on Owens. You can use containers from Docker Hub, Singularity Hub, or any other source. As examples we will use vsoch/singularity-hello-world and ubuntu from Docker Hub.

If you want to create your own container or modify a container, you will have to do it on a system where you have root access. You cannot build or update a container on Owens or any other OSC system.
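For reference, one common way to build a container on a machine where you do have root access is a Singularity definition file processed by the build sub-command (available in Singularity 2.4 and later). The file name and contents below are a hypothetical sketch, not a recipe from OSC:

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # Commands run inside the container at build time
    apt-get update && apt-get install -y curl

%runscript
    # Executed when the container image is run
    echo "Hello from my container"
```

On your own machine you would then run something like "sudo singularity build mycontainer.simg mycontainer.def" and copy the resulting image file to Owens.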

Getting help

The most up-to-date help on Singularity comes from the command itself.

singularity help
singularity help exec

The Singularity web site has documentation including a User Guide and examples.

Setting up your environment for Singularity usage

To load the Singularity environment on Owens, run the following command:

module load singularity
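You can confirm that the module loaded correctly by checking that the command is available, for example:

```
[owens-login01]$ which singularity
[owens-login01]$ singularity --version
```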

Accessing a container

A Singularity container is a single file with a .img or .simg extension.

You can either download (“pull”) a container from a hub or run it directly from the hub. You can also copy a container file to Owens from any other source.

Examples of “pulling” a container:

Example:  Pull version 7.2.0 of the gcc repository from Docker Hub. The 7.2.0 is called a tag.

singularity pull docker://gcc:7.2.0

Filename:  gcc-7.2.0.img

Example:  Pull the vsoch/singularity-hello-world container from the Singularity hub. Since no tag is specified it pulls from the master branch of the repository.

singularity pull shub://vsoch/hello-world

Filename:  vsoch-hello-world-master.img

Example:  Pull an Ubuntu container from Docker Hub.

singularity pull docker://ubuntu

Filename:  ubuntu.img

Running a container

There are four ways to run a container under Singularity.

You can do all this either in a batch job or on a login node. (Don’t run on a login node if the container will be performing heavy computation, of course.)
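As a sketch, a batch job that runs a container might look like the following; the job name, walltime, and command shown here are hypothetical:

```
#PBS -N singularity-test
#PBS -l nodes=1:ppn=28
#PBS -l walltime=0:10:00

cd $PBS_O_WORKDIR
module load singularity
singularity exec ubuntu.img cat /etc/os-release
```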

We note that the operating system on Owens is Red Hat:

[owens-login01]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
PRETTY_NAME="Red Hat Enterprise Linux"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"

In the examples below we will often check the operating system to show that we are really inside a container.

Run container like a native command

If you simply run the container image it will execute the container’s runscript.

Example:  Run an Ubuntu shell. (See also the “shell” sub-command below.)

[owens-login01]$ ./ubuntu.img
[owens-login01]$ cat /etc/os-release
VERSION="16.04.3 LTS (Xenial Xerus)"
PRETTY_NAME="Ubuntu 16.04.3 LTS"
[owens-login01]$ exit

Example:  Run vsoch/hello-world

Note that this container returns you to your native OS after you run it.

[owens-login01]$ ./vsoch-hello-world-master.img

Use the “run” sub-command

The Singularity “run” sub-command does the same thing as running a container directly as described above. That is, it executes the container’s runscript.

Example:  Run a container from a local file

[owens-login01]$ singularity run vsoch-hello-world-master.img

Example:  Run a container from a hub without explicitly downloading it

[owens-login01]$ singularity run shub://vsoch/hello-world
Progress |===================================| 100.0%

Use the “exec” sub-command

The Singularity “exec” sub-command lets you execute an arbitrary command within your container instead of just the runscript.

Example:  Find out what operating system the vsoch/hello-world container uses

[owens-login01]$ singularity exec vsoch-hello-world-master.img cat /etc/os-release
VERSION="14.04.5 LTS, Trusty Tahr"
PRETTY_NAME="Ubuntu 14.04.5 LTS"
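As another example, the gcc container pulled earlier can be used through exec to run its compiler; this sketch assumes the gcc-7.2.0.img file from the pull example above:

```
[owens-login01]$ singularity exec gcc-7.2.0.img gcc --version
```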

Use the “shell” sub-command

The Singularity “shell” sub-command invokes an interactive shell within a container.

Example:  Examine the contents of the vsoch/hello-world container

Note:  A container’s runscript is the file /singularity within the container.

[owens-login01]$ singularity shell vsoch-hello-world-master.img
Singularity: Invoking an interactive shell within container...
Singularity vsoch-hello-world-master.img:~/singularity> ls /
bin   dev  home  lib64       media  opt   root  sbin         srv  tmp    usr
boot  etc  lib   lost+found  mnt    proc  run   singularity  sys  users  var
Singularity vsoch-hello-world-master.img:~/singularity> cat /singularity
exec /bin/bash /
Singularity vsoch-hello-world-master.img:~/singularity> cat /
echo "RaawwWWWWWRRRR!!"
Singularity vsoch-hello-world-master.img:~/singularity> exit

Example:  Run an Ubuntu shell. Note the “Singularity” prompt within the shell.

[owens-login01]$ singularity shell ubuntu.img
Singularity: Invoking an interactive shell within container...
Singularity ubuntu.img:~/singularity> cat /singularity
exec /bin/bash "$@"
Singularity ubuntu.img:~/singularity> exit

File system access

When you use a container you are working within the container’s environment. The directories available to you by default from the host environment (Owens) are

  • your home directory
  • working directory (directory you were in when you ran the container)
  • /fs/project
  • /fs/scratch
  • /tmp

If you run the container within a job you will have the usual access to the $PFSDIR environment variable. In order to access $TMPDIR you must specify it on the command line as in this example:

SINGULARITYENV_TMPDIR=$TMPDIR singularity shell ubuntu.img

Note: Most environment variables are passed into the container by default, but for obscure reasons $TMPDIR is not. The above syntax passes it in explicitly.
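The SINGULARITYENV_ prefix is not specific to $TMPDIR: any host variable prefixed with SINGULARITYENV_ is passed into the container with the prefix stripped. For example (MYVAR is a hypothetical variable name):

```
[owens-login01]$ SINGULARITYENV_MYVAR=hello singularity exec ubuntu.img env | grep MYVAR
```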

GPU usage within a container

If you have a GPU-enabled container you can easily run it on Owens just by adding the --nv flag to the singularity exec or run command.  The example below comes from the exec command section of the Singularity User Guide.  It runs a TensorFlow example using a GPU on Owens.  (Output has been omitted from the example for brevity.)

[owens-login01]$ qsub -I -l nodes=1:ppn=28:gpus=1
[o0756]$ cd $PBS_O_WORKDIR
[o0756]$ module use /usr/local/share/lmodfiles/project/osc
[o0756]$ module load singularity

[o0756]$ git clone https://github.com/tensorflow/models.git

[o0756]$ singularity exec --nv docker://tensorflow/tensorflow:latest-gpu \
    python ./models/tutorials/image/mnist/convolutional.py