It is now possible to run Docker and Singularity containers on the Ruby, Owens and Pitzer clusters at OSC. Single-node jobs are currently supported, including GPU jobs; MPI jobs are planned for the future.
From the Docker website: "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."
This document describes how to run Docker and Singularity containers on the Ruby, Owens and Pitzer clusters. You can use containers from Docker Hub, Sylabs Cloud, Singularity Hub, or any other source. As an example we will use the ubuntu container from Docker Hub.
If you want to create your own container or modify an existing one, you must do it on a system where you have root access; alternatively, you can use Singularity Hub to build a container from a recipe. You cannot build or update a container on any OSC system.
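For illustration, building your own container typically starts from a definition file. The sketch below is a minimal example (the file name ubuntu-gcc.def and the package choice are hypothetical); it must be built on a machine where you have root access, not on an OSC system:

```
# ubuntu-gcc.def -- minimal example definition file (hypothetical name)
Bootstrap: docker
From: ubuntu:18.04

%post
    # Runs inside the container at build time, as root
    apt-get update && apt-get install -y gcc

%runscript
    # Executed when the container image is run directly
    exec gcc --version
```

On a machine with root access you would then build it with a command along the lines of: sudo singularity build ubuntu-gcc.sif ubuntu-gcc.def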
The most up-to-date help on Singularity comes from the command itself.
singularity help exec
The Singularity web site has documentation including a User Guide and examples.
Setting up your environment for Singularity usage
No setup is required. You can use Singularity directly on all clusters.
Accessing a container
A Singularity container is a single file, typically with a .sif file extension when you pull a container from a hub.
You can simply download ("pull") a container from a hub. Popular hubs are Docker Hub and Singularity Hub. You can go there and search if they have a container that meets your needs. Docker Hub has more containers and may be more up to date but supports a much wider community than just HPC. Singularity Hub is for HPC, but the number of available containers are fewer. Additionally there are domain and vendor repositories such as biocontainers and NVIDIA HPC containers that may have relevant containers.
Examples of “pulling” a container from both hubs:
Docker Hub: Pull version 7.2.0 from the gcc repository on Docker Hub. The version label, 7.2.0, is called a tag.
singularity pull docker://gcc:7.2.0
Singularity Hub: Pull the vsoch/hello-world container from Singularity Hub. Since no tag is specified it pulls from the master branch of the repository.
singularity pull shub://vsoch/hello-world
Example: Pull an Ubuntu container from Docker Hub.
singularity pull docker://ubuntu:18.04
Downloading containers from the hubs is not the only way to get one; you can, for example, copy one from a colleague's computer or directory. If you would like to create your own container you can start from the user guide below. If you have any questions, please contact OSC Help.
Running a container
There are four ways to run a container under Singularity.
You can use any of them either in a batch job or on a login node. (Don’t run on a login node if the container will be performing heavy computation, of course.)
We note that the operating system on Owens is Red Hat:
[owens-login01]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"
ID="rhel"
[..more..]
In the examples below we will often check the operating system to show that we are really inside a container.
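As a sketch of the batch-job case, a minimal job script might look like the following (the script name job.sh is hypothetical, and it assumes a copy of ubuntu_18.04.sif in the submission directory):

```shell
#!/bin/bash
# job.sh -- hypothetical batch script that runs a command in a container
#PBS -N singularity-test
#PBS -l nodes=1:ppn=28
#PBS -l walltime=0:10:00

# Start from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Run a single command inside the container image
singularity exec ubuntu_18.04.sif cat /etc/os-release
```

You would submit it with: qsub job.sh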
Run container like a native command
If you simply run the container image it will execute the container’s runscript.
Example: Run an Ubuntu shell. (See also the “shell” sub-command below.)
[owens-login01]$ ./ubuntu_18.04.sif
[owens-login01]$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
[..more..]
[owens-login01]$ exit
exit
[owens-login01]$
Note that this container returns you to your native OS after you run it.
[owens-login01]$ ./hello-world_latest.sif
RaawwWWWWWRRRR!!
[owens-login01]$
Use the “run” sub-command
The Singularity “run” sub-command does the same thing as running a container directly as described above. That is, it executes the container’s runscript.
Example: Run a container from a local file
[owens-login01]$ singularity run hello-world_latest.sif
RaawwWWWWWRRRR!!
[owens-login01]$
Example: Run a container from a hub without explicitly downloading it
[owens-login01]$ singularity run shub://vsoch/hello-world
Progress |===================================| 100.0%
RaawwWWWWWRRRR!!
[owens-login01]$
Use the “exec” sub-command
The Singularity “exec” sub-command lets you execute an arbitrary command within your container instead of just the runscript.
Example: Find out what operating system the vsoch/hello-world container uses.
[owens-login01]$ singularity exec hello-world_latest.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
[..more..]
[owens-login01]$
Use the “shell” sub-command
The Singularity “shell” sub-command invokes an interactive shell within a container.
Example: Examine the contents of the vsoch/hello-world container.
Note: A container’s runscript is the file /singularity within the container.
[owens-login01 singularity]$ singularity shell hello-world_latest.sif
Singularity: Invoking an interactive shell within container...
Singularity hello-world_latest.sif:~/singularity> ls /
bin   dev  home  lib64       media  opt   rawr.sh  run   singularity  sys  users  var
boot  etc  lib   lost+found  mnt    proc  root     sbin  srv          tmp  usr
Singularity hello-world_latest.sif:~/singularity> cat /singularity
#!/bin/sh
exec /bin/bash /rawr.sh
Singularity hello-world_latest.sif:~/singularity> cat /rawr.sh
#!/bin/bash
echo "RaawwWWWWWRRRR!!"
Singularity hello-world_latest.sif:~/singularity> exit
exit
[owens-login01 singularity]$
Example: Run an Ubuntu shell. Note the “Singularity” prompt within the shell.
[owens-login01 singularity]$ singularity shell ubuntu_18.04.sif
Singularity ubuntu_18.04.sif:~/singularity> cat /singularity
#!/bin/sh
exec /bin/bash "$@"
Singularity ubuntu_18.04.sif:~/singularity> exit
exit
[owens-login01 singularity]$
File system access
When you use a container you run within the container’s environment. By default, the following directories from the host environment are available:
- your home directory
- working directory (directory you were in when you ran the container)
You can review our Available File Systems page for more details about our file system access policy.
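If you need a host directory that is not mounted by default, Singularity can bind it into the container with the --bind (or -B) flag. A sketch, where the /fs/project path is purely illustrative:

```shell
# Bind-mount a host directory into the container so it is visible inside
# (/fs/project is an example path; substitute a directory you can access)
singularity exec --bind /fs/project ubuntu_18.04.sif ls /fs/project
```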
If you run the container within a job you will have the usual access to the $PFSDIR environmentment variable, provided you add the node attribute "pfsdir" to the job request (nodes=XX:ppn=XX:pfsdir). You can access most of our file systems from a container without any special treatment; however, if you need $TMPDIR you must pass it in on the command line, as in this example:
SINGULARITYENV_TMPDIR=$TMPDIR singularity shell ubuntu_18.04.sif
GPU usage within a container
If you have a GPU-enabled container you can easily run it on Owens or Pitzer just by adding the
--nv flag to the singularity exec or run command. The example below comes from the exec command section of the Singularity User Guide. It runs a TensorFlow example using a GPU on Owens. (Output has been omitted for brevity.)
[owens-login01]$ qsub -I -l nodes=1:ppn=28:gpus=1
...
[o0756]$ cd $PBS_O_WORKDIR
[o0756]$ git clone https://github.com/tensorflow/models.git
[o0756]$ singularity exec --nv docker://tensorflow/tensorflow:latest-gpu \
    python ./models/tutorials/image/mnist/convolutional.py