From the Docker website: "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."
This document describes how to run Docker and Singularity containers on Owens. You can use containers from Docker Hub, Singularity Hub, or any other source. As examples we will use ubuntu from Docker Hub and vsoch/hello-world from Singularity Hub.
If you want to create your own container or modify an existing one, you must do it on a system where you have root access. Alternatively, you can use Singularity Hub to build a container from a recipe. You cannot build or update a container on Owens or any other OSC system.
The most up-to-date help on Singularity comes from the command itself.
singularity help exec
The Singularity web site has documentation including a User Guide and examples.
Setting up your environment for Singularity usage
To load the Singularity environment on Owens, run the following command:
module load singularity
Accessing a container
A Singularity container is a single file, typically with an .img suffix.
You can simply download ("pull") a container from a hub. Popular hubs are Docker Hub and Singularity Hub; you can search them for a container that meets your needs. Docker Hub has more containers and may be more up to date, but it serves a much wider community than just HPC. Singularity Hub is geared toward HPC, but the number of available containers is smaller. Additionally, there are domain and vendor repositories, such as biocontainers and NVIDIA HPC containers, that may have relevant containers.
Examples of “pulling” a container from both hubs:
Docker Hub: Pull the image tagged 7.2.0 from the gcc repository on Docker Hub. The 7.2.0 is called a tag.
singularity pull docker://gcc:7.2.0
When you pull from Docker Hub, you will see a warning message:
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
As we mentioned, containers from Singularity Hub are more likely to be compatible with our container environment; but since Docker Hub is larger and more frequently updated, it is more likely to have the container you are looking for.
Singularity Hub: Pull the vsoch/hello-world container from Singularity Hub. Since no tag is specified, it pulls from the master branch of the repository.
singularity pull shub://vsoch/hello-world
Example: Pull an Ubuntu container from Docker Hub.
singularity pull docker://ubuntu
Downloading containers from the hubs is not the only way to get one; you can, for example, get a copy from a colleague's computer or directory. If you would like to create your own container, start from the Singularity User Guide. If you have any questions, please contact OSC Help.
Running a container
There are four ways to run a container under Singularity.
You can do this either in a batch job or on a login node. (Don’t run on a login node if the container will be performing heavy computation, of course.)
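For running a container in a batch job, a minimal job script might look like the following sketch. The job name, walltime, and container file name (ubuntu.img, assumed to be in the submission directory) are placeholders; adjust them for your own work.

```shell
#PBS -N singularity-test
#PBS -l nodes=1:ppn=28
#PBS -l walltime=00:10:00

# Move to the directory the job was submitted from,
# load the Singularity module, and run a command inside the container.
cd $PBS_O_WORKDIR
module load singularity
singularity exec ubuntu.img cat /etc/os-release
```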
We note that the operating system on Owens is Red Hat:
[owens-login01]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"
In the examples below we will often check the operating system to show that we are really inside a container.
Run container like a native command
If you simply run the container image it will execute the container’s runscript.
Example: Run an Ubuntu shell. (See also the “shell” sub-command below.)
[owens-login01]$ ./ubuntu.img
[owens-login01]$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
[owens-login01]$ exit
exit
[owens-login01]$
Note that this container returns you to your native OS after you run it.
Example: Run the hello-world container.
[owens-login01]$ ./vsoch-hello-world-master.img
RaawwWWWWWRRRR!!
[owens-login01]$
Use the “run” sub-command
The Singularity “run” sub-command does the same thing as running a container directly as described above. That is, it executes the container’s runscript.
Example: Run a container from a local file
[owens-login01]$ singularity run vsoch-hello-world-master.img
RaawwWWWWWRRRR!!
[owens-login01]$
Example: Run a container from a hub without explicitly downloading it
[owens-login01]$ singularity run shub://vsoch/hello-world
Progress |===================================| 100.0%
RaawwWWWWWRRRR!!
[owens-login01]$
Use the “exec” sub-command
The Singularity “exec” sub-command lets you execute an arbitrary command within your container instead of just the runscript.
Example: Find out what operating system the
vsoch/hello-world container uses
[owens-login01]$ singularity exec vsoch-hello-world-master.img cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
[owens-login01]$
Use the “shell” sub-command
The Singularity “shell” sub-command invokes an interactive shell within a container.
Example: Examine the contents of the vsoch/hello-world container.
Note: A container’s runscript is the file /singularity within the container.
[owens-login01]$ singularity shell vsoch-hello-world-master.img
Singularity: Invoking an interactive shell within container...

Singularity vsoch-hello-world-master.img:~/singularity> ls /
bin   dev  home  lib64       media  opt   rawr.sh  run   singularity  sys  users  var
boot  etc  lib   lost+found  mnt    proc  root     sbin  srv          tmp  usr
Singularity vsoch-hello-world-master.img:~/singularity> cat /singularity
#!/bin/sh
exec /bin/bash /rawr.sh
Singularity vsoch-hello-world-master.img:~/singularity> cat /rawr.sh
#!/bin/bash
echo "RaawwWWWWWRRRR!!"
Singularity vsoch-hello-world-master.img:~/singularity> exit
exit
[owens-login01]$
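To see how this runscript chain works independent of Singularity, you can reproduce it with ordinary files. This is a local sketch; the file names and contents mirror those shown in the container above.

```shell
# Recreate the hello-world runscript chain with plain files:
# the runscript is a /bin/sh script that execs another script.
mkdir -p /tmp/runscript-demo
printf '#!/bin/bash\necho "RaawwWWWWWRRRR!!"\n' > /tmp/runscript-demo/rawr.sh
printf '#!/bin/sh\nexec /bin/bash /tmp/runscript-demo/rawr.sh\n' > /tmp/runscript-demo/runscript
chmod +x /tmp/runscript-demo/runscript
# Running the runscript prints the greeting, just as running the container does.
/tmp/runscript-demo/runscript
```

Running a Singularity image directly does essentially this: it executes the container's /singularity script, which in turn runs whatever the image author set up.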
Example: Run an Ubuntu shell. Note the “Singularity” prompt within the shell.
[owens-login01]$ singularity shell ubuntu.img
Singularity: Invoking an interactive shell within container...

Singularity ubuntu.img:~/singularity> cat /singularity
#!/bin/sh
exec /bin/bash "$@"
Singularity ubuntu.img:~/singularity> exit
exit
[owens-login01]$
File system access
When you use a container you run within the container’s environment. The directories available to you by default from the host environment (Owens) are
- your home directory
- working directory (directory you were in when you ran the container)
You can review our Available File Systems page for more details about our file system access policy.
If you run the container within a job, you will have the usual access to the $PFSDIR environment variable, provided you add the node attribute "pfsdir" to the job request (nodes=XX:ppn=XX:pfsdir). You can access most of our file systems from a container without any special treatment; however, if you need $TMPDIR you must pass it in on the command line, as in this example:
SINGULARITYENV_TMPDIR=$TMPDIR singularity shell ubuntu.img
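Singularity passes environment variables prefixed with SINGULARITYENV_ into the container, with the prefix stripped. Note also that the VAR=value command form used above is standard shell syntax: the assignment applies only to that one command's environment, and your login shell is unchanged. You can verify this scoping behavior without a container:

```shell
# The variable is visible to the launched process (env)...
SINGULARITYENV_TMPDIR=/tmp/example env | grep '^SINGULARITYENV_TMPDIR='
# ...but remains unset in the calling shell afterward (prints "unset").
echo "${SINGULARITYENV_TMPDIR:-unset}"
```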
GPU usage within a container
If you have a GPU-enabled container you can easily run it on Owens just by adding the
--nv flag to the singularity exec or run command. The example below comes from the exec command section of the Singularity User Guide. It runs a TensorFlow example using a GPU on Owens. (Output has been omitted for brevity.)
[owens-login01]$ qsub -I -l nodes=1:ppn=28:gpus=1
...
[o0756]$ cd $PBS_O_WORKDIR
[o0756]$ module use /usr/local/share/lmodfiles/project/osc
[o0756]$ module load singularity
[o0756]$ git clone https://github.com/tensorflow/models.git
[o0756]$ singularity exec --nv docker://tensorflow/tensorflow:latest-gpu \
    python ./models/tutorials/image/mnist/convolutional.py