Singularity Containers (v3.2.1)

Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. Did you already invest in Docker? The Singularity software can import your Docker images without having Docker installed or being a superuser. Need to share your code? Put it in a Singularity container and your collaborator won’t have to go through the pain of installing missing dependencies. Do you need to run a different operating system entirely? You can “swap out” the operating system on your host for a different one within a Singularity container. As the user, you are in control of the extent to which your container interacts with its host. There can be seamless integration, or little to no communication at all.

Singularity v3.2.1 Quick Start

Introduction

It is simple: you create and customize Singularity containers locally, and then run them on the ARIS shared resources.

Make a container (LOCAL)

To create a Singularity image, you need root access on your local machine and a locally installed copy of Singularity.

sudo singularity create --size 2048 centos7-container.img

Import a Docker Container (LOCAL)

You can import any Docker image from the Docker Hub registry:

singularity import  centos7-container.img docker://centos:7
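
Note

The create and import commands above use Singularity 2.x syntax. With a Singularity 3.x installation, a rough one-step equivalent is singularity build (the .sif file name here is only an example):

sudo singularity build centos7-container.sif docker://centos:7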

Write in the container (LOCAL)

By default, containers run read-only. You can add the --writable flag to write inside the container.

sudo singularity shell --writable centos7-container.img

Singularity centos7-container.img:/root> mkdir -p /users
Singularity centos7-container.img:/root> mkdir -p /work
Singularity centos7-container.img:/root> mkdir -p /work2
Singularity centos7-container.img:/root> mkdir -p /apps
Singularity centos7-container.img:/root> exit

Note

To integrate your container with the ARIS system, there are a few directories that you must create:

  • /users path for access to your ARIS home directory.
  • /work path for access to the ARIS work file system.
  • /work2 path for access to the ARIS work2 file system.
  • /apps path for access to the ARIS apps directory.

Run image on the cluster

Upload image to ARIS

scp centos7-container.img login.aris.grnet.gr:centos7-container.img

Execute

singularity exec centos7-container.img pwd

Enter a shell (read only)

singularity shell centos7-container.img

Batch Execute (single node)

#!/bin/bash
#SBATCH --ntasks=1 # Total number of tasks
#SBATCH --nodes=1 # Total number of nodes requested
#SBATCH --ntasks-per-node=1 # Tasks per node
#SBATCH --mem=2800 # Memory per job in MB
#SBATCH -t 00:30:00 # Run time (hh:mm:ss) - (max 48h)
#SBATCH --partition=taskp # Submit queue
#SBATCH -A testproj # Accounting project

singularity exec centos7-container.img pwd
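
Save the directives and the exec line above in a job script and submit it with sbatch; the script name below is only an example:

sbatch singularity_job.sh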

Bind directories

Make the paths /users, /work, /work2 and /apps available inside the container with the --bind parameter. (The mount points must already exist in the image file; in your container shell run mkdir /users, mkdir /work, etc., as shown above.)
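
For example, assuming the mount points exist in the image, all four ARIS paths can be bound in a single call; the command below is a minimal sketch:

singularity exec --bind /users,/work,/work2,/apps centos7-container.img ls /users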

MPI

Running MPI jobs in a Singularity container requires some extra work. Please see the Container MPI instructions below.

Prepare the MPI container

If MPI is not already installed inside your container, follow the Open MPI installation directions below.

The instructions are given for RedHat-based images.
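
Open a writable root shell in the MPI image first and run the commands that follow inside it; the image name here is only an example, matching the job scripts further down:

sudo singularity shell --writable centos7_mpi.img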

# Install build tools and wget
yum groupinstall "Development Tools" -y
yum install -y wget

# Download and unpack Open MPI 2.1.2
wget https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.2.tar.bz2
bzip2 -d openmpi-2.1.2.tar.bz2
tar -xvf openmpi-2.1.2.tar
cd openmpi-2.1.2

# Build and install Open MPI under /usr/local
./configure --prefix=/usr/local
make
make install

# Compile the bundled ring example and place it in the container's PATH
mpicc examples/ring_c.c -o ring
cp ./ring /usr/bin/

Single Host

#!/bin/bash
#SBATCH --ntasks=1 # Total number of tasks
#SBATCH --nodes=1 # Total number of nodes requested
#SBATCH --ntasks-per-node=1 # Tasks per node
#SBATCH --mem=2800 # Memory per job in MB
#SBATCH -t 00:30:00 # Run time (hh:mm:ss) - (max 48h)
#SBATCH --partition=taskp # Submit queue
#SBATCH -A testproj # Accounting project

# Singularity: run the ring example with 4 MPI processes inside the container
singularity exec centos7_mpi.img mpirun -np 4 /usr/bin/ring

### Example output
Process 0 sending 10 to 1, tag 201 (4 processes in ring)
Process 0 sent to 1
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Process 0 decremented value: 3
Process 0 decremented value: 2
Process 0 decremented value: 1
Process 0 decremented value: 0
Process 0 exiting
Process 1 exiting
Process 2 exiting
Process 3 exiting

Multiple Hosts

Run the MPI program within the container by calling mpirun on the host.

Note

Native Singularity support is expected as of Open MPI 2.1.

#!/bin/bash
#SBATCH --job-name=singularity # Job name
#SBATCH --output=singularity.%j.out # Stdout (%j expands to jobId)
#SBATCH --error=singularity.%j.err # Stderr (%j expands to jobId)
#SBATCH --ntasks=2 # Total number of tasks
#SBATCH --nodes=2 # Total number of nodes requested
#SBATCH --ntasks-per-node=1 # Tasks per node
#SBATCH --cpus-per-task=1 # Threads per task(=1) for pure MPI
#SBATCH -t 00:30:00 # Run time (hh:mm:ss) - (max 48h)
#SBATCH --partition=taskp # Submit queue
#SBATCH -A testproj # Accounting project

# Load any necessary modules

module purge
module load gnu/4.9.2
module load openmpi/2.1.2/gnu

singularity exec centos7_mpi.img whoami
# Launch the executable (srun not working for this setup)
mpirun -np 2 singularity exec centos7_mpi.img hostname
mpirun -np 2 singularity exec centos7_mpi.img /usr/bin/ring

### Example output
Process 0 sending 10 to 1, tag 201 (2 processes in ring)
Process 0 sent to 1
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Process 0 decremented value: 3
Process 0 decremented value: 2
Process 0 decremented value: 1
Process 0 decremented value: 0
Process 0 exiting
Process 1 exiting
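
After the job finishes, this output is written to the file named by the --output directive (singularity.%j.out, where %j expands to the job id); for example:

cat singularity.123456.out   # 123456 is a placeholder job id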