Modules and Containers

Regular (non-administrator) users do not have the permissions to install software system-wide. To have a program installed, they normally have to coordinate with the cluster administrator. Compiling and installing the required software manually¹ in the user's home directory is possible, but it is a time-consuming and cumbersome task. In this section, we explore two approaches that make it easier to use the diverse software packages commonly found in supercomputing.

Environment Modules

The first approach uses environment modules, which make additional software packages available to users. The modules are usually prepared and installed by the cluster administrator, who also adds them to the module directory. Users can then activate or deactivate them with the module load and module unload commands. Modules can provide different versions of the same software, including builds with or without support for graphics accelerators. The full list of available modules can be displayed with the module avail and module spider commands.

$ module spider
--------------------------------------------------------------------------
The following is a list of the modules and extensions currently available:
--------------------------------------------------------------------------
Anaconda3: Anaconda3/5.3.0
  Built to complement the rich, open source Python community, the Anaconda platform provides an
  enterprise-ready data analytics platform that empowers companies to adopt a modern open data
  science analytics architecture.

Autoconf: Autoconf/2.69-GCCcore-7.3.0, ...
  Autoconf is an extensible package of M4 macros that produce shell scripts to automatically
  configure software source code packages. These scripts can adapt the packages to many kinds of
  UNIX-like systems without manual user intervention. Autoconf creates a configuration script for
  a package from a template file that lists the operating system features that the package can
  use, in the form of M4 macro calls. 
...
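
The listing can be narrowed down to a single package by passing its name as an argument to either command. For example (the output is omitted here, as it depends on the modules installed on the cluster):

$ module avail FFmpeg
$ module spider FFmpeg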

During the workshop, we will need the FFmpeg module. It is already installed on the NSC cluster; we just need to load it:

$ module load FFmpeg

To view the modules we have loaded, we can use the module list command:

$ module list
Currently Loaded Modules:
1) GCCcore/11.2.0                 8) x265/3.5-GCCcore-11.2.0            15) xorg-macros/1.19.3-GCCcore-11.2.0
2) NASM/2.15.05-GCCcore-11.2.0    9) expat/2.4.1-GCCcore-11.2.0         16) libpciaccess/0.16-GCCcore-11.2.0
3) zlib/1.2.11-GCCcore-11.2.0    10) libpng/1.6.37-GCCcore-11.2.0       17) X11/20210802-GCCcore-11.2.0
4) bzip2/1.0.8-GCCcore-11.2.0    11) Brotli/1.0.9-GCCcore-11.2.0        18) FriBidi/1.0.10-GCCcore-11.2.0
5) x264/20210613-GCCcore-11.2.0  12) freetype/2.11.0-GCCcore-11.2.0     19) FFmpeg/4.3.2-GCCcore-11.2.0
6) ncurses/6.2-GCCcore-11.2.0    13) util-linux/2.37-GCCcore-11.2.0
7) LAME/3.100-GCCcore-11.2.0     14) fontconfig/2.13.94-GCCcore-11.2.0
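
If several versions of a module are installed, a specific one can be requested by its full name. As an illustration, using the version string shown in the listing above (this is optional here, since that version is the one we already have loaded):

$ module unload FFmpeg
$ module load FFmpeg/4.3.2-GCCcore-11.2.0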

Next, we can run ffmpeg. To warm up, let's check the version of the program:

$ ffmpeg -version
ffmpeg version 4.3.2 Copyright (c) 2000-2021 the FFmpeg developers
    built with gcc 11.2.0 (GCC)
    configuration: --prefix=/ceph/grid/software/modules/software/FFmpeg/4.3.2-GCCcore-11.2.0 --enable-pic --enable-shared --enable-gpl --enable-version3 --enable-nonfree --cc=gcc --cxx=g++ --enable-libx264 --enable-libx265 --enable-libmp3lame --enable-libfreetype --enable-fontconfig --enable-libfribidi
    libavutil      56. 51.100 / 56. 51.100
    libavcodec     58. 91.100 / 58. 91.100
    libavformat    58. 45.100 / 58. 45.100
...

All loaded environment modules can be removed using the module purge command.
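
For example, the following removes everything we loaded above; a subsequent module list should then report that no modules are loaded:

$ module purge
$ module list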

Containers

The disadvantage of modules is that they have to be prepared and installed by the administrator. If that is not possible, an alternative is to package the required program into a container with Apptainer. A container encapsulates not only our program but also all the other software and libraries it relies on. We can create the container on any computer and then transfer it to the cluster.
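
A rough sketch of such a transfer with scp; here <container>.sif, <username>, and <login-node> are placeholders for the image file, your cluster account, and the login node address:

$ scp <container>.sif <username>@<login-node>:~/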

Once we have the container ready, we can use it by prefixing the desired command with apptainer exec <container>. A container for FFmpeg (file ffmpeg_alpine.sif) is available here. Transfer it to the cluster and run it using:

$ apptainer exec ffmpeg_alpine.sif ffmpeg -version

The output shows the FFmpeg version information. The apptainer command starts the ffmpeg_alpine.sif container, and ffmpeg is then executed inside it.
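
The same pattern works for any ffmpeg invocation. A hypothetical example that converts a video file (input.mp4 and output.avi are placeholder file names):

$ apptainer exec ffmpeg_alpine.sif ffmpeg -i input.mp4 output.avi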

The ffmpeg_alpine.sif container can also be built manually. Searching online with keywords such as ffmpeg, container, and docker will likely lead to https://hub.docker.com/r/jrottenberg/ffmpeg/, which offers a wide range of FFmpeg container images. We select the latest version of the smallest image, the one based on Alpine Linux, and build it directly on the login node.

$ apptainer pull docker://jrottenberg/ffmpeg:alpine
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 801bfaa63ef2 skipped: already exists
Copying blob 9b7e8ca952e4 done
Copying blob 98b23fe84856 done
Copying config 51503c1049 done
Writing manifest to image destination
Storing signatures
2021/01/30 17:52:54  info unpack layer: sha256:801bfaa63ef2094d770c809815b9e2b9c1194728e5e754ef7bc764030e140cea
2021/01/30 17:52:54  info unpack layer: sha256:9b7e8ca952e42ed8bf6aebd56e420e40d2637d16b4b79404089adfdca1eb841a
2021/01/30 17:52:55  info unpack layer: sha256:98b23fe84856b3e03df3d02226e002119143da9e2e081408499955ccb8d213df
INFO:    Creating SIF file...
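
By default, apptainer pull names the image after the repository and its tag, so the command above should produce the file ffmpeg_alpine.sif in the current working directory. We can verify this and use the image exactly as before:

$ ls ffmpeg_alpine.sif
$ apptainer exec ffmpeg_alpine.sif ffmpeg -version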

Detailed instructions for creating containers can be found at https://apptainer.org/docs/user/latest/.
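
Containers can also be built from a definition file with apptainer build. The sketch below is only illustrative: it assumes an Alpine base image in which FFmpeg can be installed from the distribution's package repositories, and the file names ffmpeg.def and ffmpeg_custom.sif are placeholders. Depending on the cluster configuration, the build may require the --fakeroot option or may have to be done on a machine where you have administrative rights.

Bootstrap: docker
From: alpine:3.18

%post
    # install FFmpeg from the Alpine package repositories (assumed package name: ffmpeg)
    apk add --no-cache ffmpeg

%runscript
    # run ffmpeg with whatever arguments are passed to the container
    exec ffmpeg "$@"

Saving the recipe as ffmpeg.def, the image is then built with:

$ apptainer build ffmpeg_custom.sif ffmpeg.def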

Several commonly used containers are available to all users on the clusters:

  • On the NSC cluster, they are located in the directory /ceph/grid/singularity-images.
  • On the Maister and Trdina clusters, they are located in the directory /ceph/sys/singularity.
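
The contents of these directories can be listed directly on the cluster, for example on NSC:

$ ls /ceph/grid/singularity-images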

  1. Partial instructions for compiling FFmpeg can be found in the official documentation.