Apptainer - HPC Container Management¶
Overview¶
The apptainer command manages Apptainer (formerly Singularity) containers for HPC-compatible workloads. It provides SIF image management with automatic GPU detection.
Key Concept: Apptainer is the de facto container standard on HPC systems. Unlike Docker/Podman, containers run as the invoking user (no root), and images ship as single SIF files.
Quick Reference¶
| Action | Command | Description |
|---|---|---|
| Build | ujust apptainer build DEF | Build SIF from definition file |
| Cache | ujust apptainer cache [clean\|status] | Manage Apptainer cache |
| Exec | ujust apptainer exec IMAGE CMD | Execute specific command in container |
| Inspect | ujust apptainer inspect IMAGE | Show SIF file metadata |
| Pull | ujust apptainer pull IMAGE | Download container image to SIF file |
| Run | ujust apptainer run IMAGE | Run container with default command |
| Shell | ujust apptainer shell [-- CMD] | Open interactive shell in container |
Parameters¶
| Parameter | Long Flag | Short | Default | Description |
|---|---|---|---|---|
| action | (positional) | - | required | Action: pull, run, shell, exec, build, inspect, gpu, cache |
| image | --image | -i | "" | SIF file path, image name, or DEF file |
| tag | --tag | -t | "" | Image tag, output file, or cache subaction |
| cmd | (variadic) | - | "" | Command to execute (use -- separator) |
Pull Images¶
bazzite-ai Pod Images¶
# Pull nvidia-python (long form)
ujust apptainer pull --image=nvidia-python
# Pull with tag (long form)
ujust apptainer pull --image=nvidia-python --tag=testing
# Pull nvidia-python (short form)
ujust apptainer pull -i nvidia-python
# Pull with tag (short form)
ujust apptainer pull -i nvidia-python -t testing
# Pull jupyter
ujust apptainer pull --image=jupyter --tag=stable
External Images¶
# Docker Hub
ujust apptainer pull --image=docker://ubuntu:22.04
# NVIDIA NGC
ujust apptainer pull --image=docker://nvcr.io/nvidia/pytorch:latest
# Sylabs Cloud
ujust apptainer pull --image=library://sylabsed/examples/lolcow
Pull Output¶
Images are saved as SIF files in the current directory, typically named <image>_<tag>.sif (for example, pytorch_23.10-py3.sif).
Run Containers¶
Run with Default Command¶
# Run nvidia-python (long form)
ujust apptainer run --image=nvidia-python
# Run nvidia-python (short form)
ujust apptainer run -i nvidia-python
# Run specific SIF file
ujust apptainer run --image=./my-container.sif
Run with Command¶
# Run Python in container (use -- separator for commands)
ujust apptainer run --image=nvidia-python -- python
# Run script
ujust apptainer run --image=nvidia-python -- python script.py
# Short form
ujust apptainer run -i nvidia-python -- python train.py
GPU Auto-Detection¶
GPU flags are auto-detected:
- NVIDIA: Adds --nv
- AMD: Adds --rocm
# GPU is automatically enabled
ujust apptainer run --image=nvidia-python -- python -c "import torch; print(torch.cuda.is_available())"
Interactive Shell¶
# Shell into container (long form)
ujust apptainer shell --image=nvidia-python
# Shell into container (short form)
ujust apptainer shell -i nvidia-python
# Now inside container
python --version
nvidia-smi
exit
Execute Commands¶
# Execute single command (use -- separator)
ujust apptainer exec --image=nvidia-python -- pip list
# Execute Python one-liner
ujust apptainer exec -i nvidia-python -- python -c 'print(1+1)'
Build from Definition¶
Definition File Example¶
Bootstrap: docker
From: ubuntu:22.04
%post
apt-get update
apt-get install -y python3 python3-pip
%runscript
python3 "$@"
Build¶
# Build SIF from definition (image=DEF, tag=OUTPUT)
ujust apptainer build --image=mydef.def --tag=myimage.sif
# Build to default location
ujust apptainer build --image=mydef.def
# Short form
ujust apptainer build -i mydef.def -t myimage.sif
GPU Support¶
Test GPU¶
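The gpu action listed in the parameters reports whether Apptainer can see a GPU; the output depends on your hardware and drivers.
# Report GPU detection status
ujust apptainer gpu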
GPU Flags¶
| GPU | Flag | Auto-Detection |
|---|---|---|
| NVIDIA | --nv | Yes |
| AMD | --rocm | Yes |
| Intel | (none yet) | No |
Manual GPU Override¶
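If auto-detection misses your hardware, the underlying Apptainer flags can be passed explicitly. A minimal sketch that bypasses the ujust wrapper and calls apptainer directly (the SIF path is an example):
# Force NVIDIA GPU support for a local SIF file
apptainer run --nv ./my-container.sif
# Force AMD ROCm support
apptainer run --rocm ./my-container.sif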
Cache Management¶
List Cache¶
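Following the quick-reference form, the status subaction shows what is currently cached:
# Show cache status
ujust apptainer cache status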
Clean Cache¶
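Use the clean subaction to free the space:
# Remove cached layers and downloads
ujust apptainer cache clean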
Cache is stored in ~/.apptainer/cache/.
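To see how much disk space the cache is actually using, a plain shell one-liner against that directory works:
# Show total cache size on disk
du -sh ~/.apptainer/cache/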
Common Workflows¶
HPC Development¶
# Pull HPC-ready image
ujust apptainer pull --image=nvidia-python
# Test GPU
ujust apptainer gpu
# Development shell
ujust apptainer shell --image=nvidia-python
# Run production workload
ujust apptainer run --image=nvidia-python -- python train.py
Use NGC Images¶
# Pull NVIDIA PyTorch
ujust apptainer pull --image=docker://nvcr.io/nvidia/pytorch:23.10-py3
# Run training
ujust apptainer run --image=pytorch_23.10-py3.sif -- python train.py
Build Custom Image¶
# Create definition file
cat > myenv.def << 'EOF'
Bootstrap: docker
From: python:3.11
%post
pip install numpy pandas scikit-learn
%runscript
python "$@"
EOF
# Build
ujust apptainer build --image=myenv.def --tag=myenv.sif
# Test
ujust apptainer run --image=myenv.sif -- python -c "import numpy; print(numpy.__version__)"
Apptainer vs Docker/Podman¶
| Feature | Apptainer | Docker/Podman |
|---|---|---|
| Root required | No | Sometimes |
| Single file | Yes (SIF) | No (layers) |
| HPC compatible | Yes | Limited |
| GPU support | --nv, --rocm | nvidia-docker |
| Security model | User namespace | Container namespace |
Use Apptainer when:
- Running on HPC clusters
- Need single-file portability
- Can't run as root
- Need reproducibility
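To see the user-level security model from the comparison above in practice, check your identity inside a container; a small sketch reusing the exec syntax from this page (assumes you have pulled nvidia-python):
# id inside the container reports your own user, not root
ujust apptainer exec --image=nvidia-python -- id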
Troubleshooting¶
Pull Failed¶
Check: the image reference is spelled correctly and uses the right prefix (docker:// for Docker Hub and NGC, library:// for Sylabs Cloud), and the registry is reachable from your network.
Fix: retry the pull; if a partial download is stuck in the cache, run ujust apptainer cache clean and pull again.
GPU Not Available¶
Check: ujust apptainer gpu detects your hardware, and the driver works on the host (for example, nvidia-smi succeeds outside the container).
Fix: run ujust config gpu setup (see Cross-References), then retry in the container.
SIF File Corrupted¶
Fix: delete the damaged SIF file and pull or build it again; ujust apptainer inspect IMAGE should then show readable metadata.
Cache Too Large¶
Check: ujust apptainer cache status (the cache lives in ~/.apptainer/cache/).
Fix: ujust apptainer cache clean.
Cross-References¶
- Related Skills: pod (build OCI images), jupyter (uses containers)
- GPU Setup: ujust config gpu setup
- Apptainer Docs: https://apptainer.org/docs/
When to Use This Skill¶
Use when the user asks about:
- "apptainer", "singularity", "HPC container"
- "SIF file", "pull image", "build container"
- "apptainer GPU", "run with GPU"
- "HPC workload", "cluster container"