
Podman Quadlets

Run any Bazzite Pod as a systemd user service using Podman Quadlets.

What are Quadlets?

Quadlets are systemd unit files that define Podman containers. Instead of running podman run by hand, you let systemd manage your containers; a comparison with a manual podman run follows the benefits list below.

Benefits:

  • Auto-start on login - Containers start automatically when you log in
  • Restart policies - Automatic restart on failure
  • Service management - Use familiar systemctl commands
  • Integration - Works with systemd timers, dependencies, and monitoring
  • No daemon - Containers run directly under systemd
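
For comparison, the jupyter.container file from the Quick Start below replaces a manual invocation roughly like this (a sketch; exact flags depend on your setup, and restart handling moves from Podman to systemd):

podman run -d \
  --name jupyter \
  --restart always \
  -p 8888:8888 \
  -v ~/notebooks:/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable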

Prerequisites

  • Podman 4.4+ (Fedora 38+, Ubuntu 23.04+, RHEL 9+, or Bazzite AI OS)
  • Systemd user session (default on most desktop Linux)

Check your Podman version:

podman --version
# Example output: podman version 5.0.0 (anything 4.4 or newer works)

For headless servers (SSH-only):

# Enable lingering for user services without login
loginctl enable-linger $USER

Quick Start

1. Create Quadlet Directory

mkdir -p ~/.config/containers/systemd

2. Create a Quadlet File

Create ~/.config/containers/systemd/jupyter.container:

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort=8888:8888
Volume=%h/notebooks:/workspace

[Service]
Restart=always

[Install]
WantedBy=default.target

3. Reload and Start

# Reload systemd to detect new quadlet
systemctl --user daemon-reload

# Start the service (auto-start on login comes from the [Install] section;
# quadlet-generated units cannot be enabled with systemctl)
systemctl --user start jupyter

# Check status
systemctl --user status jupyter

4. Access Your Service

Open http://localhost:8888 for JupyterLab.
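
If the page does not load right away, the server may still be starting; a quick probe (plain curl, nothing image-specific) confirms something is listening:

curl -sI http://localhost:8888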

Example Quadlets for Each Pod

nvidia-python (ML Training)

~/.config/containers/systemd/ml-training.container:

[Unit]
Description=ML Training Environment

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
Volume=%h/ml-projects:/workspace
AddDevice=nvidia.com/gpu=all
Environment=CUDA_VISIBLE_DEVICES=0

[Service]
Restart=on-failure
RestartSec=30

[Install]
WantedBy=default.target

Usage:

systemctl --user start ml-training
podman exec -it systemd-ml-training bash
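
To confirm the container actually sees the GPU, run nvidia-smi inside it (this assumes the NVIDIA CDI setup described later on this page):

podman exec systemd-ml-training nvidia-smi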

jupyter (JupyterLab)

~/.config/containers/systemd/jupyter.container:

[Unit]
Description=JupyterLab Server

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort=8888:8888
Volume=%h/notebooks:/workspace
AddDevice=nvidia.com/gpu=all

[Service]
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Access: http://localhost:8888

devops (Cloud Tools)

~/.config/containers/systemd/devops.container:

[Unit]
Description=DevOps Tools Environment

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-devops:stable
Volume=%h/infrastructure:/workspace
Volume=%h/.aws:/home/jovian/.aws:ro
Volume=%h/.kube:/home/jovian/.kube:ro
Volume=%h/.ssh:/home/jovian/.ssh:ro

[Service]
# Don't restart - interactive use
Restart=no

[Install]
WantedBy=default.target

Usage:

systemctl --user start devops
podman exec -it systemd-devops bash
# Run kubectl, aws, helm commands...
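
Individual commands can also be run without opening a shell; for example (kubectl is only illustrative, any tool in the image works the same way):

podman exec systemd-devops kubectl get nodes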

playwright (Browser Automation)

~/.config/containers/systemd/playwright.container:

[Unit]
Description=Playwright Browser Automation

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-playwright:stable
PublishPort=5900:5900
Volume=%h/automation:/workspace

[Service]
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Access VNC: Connect to localhost:5900 with a VNC client.
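
Any VNC client works; with TigerVNC installed, for example:

vncviewer localhost:5900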

githubrunner (CI/CD Runner)

~/.config/containers/systemd/github-runner.container:

[Unit]
Description=GitHub Actions Runner

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-githubrunner:stable
Volume=%h/.config/github-runners:/config:ro
AddDevice=nvidia.com/gpu=all
Environment=REPO_URL=https://github.com/owner/repo
Environment=RUNNER_NAME=my-runner

[Service]
Restart=always
RestartSec=30

[Install]
WantedBy=default.target

Note: See just/bazzite-ai/lib/github-runner-quadlet.just for advanced configuration with auto-token management.

Quadlet File Reference

[Container] Section

Option        Description                                    Example
Image         Container image                                ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort   Port mapping (host:container)                  8888:8888
Volume        Volume mount                                   %h/data:/workspace
AddDevice     Device passthrough (CDI)                       nvidia.com/gpu=all
Environment   Environment variable                           CUDA_VISIBLE_DEVICES=0
Exec          Command to run (overrides the image default)   python /workspace/train.py
User          Run as user                                    jovian

Special variables (see the example after this list):

  • %h - User's home directory
  • %u - Username
  • %U - User UID
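
For example, for a user named alice, %h expands to /home/alice when systemd starts the generated service (%u and %U behave analogously); a hypothetical line:

[Container]
# For user alice, %h expands to /home/alice, so this mounts /home/alice/data
Volume=%h/data:/workspace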

[Service] Section

Option            Description               Values
Restart           Restart policy            no, always, on-failure
RestartSec        Delay between restarts    10 (seconds)
TimeoutStartSec   Startup timeout           300 (seconds)
TimeoutStopSec    Shutdown timeout          120 (seconds)

[Install] Section

Option     Description      Value
WantedBy   When to start    default.target (on login)

GPU Support with CDI

Container Device Interface (CDI) enables GPU access in rootless containers.

Check CDI Support

# Show the directories Podman searches for CDI specs
podman info --format '{{.Host.CDISpecDirs}}'

# Or check CDI spec files
ls /etc/cdi/ /var/run/cdi/ 2>/dev/null

NVIDIA GPU Configuration

On Bazzite AI OS or systems with nvidia-container-toolkit:

# Generate CDI spec (run once)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

Then use in quadlet:

[Container]
AddDevice=nvidia.com/gpu=all

Specific GPU Selection

[Container]
# First GPU only
AddDevice=nvidia.com/gpu=0

# Multiple specific GPUs
AddDevice=nvidia.com/gpu=0
AddDevice=nvidia.com/gpu=1
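
To see which GPU indices and device names are actually exposed through CDI (assuming the nvidia-container-toolkit is installed):

nvidia-ctk cdi list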

Health Checks

Add health monitoring to your quadlets:

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort=8888:8888
Volume=%h/notebooks:/workspace
HealthCmd=curl -sf http://localhost:8888/api || exit 1
HealthInterval=30s
HealthTimeout=10s
HealthRetries=3
HealthStartPeriod=60s

[Service]
Restart=always
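
Once the service is running, the health state shows up in podman ps, and the check can be triggered by hand (assuming the container name systemd-jupyter from the quadlet above):

# STATUS column shows (healthy) or (unhealthy)
podman ps --filter name=systemd-jupyter

# Run the health check once; the exit code reflects the result
podman healthcheck run systemd-jupyter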

Managing Quadlet Services

Status and Logs

# Check service status
systemctl --user status jupyter

# Follow logs
journalctl --user -u jupyter -f

# View recent logs
journalctl --user -u jupyter -n 100

Start/Stop/Restart

# Start service
systemctl --user start jupyter

# Stop service
systemctl --user stop jupyter

# Restart service
systemctl --user restart jupyter

Enable/Disable Auto-Start

Quadlet-generated units cannot be enabled or disabled with systemctl; auto-start on login is controlled by the [Install] section of the .container file.

# Auto-start on login: keep this section in the quadlet file
[Install]
WantedBy=default.target

# Disable auto-start: remove the [Install] section, then reload
systemctl --user daemon-reload

# One-time start (no [Install] section needed)
systemctl --user start jupyter

Interactive Access

# Get a shell in running container
podman exec -it systemd-jupyter bash

# The container name is systemd-<quadlet-name>
podman exec -it systemd-devops bash

Remove a Quadlet

# Stop the service
systemctl --user stop jupyter

# Remove quadlet file
rm ~/.config/containers/systemd/jupyter.container

# Reload
systemctl --user daemon-reload

Advanced Configuration

Multiple Instances with Templates

Create a template quadlet ~/.config/containers/systemd/jupyter@.container:

[Unit]
Description=JupyterLab Instance %i

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort=%i:8888
Volume=%h/notebooks-%i:/workspace

[Service]
Restart=always

[Install]
WantedBy=default.target

Usage:

# Start an instance on port 8888
systemctl --user start jupyter@8888

# Start another on port 8889
systemctl --user start jupyter@8889
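
To list every running instance of the template:

systemctl --user list-units 'jupyter@*'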

Dependencies

Start after another service:

[Unit]
Description=ML Training
After=jupyter.service
Requires=jupyter.service

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
...

Resource Limits

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
PodmanArgs=--memory=16g --cpus=4
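
To check that the limits took effect on the running container (the name below assumes this quadlet file is called ml-training.container):

podman stats --no-stream systemd-ml-training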

Troubleshooting

Service fails to start

# Check detailed logs
journalctl --user -u jupyter -n 50

# Check if image exists
podman images | grep jupyter

# Pull image manually
podman pull ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable

GPU not available

# Verify CDI spec exists
ls /etc/cdi/nvidia.yaml

# Regenerate CDI spec
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Test GPU access
podman run --rm --device nvidia.com/gpu=all \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable nvidia-smi

Port already in use

# Check what's using the port
ss -tlnp | grep 8888

# Use a different port in quadlet
PublishPort=9999:8888

Container name conflicts

Quadlet container names are systemd-<filename>. If you have conflicts:

# Remove orphaned containers
podman rm -f systemd-jupyter

# Reload and restart
systemctl --user daemon-reload
systemctl --user restart jupyter

User services not starting on boot

# Enable lingering (for SSH-only access)
loginctl enable-linger $USER

# Verify
ls /var/lib/systemd/linger/
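
loginctl can also report the linger flag directly:

loginctl show-user $USER --property=Linger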

See Also