# Deployment Options

Bazzite Pods are standard OCI container images, so they run anywhere OCI-compatible tooling does. Choose the deployment method that fits your environment:

```mermaid
graph LR
    pod[Bazzite Pod<br/>OCI Container Image]

    pod --> desktop["Desktop<br/>Bazzite AI OS"]
    pod --> local["Local Development<br/>Docker / Podman"]
    pod --> ide["IDE Integration<br/>Dev Containers"]
    pod --> service["Systemd Services<br/>Podman Quadlets"]
    pod --> cloud["Production<br/>Kubernetes"]
    pod --> research["Research<br/>HPC + Apptainer"]

    desktop --> ujust[ujust convenience commands]
    local --> laptop[Any Laptop/Workstation<br/>Linux, macOS, Windows]
    ide --> vscode[VS Code]
    ide --> codespaces[GitHub Codespaces]
    ide --> jetbrains[JetBrains IDEs]
    service --> autostart[Auto-start on Login]
    cloud --> eks[AWS EKS]
    cloud --> gke[Google GKE]
    cloud --> aks[Azure AKS]
    research --> slurm[Slurm Clusters]
    research --> pbs[PBS/SGE]
```

## Deployment Methods

| Environment | Description | Guide |
|---|---|---|
| Bazzite AI OS | Desktop with ujust commands | Bazzite AI OS Guide |
| Docker / Podman | Linux, macOS, Windows | Docker/Podman Guide |
| Dev Containers | VS Code, Codespaces, JetBrains | Dev Containers Guide |
| Podman Quadlets | Systemd services, auto-start | Quadlets Guide |
| Kubernetes | Scalable workloads | Kubernetes Guide |
| HPC (Apptainer) | Research clusters, Slurm | HPC Guide |

## Image Registry

All pods are available from GitHub Container Registry:

```
ghcr.io/atrawog/bazzite-ai-pod-<variant>:stable
```
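The tag scheme is uniform across variants, so the full reference can be composed in a script; a minimal sketch (the `variant` value here is just one example from the table below):

```shell
# Compose the registry reference for a chosen variant
variant="nvidia-python"   # any variant name: base, jupyter, devops, ...
image="ghcr.io/atrawog/bazzite-ai-pod-${variant}:stable"
echo "$image"
```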

### Available Variants

| Variant | GPU | Size | Description |
|---|---|---|---|
| `base` | No | ~2GB | Development foundation |
| `nvidia` | Yes | ~3GB | CUDA toolkit |
| `nvidia-python` | Yes | ~6GB | PyTorch ML |
| `jupyter` | Yes | ~11GB | JupyterLab |
| `devops` | No | ~4GB | Cloud tools |
| `playwright` | Optional | ~5GB | Browser automation |
| `githubrunner` | No | ~3GB | CI/CD |

## GPU Support Matrix

| GPU | Docker/Podman | Kubernetes | HPC |
|---|---|---|---|
| NVIDIA | `--gpus all` | NVIDIA Device Plugin | `apptainer --nv` |
| AMD | `--device=/dev/dri` | AMD GPU Operator | `apptainer --rocm` |
| Intel | `--device=/dev/dri` | Intel GPU Plugin | Mount `/dev/dri` |
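The Docker/Podman column can also be selected at runtime; a hedged sketch that probes the host (assumes a working `nvidia-smi` indicates a usable NVIDIA setup, and that `/dev/dri` implies AMD/Intel render nodes):

```shell
# Choose GPU flags for docker/podman based on what the host exposes
gpu_flags=""
if command -v nvidia-smi >/dev/null 2>&1; then
    gpu_flags="--gpus all"              # NVIDIA container toolkit path
elif [ -d /dev/dri ]; then
    gpu_flags="--device=/dev/dri"       # AMD/Intel render nodes
fi
echo "GPU flags: ${gpu_flags:-none (CPU-only)}"
```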

## Quick Examples

### Bazzite AI OS

```shell
ujust apptainer-run-pod-nvidia-python
```

### Docker/Podman

```shell
# NVIDIA GPU
docker run -it --rm --gpus all -v "$(pwd)":/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# CPU-only
docker run -it --rm -v "$(pwd)":/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-devops:stable
```
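Podman accepts the same flags as a drop-in replacement; on hosts with a CDI-enabled NVIDIA setup, `podman run --device nvidia.com/gpu=all` is an alternative to `--gpus all`. A small sketch for picking whichever engine is installed:

```shell
# Prefer podman if present, otherwise fall back to docker
engine=$(command -v podman || command -v docker || true)
echo "Container engine: ${engine:-none found}"
```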

### Kubernetes

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ml-training
spec:
  template:
    spec:
      containers:
      - name: pytorch
        image: ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
        resources:
          limits:
            nvidia.com/gpu: 1
      restartPolicy: OnFailure
```
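Apply the manifest with `kubectl apply -f <file>`. Many clusters taint their GPU nodes so that only GPU workloads schedule onto them; if yours does, the Job's pod spec may also need a toleration (a sketch — the taint key and effect are cluster-specific):

```yaml
# Add under spec.template.spec (taint key varies by cluster setup)
tolerations:
- key: nvidia.com/gpu
  operator: Exists
  effect: NoSchedule
```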

### HPC (Apptainer)

```shell
# Pull once
apptainer pull docker://ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# Run with GPU
apptainer exec --nv bazzite-ai-pod-nvidia-python_stable.sif bash
```
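On Slurm clusters the same `.sif` image is typically launched from a batch script submitted with `sbatch`; a hedged sketch (the job name, resource limits, and `train.py` are placeholders for your own workload):

```shell
#!/bin/bash
#SBATCH --job-name=bazzite-ai-train
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# Run the training script inside the pulled image with NVIDIA support
apptainer exec --nv bazzite-ai-pod-nvidia-python_stable.sif \
    python train.py
```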

### Dev Containers (VS Code)

```shell
# Open project in VS Code, then:
# Command Palette → "Dev Containers: Reopen in Container"
# Select variant: jupyter, devops, base, or githubrunner
```
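If the repository doesn't already ship a dev container definition, a minimal `.devcontainer/devcontainer.json` could look like this (a sketch; swap the image for whichever variant you need):

```json
{
  "name": "bazzite-ai",
  "image": "ghcr.io/atrawog/bazzite-ai-pod-base:stable",
  "workspaceFolder": "/workspace"
}
```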

### Podman Quadlets

```shell
# Create a quadlet file at ~/.config/containers/systemd/jupyter.container,
# then reload systemd and start the generated service:
systemctl --user daemon-reload
systemctl --user start jupyter
```

Quadlet-generated units cannot be enabled with `systemctl enable`; auto-start comes from an `[Install]` section with `WantedBy=default.target` inside the `.container` file itself.
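A minimal `jupyter.container` might look like this (a sketch; the published port and volume path are assumptions based on the JupyterLab defaults elsewhere on this page):

```ini
[Unit]
Description=JupyterLab (Bazzite AI pod)

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
PublishPort=8888:8888
Volume=%h/workspace:/workspace

[Install]
WantedBy=default.target
```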

## Common Patterns

### Volume Mounting

Your working directory mounts at `/workspace`:

```shell
docker run -v "$(pwd)":/workspace ...
```

On SELinux-enforcing hosts (e.g. Fedora-based systems like Bazzite), append `:Z` to relabel the mount for container access: `-v "$(pwd)":/workspace:Z`.

### Credential Mounting

```shell
# AWS credentials
-v ~/.aws:/home/jovian/.aws:ro

# Kubernetes config
-v ~/.kube:/home/jovian/.kube:ro

# SSH keys
-v ~/.ssh:/home/jovian/.ssh:ro
```

### Port Publishing

```shell
# JupyterLab
-p 8888:8888

# VNC (Playwright)
-p 5900:5900
```

## Next Steps