
Bazzite AI OS Deployment

Run AI workloads on Bazzite AI OS using its bundled ujust commands.

Prerequisites

| Step | Command | Description |
|------|---------|-------------|
| 1 | Install Bazzite AI OS | Base installation |
| 2 | `ujust config gpu setup` | GPU container support |
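
After installation, GPU container support can be enabled and then checked in one pass; this sketch only reuses the setup command above and the status command described later under GPU Verification:

# Enable GPU support for containers, then confirm the configuration
ujust config gpu setup
ujust config gpu status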

Pod Services

Ollama (LLM Inference)

| Step | Command | Description |
|------|---------|-------------|
| 1 | `ujust ollama config` | Configure |
| 2 | `ujust ollama start` | Start |
| 3 | `ujust ollama status` | Verify |
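
A minimal end-to-end check, assuming the default instance listens on port 11434 (as noted under Multi-Instance Support) and exposes the standard Ollama HTTP API there:

ujust ollama config
ujust ollama start
ujust ollama status
# Optional: list available models via the Ollama API (assumes curl is installed)
curl http://localhost:11434/api/tags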

Jupyter (Notebooks)

| Step | Command | Description |
|------|---------|-------------|
| 1 | `ujust jupyter config` | Configure |
| 2 | `ujust jupyter start` | Start |
| 3 | `ujust jupyter status` | Verify |
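
The same configure/start/verify flow applies to Jupyter; the port below is an assumption (Jupyter's usual default of 8888), so check the config or status output for the actual address:

ujust jupyter config
ujust jupyter start
ujust jupyter status
# Then open the notebook server in a browser, e.g. http://localhost:8888 (assumed default port)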

ComfyUI (Image Generation)

| Step | Command | Description |
|------|---------|-------------|
| 1 | `ujust comfyui config` | Configure |
| 2 | `ujust comfyui start` | Start |
| 3 | `ujust comfyui status` | Verify |
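
Likewise for ComfyUI; port 8188 is ComfyUI's usual default and is an assumption here, not something this page specifies:

ujust comfyui config
ujust comfyui start
ujust comfyui status
# The web UI is typically served at http://localhost:8188 (assumed default port)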

HPC Containers (Apptainer)

For ML training and development without Podman Quadlets:

| Command | Description |
|---------|-------------|
| `ujust apptainer pull` | Download image |
| `ujust apptainer shell` | Interactive shell |
| `ujust apptainer run` | Run command |
| `ujust apptainer gpu` | GPU detection |

Example workflow:

ujust apptainer pull -i nvidia-python -t stable
ujust apptainer shell -i nvidia-python
# Inside container: run your training
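
A slightly fuller sketch of the same workflow with GPU detection before entering the container; the Python/torch check inside the shell assumes the nvidia-python image bundles PyTorch, which this page does not guarantee:

# Confirm the GPU is visible to Apptainer containers
ujust apptainer gpu

ujust apptainer pull -i nvidia-python -t stable
ujust apptainer shell -i nvidia-python
# Inside the container (assumes the image ships PyTorch):
# python -c "import torch; print(torch.cuda.is_available())"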

Multi-Instance Support

Run multiple instances of any workload:

# First Ollama instance (default, port 11434)
ujust ollama config
ujust ollama start

# Second Ollama instance (port 11435)
ujust ollama config -n 2 --port=11435
ujust ollama start -n 2

# Manage instances
ujust ollama status -n 2
ujust ollama logs -n 2
ujust ollama stop -n 2
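
To confirm each instance is serving on its own port, query the Ollama API directly (assuming it is reachable on localhost at the ports configured above):

# Default instance
curl http://localhost:11434/api/tags
# Second instance
curl http://localhost:11435/api/tags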

GPU Verification

| Command | Description |
|---------|-------------|
| `ujust config gpu status` | GPU config status |
| `ujust apptainer gpu` | GPU in containers |
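
Both checks are non-destructive and can be run back to back, host-level configuration first, then container visibility:

# Host-level GPU configuration
ujust config gpu status
# GPU visibility from inside Apptainer containers
ujust apptainer gpu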

Troubleshooting

Pod Won't Start

| Command | Description |
|---------|-------------|
| `ujust <pod> status` | Check service status |
| `ujust <pod> logs` | View logs |
| `ujust <pod> delete` | Remove and recreate |
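
A typical recovery sequence, shown here with ollama standing in for any pod name; delete removes the pod, so it must be configured and started again afterwards (per the service tables above):

ujust ollama status
ujust ollama logs
# If the pod is wedged, remove it and recreate it
ujust ollama delete
ujust ollama config
ujust ollama start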

GPU Not Detected

| Command | Description |
|---------|-------------|
| `ujust config gpu setup` | Re-run GPU setup |
| `ujust config gpu status` | Verify GPU config |
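
After re-running GPU setup, restarting the affected workload lets it pick up the new configuration; the restart uses the stop/start commands documented above and is a suggested sequence, not a required one:

ujust config gpu setup
ujust config gpu status
# Restart the workload so it picks up the GPU configuration
ujust ollama stop
ujust ollama start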

See Also