Justfile Patterns

This guide documents the patterns and syntax used in Bazzite AI justfiles, including Just 1.46.0 flag syntax.


Interactive vs Non-Interactive Behavior

All ujust commands follow a universal behavior pattern:

| Invocation | Behavior |
|---|---|
| `ujust <command>` (no action) | Show menu of available actions |
| `ujust <command> <action>` | Non-interactive, use defaults |
| `ujust <command> config` | Interactive wizard (only exception) |

Examples

```bash
# Interactive - shows menu
ujust jupyter

# Non-interactive - executes immediately
ujust jupyter start
ujust jupyter start --port=9999
ujust jupyter start -p 9999
```

Configuration Precedence

When a command runs, values are resolved in this priority order:

| Source | Example | Priority |
|---|---|---|
| CLI flags | `--port=9999` | Highest - always wins |
| Saved config | `~/.config/jupyter/1/config` | Second - used if no CLI flag |
| Built-in defaults | `port=8888` in recipe | Lowest - fallback only |
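The precedence rule can be sketched as a small helper. This is an illustration only: the `resolve_port` function and the `port=` config-file format are hypothetical, not the actual implementation.

```bash
# Illustrative sketch of the precedence rule: CLI flag > saved config > built-in default.
resolve_port() {
    local cli_flag="$1" config_file="$2" default="$3"
    if [[ -n "$cli_flag" ]]; then
        echo "$cli_flag"                                # CLI flag always wins
    elif [[ -f "$config_file" ]] && grep -q '^port=' "$config_file"; then
        sed -n 's/^port=//p' "$config_file" | head -n1  # saved config, second priority
    else
        echo "$default"                                 # built-in fallback
    fi
}

resolve_port "9999" ~/.config/jupyter/1/config 8888   # CLI flag present -> 9999
```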

Zero-Friction Path

In interactive config mode, current values are shown. Users can press Enter to keep existing values.
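A minimal sketch of such a prompt, assuming a plain `read` (the real wizard may use a different prompt tool such as `ugum`; `prompt_keep` is a hypothetical helper):

```bash
# Illustrative "press Enter to keep" prompt: empty input keeps the current value.
prompt_keep() {
    local label="$1" current="$2" answer
    read -r -p "$label [$current]: " answer
    echo "${answer:-$current}"   # empty answer falls back to the existing value
}

# port=$(prompt_keep "Port" "8888")   # pressing Enter keeps 8888
```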


Just 1.46.0 Parameter Attributes

All pod commands use Just 1.46.0 native flag syntax via [arg] attributes.

Core Parameter Attributes

Every pod command includes these core attributes:

```just
# Core attributes - ALL pods MUST use these
[arg("config_dir", long="config-dir", short="c")]
[arg("workspace_dir", long="workspace-dir", short="w")]
[arg("bind", long="bind", short="b")]
[arg("port", long="port", short="p")]
[arg("image", long="image", short="i")]
[arg("tag", long="tag", short="t")]
[arg("gpu_type", long="gpu-type", short="g")]
[arg("lines", long="lines", short="l")]
[arg("instance", long="instance", short="n")]
[group("bazzite-ai")]
podname action config_dir="" workspace_dir="" bind="127.0.0.1" port="8888" \
        image="" tag="stable" gpu_type="auto" lines="50" instance="1" *cmd:
```

Core Parameter Table

| Parameter | Long Flag | Short | Default | Description |
|---|---|---|---|---|
| `action` | (positional) | - | required | Command action (`config`, `start`, `stop`, etc.) |
| `config_dir` | `--config-dir` | `-c` | `""` | Pod config directory |
| `workspace_dir` | `--workspace-dir` | `-w` | `""` | Workspace mount |
| `bind` | `--bind` | `-b` | `127.0.0.1` | Bind address |
| `port` | `--port` | `-p` | service-specific | Service port |
| `image` | `--image` | `-i` | service-specific | Container image |
| `tag` | `--tag` | `-t` | `stable` | Image tag |
| `gpu_type` | `--gpu-type` | `-g` | `auto` | GPU type: `auto`/`nvidia`/`amd`/`intel`/`none` |
| `lines` | `--lines` | `-l` | `50` | Log lines to show |
| `instance` | `--instance` | `-n` | `1` | Instance number |
| `cmd` | (variadic `*cmd`) | - | `""` | Shell command (use `--` separator) |

Rule

action is ALWAYS positional first, *cmd is ALWAYS variadic last.

User Invocation Examples

```bash
# Basic operations
ujust ollama config                              # All defaults
ujust ollama start --model=llama3.2              # Override model
ujust ollama start -m llama3.2 -p 11435          # Short form
ujust openwebui start --port=3001 --bind=0.0.0.0 # Override port and bind

# Override config location
ujust ollama config -c /mnt/fast-nvme/ollama

# View available options
just --usage ollama
```

Service-Specific Attributes

Ollama

```just
[arg("model", long="model", short="m")]
[arg("prompt", long="prompt")]
[arg("context_length", long="context-length")]
```

| Parameter | Long | Short | Default | Description |
|---|---|---|---|---|
| `model` | `--model` | `-m` | `qwen3:4b` | Model name for inference |
| `prompt` | `--prompt` | - | `say hi` | Prompt for `run` action |
| `context_length` | `--context-length` | - | `8192` | Context window size |

ComfyUI

```just
[arg("models_dir", long="models-dir")]
[arg("output_dir", long="output-dir")]
[arg("input_dir", long="input-dir")]
[arg("nodes_dir", long="nodes-dir")]
[arg("model_url", long="model-url")]
[arg("model_type", long="model-type")]
[arg("node_url", long="node-url")]
```

| Parameter | Long | Description |
|---|---|---|
| `models_dir` | `--models-dir` | Path for SD models |
| `output_dir` | `--output-dir` | Path for generated images |
| `input_dir` | `--input-dir` | Path for input images |
| `nodes_dir` | `--nodes-dir` | Path for custom nodes |
| `model_url` | `--model-url` | CivitAI model ID or URL |
| `model_type` | `--model-type` | Model type: `checkpoint`, `lora`, `vae` |
| `node_url` | `--node-url` | Git URL for custom nodes |

GitHub Runners

```just
[arg("repo_url", long="repo-url")]
```

| Parameter | Long | Description |
|---|---|---|
| `repo_url` | `--repo-url` | GitHub repository URL |

Shell Command Pattern

All pod commands support shell access via the variadic `*cmd` parameter and the `--` separator:

```bash
# Interactive shell (no command)
ujust ollama shell

# Execute command in container (ALWAYS use -- separator)
ujust ollama shell -- nvidia-smi
ujust ollama shell -- ls -la
ujust jupyter shell -- pip list --outdated
ujust jupyter shell -- python -c "print('hello')"
```

Required Separator

All shell commands MUST use the `--` separator. This prevents ambiguity between Just's own flags and the container command's flags.

Implementation Pattern

```just
[arg("instance", long="instance", short="n")]
podname action instance="1" *cmd:
    #!/usr/bin/bash
    case "$action" in
        shell)
            if [[ -z "{{ cmd }}" ]]; then
                podman exec -it "podname-{{ instance }}" /bin/bash
            else
                podman exec "podname-{{ instance }}" {{ cmd }}
            fi
            ;;
    esac
```

Pod Image Types

Bazzite AI Generated Images

Custom-built with pre-configured environments:

| Pod | Default Image | Default Tag |
|---|---|---|
| ollama | `ghcr.io/atrawog/bazzite-ai-pod-ollama` | `stable` |
| jupyter | `ghcr.io/atrawog/bazzite-ai-pod-jupyter` | `stable` |
| comfyui | `ghcr.io/atrawog/bazzite-ai-pod-comfyui` | `stable` |

Upstream Docker Images

Third-party images used as-is:

| Pod | Default Image | Default Tag |
|---|---|---|
| openwebui | `ghcr.io/open-webui/open-webui` | `main` |
| jellyfin | `docker.io/jellyfin/jellyfin` | `latest` |
| fiftyone | `docker.io/voxel51/fiftyone` | `latest` |
| localai | `localai/localai` | `latest-gpu-*` (auto-selected) |

CONFIG_DIR Mount Mappings

Pods that persist application configuration:

| Pod | CONFIG_DIR Default | Container Mount Path |
|---|---|---|
| ollama | `~/.config/ollama/{INSTANCE}/` | `/home/jovian/.ollama` |
| openwebui | `~/.config/openwebui/{INSTANCE}/` | `/app/backend/data` |
| fiftyone | `~/.config/fiftyone/{INSTANCE}/` | `/fiftyone` |
| localai | `~/.config/localai/{INSTANCE}/` | `/models` |

Pods without CONFIG_DIR (use WORKSPACE_DIR only):

  • jupyter
  • comfyui
  • jellyfin
  • runners

GPU Auto-Detection

All pod commands auto-detect and attach available GPUs:

```just
# In quadlet generation
GPU_TYPE=$(just --justfile {{ justfile() }} _pod-detect-gpu)
```

GPU Types

| GPU Type | Passthrough Method |
|---|---|
| `nvidia` | CDI (`nvidia.com/gpu=all`) |
| `amd` | Device passthrough (`/dev/dri`) |
| `intel` | Device passthrough (`/dev/dri`) |
| `none` | No GPU attached |
| `auto` | Auto-detect best option |

Detection Logic

  1. Check for NVIDIA GPU via nvidia-smi
  2. Check for AMD/Intel GPU via /dev/dri
  3. Fall back to none if no GPU found
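The three steps above could be sketched like this. It is a simplified illustration, not the actual `_pod-detect-gpu` recipe; in particular it lumps AMD and Intel together instead of distinguishing them by vendor ID.

```bash
# Simplified sketch of the detection order: NVIDIA first, then /dev/dri, then none.
detect_gpu() {
    if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
        echo nvidia              # working NVIDIA driver found
    elif compgen -G '/dev/dri/renderD*' >/dev/null 2>&1; then
        echo dri                 # AMD or Intel render node present
    else
        echo none                # no GPU: fall back to CPU-only
    fi
}

detect_gpu
```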

Network Binding

Security Pattern

| Bind | Access | Security |
|---|---|---|
| `127.0.0.1` | Localhost only | Default (secure) |
| `0.0.0.0` | All interfaces | Requires explicit flag |

Usage

```bash
# Localhost only (default)
ujust jupyter start

# All interfaces (explicit)
ujust jupyter start --bind=0.0.0.0
ujust jupyter start -b 0.0.0.0
```

bazzite-ai Network

Pods requiring cross-pod communication use the bazzite-ai network:

```ini
# In generated quadlet
Network=bazzite-ai.network
NetworkAlias=openwebui
Environment=OLLAMA_BASE_URL=http://ollama:11434
```

Multi-Container Patterns

Sidecar Containers

For services requiring companion containers (e.g., FiftyOne + MongoDB):

```
# Naming convention
fiftyone-1.container          # Primary (app)
fiftyone-mongodb-1.container  # Sidecar
```

Sidecar Quadlet Requirements:

  • Wants={sidecar}.service in primary quadlet
  • Same network (bazzite-ai.network)
  • NetworkAlias for service discovery
  • Shared volume mounts where needed
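Putting those requirements together, the primary quadlet could look roughly like this. This is a hand-written sketch, not a generated file: the unit names are illustrative and only the image and network come from the tables above.

```ini
# fiftyone-1.container (sketch - unit names are illustrative)
[Unit]
After=network-online.target fiftyone-mongodb-1.service
Wants=fiftyone-mongodb-1.service

[Container]
Image=docker.io/voxel51/fiftyone:latest
Network=bazzite-ai.network
NetworkAlias=fiftyone
```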

Multi-Container Stack

For applications with multiple interdependent containers:

```ini
# Dependency ordering in quadlets:
# After=network-online.target fiftyone-mongodb-1.service
# Wants=fiftyone-mongodb-1.service
```

Stack Commands:

| Command | Behavior |
|---|---|
| `ujust {pod} start` | Start all containers in dependency order |
| `ujust {pod} stop` | Stop all containers (reverse order) |
| `ujust {pod} logs` | Interleaved logs from all containers |
| `ujust {pod} status` | Status of all containers |
| `ujust {pod} delete` | Remove all containers and configs |

Interleaved Logs Format:

```
[fiftyone-mongodb] 2024-01-09 10:00:01 MongoDB started
[fiftyone] 2024-01-09 10:00:02 Connecting to database...
[fiftyone] 2024-01-09 10:00:03 FiftyOne ready on port 5151
```
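The interleaving can be approximated by tagging each container's log stream with its name, e.g. piping `podman logs -f` through a small filter. `prefix_logs` is a hypothetical helper, not the actual implementation:

```bash
# Hypothetical helper: tag every log line with the container it came from.
prefix_logs() {
    local name="$1" line
    while IFS= read -r line; do
        printf '[%s] %s\n' "$name" "$line"
    done
}

echo "MongoDB started" | prefix_logs fiftyone-mongodb   # -> [fiftyone-mongodb] MongoDB started
```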

Tool Commands (Non-Pod)

Some ujust commands are tools rather than pod services:

Apptainer (HPC Container Management)

```just
[arg("image", long="image", short="i")]
[arg("tag", long="tag", short="t")]
[group("bazzite-ai")]
apptainer action image="" tag="" *cmd:
```

| Parameter | Long | Short | Description |
|---|---|---|---|
| `action` | (positional) | - | `pull`, `run`, `shell`, `exec`, `build`, `inspect`, `gpu`, `cache` |
| `image` | `--image` | `-i` | SIF file, image name, or DEF file |
| `tag` | `--tag` | `-t` | Image tag, output file, or cache subaction |
| `cmd` | (variadic) | - | Command to execute |

Tailscale (Network Service Exposure)

```just
[arg("service", long="service", short="s")]
[arg("port", long="port", short="p")]
[group("bazzite-ai")]
tailscale action service="" port="":
```

| Parameter | Long | Short | Description |
|---|---|---|---|
| `action` | (positional) | - | `serve`, `unserve`, `status`, `list` |
| `service` | `--service` | `-s` | Service name or port number |
| `port` | `--port` | `-p` | Tailscale HTTPS port to expose |

Entry Point Menu Pattern

When run without an action, show available actions:

```just
recipe ACTION="":
    #!/usr/bin/bash
    if [[ -t 0 ]] && [[ -z "$ACTION" ]]; then
        ACTION=$(ugum choose "config" "start" "stop" "logs" "status")
    fi
    case "$ACTION" in
        config) ... ;;
        start) ... ;;
        *) echo "Unknown action: $ACTION" >&2; exit 1 ;;
    esac
```

Complete Recipe Example

```just
# Ollama recipe with Just 1.46.0 flag syntax
[arg("config_dir", long="config-dir", short="c")]
[arg("workspace_dir", long="workspace-dir", short="w")]
[arg("bind", long="bind", short="b")]
[arg("port", long="port", short="p")]
[arg("image", long="image", short="i")]
[arg("tag", long="tag", short="t")]
[arg("gpu_type", long="gpu-type", short="g")]
[arg("lines", long="lines", short="l")]
[arg("model", long="model", short="m")]
[arg("prompt", long="prompt")]
[arg("context_length", long="context-length")]
[arg("instance", long="instance", short="n")]
[group("bazzite-ai")]
ollama action config_dir="" workspace_dir="" bind="127.0.0.1" port="11434" \
       image="ghcr.io/atrawog/bazzite-ai-pod-ollama" tag="stable" \
       gpu_type="auto" lines="50" model="qwen3:4b" prompt="say hi" \
       context_length="8192" instance="1" *cmd:
    #!/usr/bin/bash
    set -euo pipefail
    # Just handles flag parsing natively - no manual parsing needed
    # Apply defaults for empty values
    instance="${instance:-1}"
    config_dir="${config_dir:-$HOME/.config/ollama/${instance}}"
    # Route to action
    case "$action" in
        shell) just --justfile {{ justfile() }} _ollama-shell "{{ cmd }}" "$instance" ;;
        # ... other actions
    esac
```