jupyter Pod¶
Standard OCI container - works with Docker, Podman, Kubernetes, Apptainer.
The jupyter pod provides a JupyterLab server for interactive data science and ML development, with full GPU support inherited from nvidia-python.
Overview¶
| Attribute | Value |
|---|---|
| Image | ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable |
| Size | ~11GB |
| GPU | NVIDIA (CUDA 12.4) |
| Inherits | pod-nvidia-python |
| Port | 8888 |
Quick Start¶
# Start JupyterLab (access at http://localhost:8888)
docker run -it --rm --gpus all -p 8888:8888 -v $(pwd):/workspace \
ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
# CPU-only
docker run -it --rm -p 8888:8888 -v $(pwd):/workspace \
ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
# Different port (9999)
docker run -it --rm --gpus all -p 9999:8888 -v $(pwd):/workspace \
ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
Kubernetes¶
apiVersion: apps/v1
kind: Deployment
metadata:
name: jupyterlab
spec:
replicas: 1
selector:
matchLabels:
app: jupyterlab
template:
metadata:
labels:
app: jupyterlab
spec:
containers:
- name: jupyter
image: ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
ports:
- containerPort: 8888
resources:
limits:
nvidia.com/gpu: 1
---
apiVersion: v1
kind: Service
metadata:
name: jupyterlab
spec:
selector:
app: jupyterlab
ports:
- port: 8888
targetPort: 8888
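Assuming the manifest above is saved as jupyterlab.yaml (a filename chosen here for illustration), it can be applied and reached locally with a port-forward:
```shell
# Deploy the JupyterLab Deployment and Service
kubectl apply -f jupyterlab.yaml
# Forward local port 8888 to the Service, then open http://localhost:8888
kubectl port-forward svc/jupyterlab 8888:8888
```
The port-forward runs in the foreground; stop it with Ctrl+C when done.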
What's Included¶
JupyterLab¶
- Full JupyterLab server
- Opens in /workspace (your mounted files)
- Token-less access for local development
- All standard notebook features
From nvidia-python Pod¶
- PyTorch with CUDA 12.4
- torchvision, torchaudio
- Python ML environment via pixi
From nvidia Pod¶
- CUDA Toolkit 13.0
- cuDNN, TensorRT
From base Pod¶
- Python 3.13, Node.js 23+, Go, Rust
- VS Code, Docker CLI, kubectl, Helm
Usage¶
Accessing Notebooks¶
- Open your browser to http://localhost:8888
- No token required (local development mode)
- Your workspace files appear in the file browser
Creating a New Notebook¶
- Click File → New → Notebook
- Select Python 3 (ipykernel)
- Start coding!
GPU in Notebooks¶
# Cell 1: Verify GPU
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
# Cell 2: Run a small model on the GPU (falls back to CPU if unavailable)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 2).to(device)
data = torch.randn(4, 8, device=device)
output = model(data)
Configuration¶
Environment Location¶
/opt/pixi/
├── pixi.toml # Jupyter + ML environment config
├── pixi.lock # Locked dependencies
└── .pixi/ # Installed packages
Default Settings¶
- Port: 8888
- Token: Disabled (local development)
- Working directory: /workspace
Common Workflows¶
Data Science Project¶
my-project/
├── data/
│ └── dataset.csv
├── notebooks/
│ ├── 01-exploration.ipynb
│ └── 02-modeling.ipynb
├── src/
│ └── utils.py
└── requirements.txt
cd my-project
docker run -it --rm --gpus all -p 8888:8888 -v $(pwd):/workspace \
ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
# Open http://localhost:8888 → notebooks/
Install Additional Packages¶
In a notebook cell:
# Install with pip
!pip install transformers datasets
# Or from requirements.txt
!pip install -r /workspace/requirements.txt
Note: packages installed this way last only for the lifetime of the container. Add them to requirements.txt (or bake them into a derived image) to keep installs reproducible.
Export Notebook to Script¶
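Jupyter's nbconvert tool can turn a notebook into a plain Python script. Run it from a terminal inside the container, or prefix with ! in a notebook cell; the notebook path below matches the example project layout above:
```shell
# Convert a notebook to a .py script in the same directory
jupyter nbconvert --to script notebooks/01-exploration.ipynb
# Produces notebooks/01-exploration.py
```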
Troubleshooting¶
JupyterLab Won't Start¶
Check if port 8888 is already in use:
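One way to check on the host (ss is common on Linux; lsof works too):
```shell
# Show any process already listening on port 8888
ss -tlnp | grep 8888 || echo "port 8888 is free"
```
If the port is taken, either stop the conflicting process or map a different host port, e.g. `-p 9999:8888` as shown in Quick Start.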
Kernel Dies¶
Usually out of GPU memory:
# In notebook, before training:
import torch
torch.cuda.empty_cache()
# Use smaller batch sizes
batch_size = 16 # Instead of 64
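As a rough illustration of why batch size matters, here is a back-of-envelope estimate of input-tensor memory for float32 images (illustrative only; real usage also includes activations, gradients, and optimizer state):
```python
# Approximate input-tensor memory for a batch of float32 images
# (the 3x224x224 shape is a typical image size, assumed for illustration)
def batch_mib(batch_size, channels=3, height=224, width=224, bytes_per_elem=4):
    return batch_size * channels * height * width * bytes_per_elem / 1024**2

print(f"batch 64: {batch_mib(64):.1f} MiB")  # 36.8 MiB
print(f"batch 16: {batch_mib(16):.1f} MiB")  # 9.2 MiB
```
Memory scales linearly with batch size, so dropping from 64 to 16 cuts this term to a quarter.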
Can't Access Files¶
Ensure you mounted the correct directory:
# Wrong: no volume mount
docker run ... ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
# /workspace is empty!
# Right: mount your project
docker run ... -v $(pwd):/workspace ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
# /workspace has your files
See Also¶
- nvidia-python pod - For script-based ML (no notebooks)
- Deployment Guide - All deployment methods
- Pod Architecture - How pods relate to each other