# nvidia-python
ML/AI development environment with PyTorch and CUDA support for training and inference.
## Overview
| Attribute | Value |
|---|---|
| Image | ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable |
| Size | ~14GB |
| GPU | NVIDIA, AMD, Intel (auto-detected) |
| Foundation for | jupyter, comfyui, ollama pods |
## Quick Start with Apptainer
On Bazzite AI OS, use `ujust apptainer` for HPC-style container access:

| Step | Command | Description | Recording |
|---|---|---|---|
| 1 | `ujust apptainer pull` | Download image | |
| 2 | `ujust apptainer shell` | Open shell | |
| 3 | `ujust apptainer gpu` | Check GPU | |
Example usage:

```bash
# Pull the nvidia-python image
ujust apptainer pull -i nvidia-python -t stable

# Run interactive shell with GPU
ujust apptainer shell -i nvidia-python

# Execute a command
ujust apptainer exec -i nvidia-python -- python train.py
```
## Apptainer Commands
| Command | Description | Recording |
|---|---|---|
| `ujust apptainer pull` | Download image | |
| `ujust apptainer run` | Run default command | |
| `ujust apptainer shell` | Interactive shell | |
| `ujust apptainer exec` | Execute command | |
| `ujust apptainer inspect` | Show image info | |
| `ujust apptainer gpu` | GPU detection | |
## Pre-installed Libraries
| Category | Libraries |
|---|---|
| Deep Learning | PyTorch, Transformers, Accelerate |
| Scientific | NumPy, SciPy, Pandas |
| Visualization | Matplotlib, Seaborn, Plotly |
| ML/Data | Scikit-learn, Datasets |
| Training | trl, peft, bitsandbytes |
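A quick way to confirm these libraries are present is to probe for each one from inside the container. The import names below are assumptions about how the packages in the table are exposed (import names sometimes differ from package names, e.g. `sklearn` vs. `scikit-learn`):

```python
# Sanity sketch: check which of the pod's pre-installed libraries
# are importable in the current environment, without importing them.
from importlib.util import find_spec

packages = {
    "Deep Learning": ["torch", "transformers", "accelerate"],
    "Scientific": ["numpy", "scipy", "pandas"],
    "Visualization": ["matplotlib", "seaborn", "plotly"],
    "ML/Data": ["sklearn", "datasets"],
    "Training": ["trl", "peft", "bitsandbytes"],
}

# True if the module can be found on the current import path
report = {
    name: find_spec(name) is not None
    for names in packages.values()
    for name in names
}

for name, ok in report.items():
    print(f"{name:13s} {'ok' if ok else 'MISSING'}")
```

Using `find_spec` instead of a bare `import` keeps the check fast, since nothing is actually loaded.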
## GPU Verification
Inside the container:

```python
import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU count: {torch.cuda.device_count()}")

if torch.cuda.is_available():
    print(f"GPU name: {torch.cuda.get_device_name(0)}")

    # Quick benchmark
    x = torch.randn(10000, 10000, device='cuda')
    y = torch.matmul(x, x)
    print("Matrix multiply: success")
```
## Training Example
```bash
# Enter the container
ujust apptainer shell -i nvidia-python

# Inside container - verify a model can be loaded
python -c "
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model
model = AutoModelForCausalLM.from_pretrained('gpt2')
tokenizer = AutoTokenizer.from_pretrained('gpt2')
print('Model loaded successfully')
print(f'Parameters: {model.num_parameters():,}')
"
```
## Building Custom Images
For custom environments, use `ujust apptainer build`.
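A build typically starts from an Apptainer definition file. The sketch below extends this pod's image; the `pip` package is a placeholder, so substitute your own dependencies:

```
Bootstrap: docker
From: ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

%post
    # Placeholder dependency - install whatever your project needs
    pip install lightning

%runscript
    exec python "$@"
```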
## See Also
- Apptainer Command Reference - All commands and flags
- Apptainer Recordings - Watch command demos
- JupyterLab - Interactive notebooks with this environment
- GPU Setup - GPU configuration