
Ollama Dev Container

LLM inference engine with optional NVIDIA GPU acceleration

Auto-Generated Documentation

This page is generated automatically from the configuration files in devcontainer/ollama/.

Runtime Matrix

Runtime         GPU      Image                                          Port
Docker CPU      -        ghcr.io/atrawog/bazzite-ai-pod-ollama:stable   11434
Docker NVIDIA   NVIDIA   ghcr.io/atrawog/bazzite-ai-pod-ollama:stable   11434
Podman CPU      -        ghcr.io/atrawog/bazzite-ai-pod-ollama:stable   11434
Podman NVIDIA   NVIDIA   ghcr.io/atrawog/bazzite-ai-pod-ollama:stable   11434
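
Whichever variant you start, the Ollama API listens on port 11434. As a minimal check, assuming the container is running and the port is forwarded to the host:

```bash
# Confirm the server is reachable and report its version.
curl -s http://localhost:11434/api/version

# List the models currently available in the container (empty on a fresh start).
curl -s http://localhost:11434/api/tags
```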

VS Code Extensions

  • anthropic.claude-code - Claude Code integration
  • ms-azuretools.vscode-docker - Docker support
  • ms-python.python - Python language support
  • ms-vscode.cpptools - C/C++ support
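
These extensions are installed automatically inside the container. If you want the same set in a local VS Code install, they can be added by ID with the VS Code CLI; a convenience sketch, not part of the generated configuration:

```bash
# Install the container's extension set into a local VS Code by ID.
for ext in anthropic.claude-code ms-azuretools.vscode-docker \
           ms-python.python ms-vscode.cpptools; do
  code --install-extension "$ext"
done
```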

Configuration Details

Mounts

The local workspace folder is bind-mounted into the container at /workspace:

    source=${localWorkspaceFolder},target=/workspace,type=bind
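
From a shell inside the container (for example a VS Code terminal), the mount can be verified directly; changes made under /workspace appear in the local workspace folder and vice versa:

```bash
# List the mounted workspace contents.
ls -la /workspace

# Show the mount entry, if findmnt is available in the image.
findmnt /workspace
```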

Remote User: jovian

Lifecycle Commands

postCreateCommand (the message shown is from the Docker CPU variant):

    echo '✓ Ollama ready (Docker CPU)'

postStartCommand:

    echo 'Ollama API available at: http://localhost:11434'
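
Once postStartCommand has printed the API address, a first model can be pulled and run from a terminal inside the container. The model name below is only an example; substitute any tag from the Ollama library:

```bash
# Download a model (example tag; pick any model you prefer).
ollama pull llama3.2

# Run a one-off prompt against it.
ollama run llama3.2 "Say hello in one sentence."

# Or call the HTTP API directly.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello in one sentence.", "stream": false}'
```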

Quick Start

For detailed setup instructions, see the Dev Containers deployment guide.
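
As a minimal sketch, assuming the Dev Containers CLI (@devcontainers/cli) is installed and the chosen variant's configuration folder is the current directory, a container can also be brought up without the VS Code UI:

```bash
# Start the dev container defined in the current folder
# (adjust --workspace-folder to the variant you want to run).
devcontainer up --workspace-folder .

# Run a command inside it to confirm Ollama is ready.
devcontainer exec --workspace-folder . curl -s http://localhost:11434/api/version
```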