Ollama Commands

Local LLM server with GPU acceleration

Command Count

This page demonstrates 11 commands for the Ollama service.

Configure Ollama

ujust ollama config --instance=10000 --port=21434 --bind=127.0.0.1 --gpu-type=auto

Restart Server

ujust ollama restart --instance=10000

Start Server

ujust ollama start --instance=10000

Check Status

ujust ollama status --instance=10000

View Logs

ujust ollama logs --instance=10000 --lines=30

List Models

ujust ollama list --instance=10000

Pull Model

ujust ollama pull --instance=10000 --model=qwen3:0.6b
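Model references use Ollama's `name:tag` convention (`qwen3:0.6b` above); when the tag is omitted, Ollama assumes `latest`. A small illustrative helper (the function name is our own) for splitting a reference:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split an Ollama model reference into (name, tag); tag defaults to 'latest'."""
    name, sep, tag = ref.partition(":")
    return name, tag if sep else "latest"

print(split_model_ref("qwen3:0.6b"))  # → ('qwen3', '0.6b')
print(split_model_ref("qwen3"))       # → ('qwen3', 'latest')
```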

Run Model

ujust ollama run --instance=10000 --model=qwen3:0.6b --prompt='Say hello in one sentence'
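The `run` command above performs a one-shot generation. The same request can be made directly against Ollama's documented `/api/generate` endpoint; a hedged sketch, assuming the port from the configuration above (21434), a running instance, an already-pulled model, and helper names of our own:

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate; stream=False yields one JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "qwen3:0.6b",
             base: str = "http://127.0.0.1:21434") -> str:
    """One-shot generation against the instance configured above (must be running)."""
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base}/api/generate", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]
```

Setting `"stream": False` returns a single JSON object instead of a stream of partial responses, which keeps the client code simple for one-off prompts.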

Run Shell Command

ujust ollama shell --instance=10000 -- ls -la /home/jovian/.ollama

Stop Server

ujust ollama stop --instance=10000

Delete Instance

ujust ollama delete --instance=10000