
LLMs on Supercomputers

A comprehensive course covering prompt engineering, retrieval-augmented generation (RAG), and fine-tuning. Adapted from TU Wien's AI Factory Austria training materials.

Collection Statistics

Total Notebooks: 15

D0 - Setup

Bazzite-AI environment setup and configuration

1. Bazzite-AI Environment Setup
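As a taste of what the setup notebook verifies, here is a minimal sanity check, a sketch only, since the notebook's actual steps may differ, confirming that PyTorch can see the GPU:

```python
# Minimal environment sanity check (illustrative sketch, not the
# notebook's exact code): confirm PyTorch is installed and sees a GPU.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"Device:          {torch.cuda.get_device_name(0)}")
```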

D1 - Prompt Engineering Essentials

LangChain basics, prompt templates, chaining, evaluation, and optimization

1. Prompt Engineering Essentials
2. Prompt templates and parsing
3. Chaining
4. LLM Evaluation with evidently.ai
5. LLM as a Judge with evidently.ai
6. Prompt Optimization with Evidently
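For orientation, here is a minimal sketch of the template-and-chain pattern these notebooks build on, using LangChain's LCEL pipe syntax with a locally served Ollama model. The model name `llama3` and the example question are illustrative assumptions, not taken from the course:

```python
# A minimal prompt-template-and-chain sketch with LangChain + Ollama.
# The model name and question are placeholders, not the course's own.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

prompt = ChatPromptTemplate.from_template(
    "Answer in one sentence: {question}"
)
llm = ChatOllama(model="llama3")

# LCEL's pipe operator composes prompt -> model -> parser into one chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is prompt engineering?"}))
```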

D2 - Retrieval Augmented Generation

RAG fundamentals, from basic tooling to the ChromaDB vector database

1. RAG Introduction with Ollama's OpenAI-compatible API
2. RAG with LangChain and ChromaDB using Ollama
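A minimal end-to-end RAG sketch in the spirit of the second notebook, assuming the `langchain-chroma` and `langchain-ollama` packages; the embedding model, chat model, and sample document are illustrative choices, not the course's:

```python
# Minimal RAG sketch: embed texts into Chroma, retrieve, answer with
# an Ollama-served model. All names below are illustrative assumptions.
from langchain_chroma import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama, OllamaEmbeddings

docs = ["The course covers prompt engineering, RAG, and fine-tuning."]
embeddings = OllamaEmbeddings(model="nomic-embed-text")
store = Chroma.from_texts(docs, embedding=embeddings)
retriever = store.as_retriever(search_kwargs={"k": 1})

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOllama(model="llama3")

question = "What topics does the course cover?"
# Retrieve the most relevant chunk and stuff it into the prompt.
context = "\n".join(d.page_content for d in retriever.invoke(question))
print((prompt | llm | StrOutputParser()).invoke(
    {"context": context, "question": question}
))
```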

D3 - Fine-tuning on One GPU

Transformer architecture, PyTorch/Hugging Face fine-tuning, quantization, PEFT, and Unsloth

1. Transformer Anatomy
2. Fine-tuning an LLM with plain PyTorch
3. Fine-tuning an LLM with Hugging Face Trainer
4. Quantization
5. PEFT
6. Unsloth (fast_inference stays disabled until Unsloth adds support for vLLM 0.14.x)
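As a flavor of the PEFT notebook's topic, here is a minimal LoRA sketch using Hugging Face's `peft` library. The base model (`gpt2`) and hyperparameters are illustrative assumptions, not the course's settings:

```python
# Minimal PEFT/LoRA sketch (illustrative; not the course's exact setup).
# Wraps a base model so only small low-rank adapter matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
lora = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # adapters are a tiny fraction of weights
```

Because only the adapter weights receive gradients, this is what makes single-GPU fine-tuning of large models tractable, which is the point of this day's notebooks.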