RLOO Training Test: Qwen3-4B¶
Tests REINFORCE Leave-One-Out (RLOO) optimization with Unsloth on Qwen3-4B.
Key features tested:
- FastLanguageModel loading with 4-bit quantization
- LoRA adapter configuration
- RLOOTrainer with synthetic reward function
- Post-training inference verification
RLOO Overview: RLOO uses a leave-one-out baseline to reduce the variance of its policy-gradient estimates. For each completion, the baseline is the mean reward of the other completions sampled for the same prompt, which yields more stable updates than a single-sample estimate (see the sketch below).
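To make the baseline concrete, here is a minimal, self-contained Python sketch of the leave-one-out advantage computation for one prompt; the reward values are made up purely for illustration and are not produced by this notebook.

# Illustrative leave-one-out advantages for K = 4 completions of one prompt
rewards = [1.5, 0.5, 1.0, -0.5]  # made-up rewards, one per completion
K = len(rewards)
advantages = []
for i, r_i in enumerate(rewards):
    # Baseline for completion i = mean reward of the other K - 1 completions
    baseline = (sum(rewards) - r_i) / (K - 1)
    advantages.append(r_i - baseline)
print([round(a, 2) for a in advantages])  # [1.17, -0.17, 0.5, -1.5]; they sum to zero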
Important: This notebook includes a kernel shutdown cell at the end to release all GPU memory.
In [1]:
# Environment Setup
import os
# FIX: Set ACCELERATE_MIXED_PRECISION BEFORE importing unsloth
# This ensures autocast dtype matches model dtype (bfloat16)
os.environ['ACCELERATE_MIXED_PRECISION'] = 'bf16'
from dotenv import load_dotenv
load_dotenv()
# Force text-based progress instead of HTML widgets
os.environ["TQDM_NOTEBOOK"] = "false"
# CRITICAL: Import unsloth FIRST for proper TRL patching
import unsloth
from unsloth import FastLanguageModel, is_bf16_supported
import torch
from trl import RLOOConfig, RLOOTrainer
from datasets import Dataset
# Environment summary
gpu = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU"
print(f"Environment: unsloth {unsloth.__version__}, PyTorch {torch.__version__}, {gpu}")
print(f"ACCELERATE_MIXED_PRECISION: {os.environ.get('ACCELERATE_MIXED_PRECISION', 'not set')}")
print(f"HF_TOKEN loaded: {'Yes' if os.environ.get('HF_TOKEN') else 'No'}")
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
/opt/pixi/.pixi/envs/default/lib/python3.13/site-packages/trl/__init__.py:203: UserWarning: TRL currently supports vLLM versions: 0.10.2, 0.11.0, 0.11.1, 0.11.2. You have version 0.14.0rc1.dev201+gadcf682fc.cu130 installed. We recommend installing a supported version to avoid compatibility issues.
  if is_vllm_available():
🦥 Unsloth Zoo will now patch everything to make training faster!
Environment: unsloth 2025.12.10, PyTorch 2.9.1+cu130, NVIDIA GeForce RTX 4080 SUPER
ACCELERATE_MIXED_PRECISION: bf16
HF_TOKEN loaded: Yes
In [2]:
# Load Qwen3-4B with 4-bit quantization
MODEL_NAME = "unsloth/Qwen3-4B-unsloth-bnb-4bit"
print(f"\nLoading {MODEL_NAME.split('/')[-1]}...")
model, tokenizer = FastLanguageModel.from_pretrained(
MODEL_NAME,
max_seq_length=512,
load_in_4bit=True,
dtype=None, # Auto-detect
)
# Ensure pad token is set
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
print(f"Model loaded: {type(model).__name__}")
Loading Qwen3-4B-unsloth-bnb-4bit...
==((====))==  Unsloth 2025.12.10: Fast Qwen3 patching. Transformers: 5.0.0.1. vLLM: 0.14.0rc1.dev201+gadcf682fc.cu130.
   \\   /|    NVIDIA GeForce RTX 4080 SUPER. Num GPUs = 1. Max memory: 15.568 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.1+cu130. CUDA: 8.9. CUDA Toolkit: 13.0. Triton: 3.5.1
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.33.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Loading weights: 0%| | 0/398 [00:00<?, ?it/s]
Model loaded: Qwen3ForCausalLM
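Since this test's stated concern is releasing GPU memory at the end, it can help to record the footprint right after loading. The following optional check is not part of the original test and uses only standard PyTorch CUDA memory queries.

# Optional: inspect GPU memory right after loading the 4-bit model
allocated_gb = torch.cuda.memory_allocated() / 1024**3
reserved_gb = torch.cuda.memory_reserved() / 1024**3
print(f"GPU memory after load - allocated: {allocated_gb:.2f} GB, reserved: {reserved_gb:.2f} GB")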
In [3]:
# Apply LoRA adapters for RLOO training
model = FastLanguageModel.get_peft_model(
model,
r=16,
lora_alpha=16,
lora_dropout=0,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj"],
bias="none",
use_gradient_checkpointing="unsloth",
random_state=42,
)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"LoRA applied: {trainable:,} trainable / {total:,} total ({100*trainable/total:.2f}%)")
Unsloth 2025.12.10 patched 36 layers with 36 QKV layers, 36 O layers and 36 MLP layers.
LoRA applied: 33,030,144 trainable / 2,541,616,640 total (1.30%)
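The trainable-parameter count above follows directly from the LoRA rank: each adapted linear layer with input dimension d_in and output dimension d_out adds r * (d_in + d_out) parameters (an r x d_in A matrix plus a d_out x r B matrix). The sketch below reproduces the 33,030,144 figure using shapes taken from the Qwen3-4B config (hidden size 2560, 32 query heads and 8 KV heads with head dim 128, MLP intermediate size 9728, 36 layers); verify these against model.config if you port this to another model.

# Sanity-check the LoRA parameter count from the rank and the targeted projection shapes
r = 16
hidden, q_out, kv_out, inter, layers = 2560, 32 * 128, 8 * 128, 9728, 36
module_shapes = [
    (hidden, q_out),   # q_proj
    (hidden, kv_out),  # k_proj
    (hidden, kv_out),  # v_proj
    (q_out, hidden),   # o_proj
    (hidden, inter),   # gate_proj
    (hidden, inter),   # up_proj
    (inter, hidden),   # down_proj
]
lora_params = layers * sum(r * (d_in + d_out) for d_in, d_out in module_shapes)
print(f"Expected LoRA parameters: {lora_params:,}")  # 33,030,144, matching the output above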
In [4]:
# Create minimal synthetic prompt dataset for RLOO (5 prompts)
# RLOO requires prompts only - completions are generated during training
prompts = [
"Explain the concept of recursion in programming.",
"What are the benefits of using version control?",
"Describe how a hash table works.",
"What is the difference between a stack and a queue?",
"Explain what an API is to a beginner.",
]
# Format prompts for RLOO (requires "prompt" field)
dataset = Dataset.from_dict({
"prompt": [
tokenizer.apply_chat_template(
[{"role": "user", "content": p}],
tokenize=False,
add_generation_prompt=True
) for p in prompts
]
})
print(f"Dataset created: {len(dataset)} prompts")
print(f"Sample prompt:\n{dataset[0]['prompt'][:150]}...")
Dataset created: 5 prompts
Sample prompt:
<|im_start|>user
Explain the concept of recursion in programming.<|im_end|>
<|im_start|>assistant
...
In [5]:
# Define a simple reward function for testing
# In production, this would be a learned reward model
def simple_reward_fn(completions, prompts=None, **kwargs):
"""
Simple reward function for testing RLOO.
Rewards informative, well-structured responses.
"""
rewards = []
for completion in completions:
length = len(completion.split())
score = 0.0
# Prefer medium-length responses
if 10 <= length <= 50:
score += 1.0
elif length < 10:
score -= 0.5
# Prefer complete sentences
if completion.strip().endswith("."):
score += 0.5
rewards.append(score)
return rewards
print("Reward function defined: simple_reward_fn")
Reward function defined: simple_reward_fn
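Before wiring the function into the trainer, a quick sanity check on hand-written completions confirms the scoring rules behave as intended; this optional cell is not part of the original test.

# Optional sanity check of the reward rules on hand-written completions
sample_completions = [
    "Too short.",  # fewer than 10 words, ends with "." -> -0.5 + 0.5
    "Recursion is when a function calls itself until it reaches a base case.",  # 13 words, ends with "." -> 1.0 + 0.5
    " ".join(["word"] * 80),  # over 50 words, no trailing period -> 0.0
]
print(simple_reward_fn(sample_completions))  # expected: [0.0, 1.5, 0.0]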
In [ ]:
# RLOO Training Configuration (minimal steps for testing)
rloo_config = RLOOConfig(
output_dir="outputs_rloo_qwen_test",
per_device_train_batch_size=4, # Must match num_generations
gradient_accumulation_steps=1,
max_steps=2, # Minimal steps for testing
warmup_steps=0,
learning_rate=1e-5, # Lower LR for RL
logging_steps=1,
fp16=not is_bf16_supported(),
bf16=is_bf16_supported(),
optim="adamw_8bit",
num_generations=4, # Completions per prompt for leave-one-out
max_completion_length=64,
beta=0.05, # KL penalty
seed=42,
)
# Initialize RLOO Trainer
trainer = RLOOTrainer(
model=model,
args=rloo_config,
train_dataset=dataset,
processing_class=tokenizer,
reward_funcs=simple_reward_fn, # Fixed: reward_funcs not reward_model
)
print("Starting RLOO training (2 steps)...")
trainer_stats = trainer.train()
print(f"RLOO training completed!")
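One note on the batch arithmetic, assuming the group-based batching used by recent TRL versions: the per-device batch holds completions grouped by prompt, so the effective batch size should be divisible by num_generations, and the ratio gives the number of distinct prompts optimized per step. A rough check you could add:

# Rough check of the batch / num_generations relationship (assumes TRL's group-based batching)
effective_batch = rloo_config.per_device_train_batch_size * rloo_config.gradient_accumulation_steps
assert effective_batch % rloo_config.num_generations == 0, "effective batch must be divisible by num_generations"
prompts_per_step = effective_batch // rloo_config.num_generations
print(f"{prompts_per_step} prompt(s) x {rloo_config.num_generations} completions per optimizer step")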
In [8]:
# Post-training inference test
FastLanguageModel.for_inference(model)
test_prompt = "What is machine learning?"
messages = [{"role": "user", "content": test_prompt}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.7,
top_p=0.9,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("=" * 60)
print("RLOO Training Pipeline Test PASSED")
print("=" * 60)
print(f"Sample generation:\n{response[-200:]}")
============================================================
RLOO Training Pipeline Test PASSED
============================================================
Sample generation:
way. Let me start by recalling the basic definition. Machine learning is a subset of artificial intelligence where systems learn from data. But I should break it down further. First, I should mention
Test Complete¶
The RLOO Training Pipeline test has completed successfully. The kernel will now shut down to release all GPU memory.
What Was Verified¶
- FastLanguageModel loading with 4-bit quantization (Qwen3-4B)
- LoRA adapter configuration for RL training
- Synthetic prompt dataset creation
- Simple reward function integration
- RLOOTrainer training loop (2 steps)
- Post-training inference generation
RLOO Concepts Demonstrated¶
- Leave-One-Out Baseline: each completion's baseline is the mean reward of the other K-1 completions in its group (formalized below)
- Variance Reduction: lower-variance, more stable gradients than single-sample estimates
- KL Penalty: keeps the policy from drifting too far from the reference model
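For reference, the first two points can be written compactly (roughly following the REINFORCE Leave-One-Out formulation; exact per-token KL handling differs between implementations):

$$A_i = R(x, y_i) - \frac{1}{K-1} \sum_{j \neq i} R(x, y_j), \qquad \nabla_\theta J \approx \frac{1}{K} \sum_{i=1}^{K} A_i \, \nabla_\theta \log \pi_\theta(y_i \mid x),$$

with the reward additionally penalized by $\beta \, \mathrm{KL}\left(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right)$, the beta=0.05 set in the config above.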
Ready for Production¶
If this test passed, your environment is ready for:
- RLOO training with learned reward models (see the sketch below)
- RLHF pipelines with stable optimization
- Policy refinement workflows
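For the first workflow above, recent TRL releases document that reward_funcs can also accept a pretrained sequence-classification reward model (passed by model ID or as a loaded model) instead of a Python function. The sketch below is illustrative only, with a placeholder model ID; check the RLOOTrainer documentation for your installed TRL version before relying on it.

# Illustrative only: swap the synthetic reward function for a learned reward model
# "your-org/your-reward-model" is a placeholder, not a real checkpoint
trainer = RLOOTrainer(
    model=model,
    args=rloo_config,
    train_dataset=dataset,
    processing_class=tokenizer,
    reward_funcs="your-org/your-reward-model",  # sequence-classification reward model ID
)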
In [ ]:
# Shutdown kernel to release all GPU memory
import IPython
print("Shutting down kernel to release GPU memory...")
app = IPython.Application.instance()
app.kernel.do_shutdown(restart=False)