Fast Inference Test: Llama-3.2-1B
Tests fast_inference=True with the vLLM backend on Llama-3.2-1B-Instruct.
Important: This notebook ends with a kernel shutdown cell. vLLM does not release GPU memory in single-process mode (Jupyter), so a kernel restart is required between tests of different models.
In [ ]:
# Environment Setup
import os
from dotenv import load_dotenv
load_dotenv()
import unsloth
from unsloth import FastLanguageModel
import vllm
import torch
# Environment summary
gpu = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU"
print(f"Environment: unsloth {unsloth.__version__}, vLLM {vllm.__version__}, {gpu}")
print(f"HF_TOKEN loaded: {'Yes' if os.environ.get('HF_TOKEN') else 'No'}")
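Because vLLM keeps whatever GPU memory it claims (see the note above), it can be useful to confirm how much memory is actually free before loading the model. A minimal sketch, reusing the torch import from the setup cell and not part of the original notebook, could use torch.cuda.mem_get_info:
# Optional sketch: report free GPU memory before loading the model
if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info()
    print(f"GPU memory free: {free_b / 1024**3:.1f} / {total_b / 1024**3:.1f} GiB")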
In [ ]:
# Test Llama-3.2-1B with fast_inference=True
import time
from vllm import SamplingParams
MODEL_NAME = "unsloth/Llama-3.2-1B-Instruct"
print(f"\nTesting {MODEL_NAME.split('/')[-1]} with fast_inference=True...")
# Load the model with the vLLM fast-inference backend enabled
model, tokenizer = FastLanguageModel.from_pretrained(
    MODEL_NAME,
    max_seq_length=512,
    load_in_4bit=True,
    fast_inference=True,          # enable the vLLM backend
    gpu_memory_utilization=0.5,   # cap vLLM's share of GPU memory
)
# Test generation
FastLanguageModel.for_inference(model)
messages = [{"role": "user", "content": "Say hello in one word."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampling_params = SamplingParams(temperature=0.1, max_tokens=10)
start = time.time()
outputs = model.fast_generate([prompt], sampling_params=sampling_params)
elapsed = time.time() - start
# Print a clear result summary
print(f"\n{'='*60}")
print(f"Model: {MODEL_NAME}")
print(f"FastInference: ✅ SUPPORTED")
print(f"Generation: {elapsed:.2f}s")
print(f"{'='*60}")
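The cell above only reports timing. If the generated text and a rough throughput figure are also of interest, the objects returned by fast_generate can be inspected; the sketch below assumes the standard vLLM RequestOutput structure and is not part of the original cell:
# Optional sketch: show the generated text and rough tokens/sec
completion = outputs[0].outputs[0]   # first completion of the first request
n_tokens = len(completion.token_ids)
print(f"Output: {completion.text.strip()!r}")
print(f"Throughput: {n_tokens / elapsed:.1f} tokens/s ({n_tokens} tokens in {elapsed:.2f}s)")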
Test Complete
The Llama-3.2-1B fast_inference test has completed. The kernel will now shut down to release all GPU memory.
Next: Run 02_FastInference_Qwen.ipynb for Qwen3-4B testing.
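If a kernel restart is ever not an option, a best-effort in-process cleanup is sketched below; as noted at the top of the notebook, it will not reclaim the memory vLLM holds in single-process mode, so the shutdown cell that follows remains the reliable approach.
# Best-effort cleanup sketch (does NOT fully release vLLM's memory in single-process mode)
import gc
del model, tokenizer
gc.collect()
torch.cuda.empty_cache()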
In [3]:
# Shutdown kernel to release all GPU memory
import IPython
print("Shutting down kernel to release GPU memory...")
app = IPython.Application.instance()
app.kernel.do_shutdown(restart=False)
Shutting down kernel to release GPU memory...
Out[3]:
{'status': 'ok', 'restart': False}