
Quick Start

Install assertllm

Terminal
```bash
pip install "assertllm[openai]"
export OPENAI_API_KEY=sk-...
```

Write a test

test_my_llm.py
```python
from assertllm import expect, llm_test

@llm_test(
    expect.is_not_empty(),
    expect.contains("hello", case_sensitive=False),
    expect.latency_under(3000),
    model="gpt-4o-mini",
    system_prompt="You are a friendly bot.",
)
def test_greeting(llm):
    llm("Say hello")
```
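To make the decorator's behavior concrete, here is a simplified plain-Python sketch of the pattern: collect assertion callables, run the test body with an `llm` callable, then apply each assertion to every captured output. This is an illustrative stand-in (the stubbed response and helper names are invented for this sketch), not assertllm's implementation:

```python
from dataclasses import dataclass

# Minimal stand-in for an LLM response object (illustrative only).
@dataclass
class FakeOutput:
    content: str
    latency_ms: float

# Assertion factories: each returns a callable that checks one output.
def is_not_empty():
    return lambda out: len(out.content) > 0

def contains(substr, case_sensitive=True):
    def check(out):
        text = out.content if case_sensitive else out.content.lower()
        needle = substr if case_sensitive else substr.lower()
        return needle in text
    return check

def llm_test(*checks):
    """Decorator sketch: run the test body, then apply every check
    to every output the body produced."""
    def decorate(fn):
        def wrapper():
            outputs = []
            def llm(prompt):
                # Real code would call a model; here we stub a reply.
                out = FakeOutput(content="Hello there!", latency_ms=500.0)
                outputs.append(out)
                return out
            fn(llm)
            for out in outputs:
                for check in checks:
                    assert check(out)
        return wrapper
    return decorate

@llm_test(is_not_empty(), contains("hello", case_sensitive=False))
def test_greeting(llm):
    llm("Say hello")

test_greeting()  # runs the body, then the collected checks
```

The real decorator also accepts configuration such as `model=` and `system_prompt=`, which this sketch omits.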

Run it

Terminal
```bash
pytest test_my_llm.py -v
```
Output
```
test_my_llm.py::test_greeting
  "Hello! How can I help you today?"
  ✓ is_not_empty()
  ✓ contains("hello", case_sensitive=False)
  ✓ latency_under(3000) — 612ms
PASSED [0.6s]

────────── assertllm summary ──────────
LLM tests:   1 passed
Assertions:  3/3 passed
Total cost:  $0.000012
Avg latency: 612ms
```
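The summary's cost line is presumably derived from token counts and per-token prices. A back-of-envelope sketch of that arithmetic (the per-million-token prices below are assumptions for illustration, not values read from assertllm):

```python
# Rough cost arithmetic: cost = tokens * per-token price.
# These per-million-token prices are assumed for illustration.
PRICE_PER_M_INPUT = 0.15    # USD per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 0.60   # USD per 1M output tokens (assumed)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one call from its token counts."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# A short greeting exchange lands in the hundred-thousandths of a dollar,
# consistent with the magnitude shown in the summary above.
cost = estimate_cost(input_tokens=20, output_tokens=10)
```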

Using the fixture (no decorator)

test_fixture.py
```python
def test_with_fixture(llm):
    output = llm("Say hello", model="gpt-4o-mini")
    assert "hello" in output.content.lower()
    assert output.latency_ms < 5000
    assert output.cost_estimate_usd < 0.01
```

The llm fixture is automatically available — no imports needed. Just add llm as a parameter.

LLMOutput

Every llm() call returns an LLMOutput object:

| Field | Type | Description |
| --- | --- | --- |
| `content` | `str` | The text response |
| `model` | `str` | Model used |
| `latency_ms` | `float` | Response time in milliseconds |
| `input_tokens` | `int` | Prompt tokens |
| `output_tokens` | `int` | Completion tokens |
| `cost_estimate_usd` | `float` | Estimated cost in USD |
| `tool_calls` | `list[dict]` | Tool/function calls made |
| `raw` | `dict` | The original provider response |
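For illustration, the fields in the table map onto a simple dataclass shape. This is a sketch of the object's structure as documented above, not assertllm's actual class definition, and the field values in the usage example are invented:

```python
from dataclasses import dataclass, field

# Sketch of the LLMOutput shape described in the table above
# (illustrative only, not assertllm's actual class).
@dataclass
class LLMOutput:
    content: str                # the text response
    model: str                  # model used
    latency_ms: float           # response time in milliseconds
    input_tokens: int           # prompt tokens
    output_tokens: int          # completion tokens
    cost_estimate_usd: float    # estimated cost in USD
    tool_calls: list[dict] = field(default_factory=list)  # tool/function calls made
    raw: dict = field(default_factory=dict)               # original provider response

# Example with made-up values, shaped like the Quick Start output above.
out = LLMOutput(
    content="Hello! How can I help you today?",
    model="gpt-4o-mini",
    latency_ms=612.0,
    input_tokens=18,
    output_tokens=9,
    cost_estimate_usd=0.000012,
)
```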