# Quick Start

## Install assertllm
```bash
pip install "assertllm[openai]"
export OPENAI_API_KEY=sk-...
```

## Write a test
`test_my_llm.py`:

```python
from assertllm import expect, llm_test


@llm_test(
    expect.is_not_empty(),
    expect.contains("hello", case_sensitive=False),
    expect.latency_under(3000),
    model="gpt-4o-mini",
    system_prompt="You are a friendly bot.",
)
def test_greeting(llm):
    llm("Say hello")
```

## Run it
```bash
pytest test_my_llm.py -v
```

## Output
```text
test_my_llm.py::test_greeting
    "Hello! How can I help you today?"
    ✓ is_not_empty()
    ✓ contains("hello", case_sensitive=False)
    ✓ latency_under(3000) — 612ms
PASSED [0.6s]

────────── assertllm summary ──────────
LLM tests:   1 passed
Assertions:  3/3 passed
Total cost:  $0.000012
Avg latency: 612ms
```

## Using the fixture (no decorator)
`test_fixture.py`:

```python
def test_with_fixture(llm):
    output = llm("Say hello", model="gpt-4o-mini")
    assert "hello" in output.content.lower()
    assert output.latency_ms < 5000
    assert output.cost_estimate_usd < 0.01
```

The `llm` fixture is automatically available; no imports are needed. Just add `llm` as a parameter.
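Because the fixture style is plain pytest, checks you repeat across tests can be factored into ordinary helper functions. A minimal sketch (the helper name is ours, and a `SimpleNamespace` stands in for the object `llm()` would return):

```python
from types import SimpleNamespace


def assert_cheap_and_fast(output, max_ms=5000, max_usd=0.01):
    """Reusable guardrail: fail if the call was too slow or too expensive."""
    assert output.latency_ms < max_ms, f"too slow: {output.latency_ms}ms"
    assert output.cost_estimate_usd < max_usd, f"too costly: ${output.cost_estimate_usd}"


# Stand-in for an LLMOutput, using the example values from the run above:
fake = SimpleNamespace(latency_ms=612.0, cost_estimate_usd=0.000012)
assert_cheap_and_fast(fake)  # passes silently
```

In a real test you would call `assert_cheap_and_fast(llm("Say hello"))` instead of building the stand-in object.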
## LLMOutput

Every `llm()` call returns an `LLMOutput` object:

| Field | Type | Description |
|---|---|---|
| `content` | `str` | The text response |
| `model` | `str` | Model used |
| `latency_ms` | `float` | Response time in milliseconds |
| `input_tokens` | `int` | Prompt tokens |
| `output_tokens` | `int` | Completion tokens |
| `cost_estimate_usd` | `float` | Estimated cost in USD |
| `tool_calls` | `list[dict]` | Tool/function calls made |
| `raw` | `dict` | The original provider response |
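The page doesn't say how `cost_estimate_usd` is derived; presumably it multiplies the token counts by per-model prices. A minimal sketch under that assumption (`PRICES_PER_MTOK` and its values are hypothetical, not part of assertllm; check your provider's current pricing):

```python
# Assumed (input, output) USD prices per million tokens -- illustrative only.
PRICES_PER_MTOK = {"gpt-4o-mini": (0.15, 0.60)}


def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough cost estimate from token counts, mirroring cost_estimate_usd."""
    in_price, out_price = PRICES_PER_MTOK[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000


# 40 prompt tokens + 10 completion tokens at the assumed rates:
print(estimate_cost_usd("gpt-4o-mini", 40, 10))  # 1.2e-05, i.e. $0.000012
```

With these assumed rates, 40 input and 10 output tokens come to the $0.000012 shown in the summary above.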