# Ollama
Run LLM tests locally — no API key, no cost.
## Install and pull a model

```shell
pip install "assertllm[ollama]"
ollama pull llama3.2
ollama serve
```

## Write a test
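Before writing a test, you can confirm that the local server is actually listening. Below is a minimal reachability probe using only the Python standard library; the helper name `ollama_is_up` is illustrative, and `http://localhost:11434` is Ollama's default local address:

```python
import urllib.request

def ollama_is_up(host: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at `host`.

    A running Ollama server replies to GET / with "Ollama is running".
    """
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failure, and timeouts
        # (urllib.error.URLError subclasses OSError).
        return False

if __name__ == "__main__":
    print(ollama_is_up())
```

If this prints `False`, start the server with `ollama serve` before running the test suite.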
```python
@llm_test(
    expect.is_not_empty(),
    provider="ollama",
    model="llama3.2",
)
def test_local(llm):
    output = llm("What is 2+2?")
    assert "4" in output.content
```

## Supported Models
Any model available in your local Ollama install works: llama3.2, mistral, codellama, phi3, gemma2, deepseek-r1, and so on.

```shell
ollama list  # see available models
```

## Custom Host

```shell
export OLLAMA_HOST=http://my-server:11434
```

## When to Use
Ollama is a good fit whenever you want to avoid external API dependencies:

- Local development — no API costs, no rate limits
- CI/CD — run tests without API keys
- Privacy — data never leaves your machine
- Offline testing — no internet connection needed
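The CI/CD case can be sketched as a GitHub Actions job. The step layout below is illustrative, not an official recipe: the install script URL is Ollama's published one, but the `sleep`-based wait for the server and the model choice are assumptions you should adapt.

```yaml
# Illustrative GitHub Actions workflow: run the LLM test suite
# against a local Ollama server, with no API keys required.
on: [push]
jobs:
  llm-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ollama
        run: curl -fsSL https://ollama.com/install.sh | sh
      - name: Start server and pull model
        run: |
          ollama serve &
          sleep 5   # crude wait; poll the server in real pipelines
          ollama pull llama3.2
      - name: Run tests
        run: |
          pip install "assertllm[ollama]"
          pytest
```

Pulling the model on every run is slow; caching the Ollama model directory between runs is a common optimization.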