Ollama

Run LLM tests locally — no API key, no cost.

Install and pull a model

pip install "assertllm[ollama]"
ollama pull llama3.2
ollama serve

Write a test

@llm_test(
    expect.is_not_empty(),
    provider="ollama",
    model="llama3.2",
)
def test_local(llm):
    output = llm("What is 2+2?")
    assert "4" in output.content

Supported Models

Any model pulled into your local Ollama installation works, e.g. llama3.2, mistral, codellama, phi3, gemma2, deepseek-r1.

ollama list # see available models

Custom Host

export OLLAMA_HOST=http://my-server:11434
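The same setting can be applied from Python before tests run. A minimal sketch, assuming the provider honors OLLAMA_HOST the way the Ollama CLI and Python client do (the host name my-server is a placeholder, as above):

```python
import os
from urllib.parse import urlparse

# Point clients at a remote Ollama server (placeholder host name).
# The ollama CLI reads OLLAMA_HOST; we assume assertllm's provider does too.
os.environ["OLLAMA_HOST"] = "http://my-server:11434"

# urlparse splits the URL into host and port for any custom client setup.
parsed = urlparse(os.environ["OLLAMA_HOST"])
print(parsed.hostname, parsed.port)  # → my-server 11434
```

Set the variable before importing or configuring the provider so the first connection already targets the right server.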

When to Use

Ollama is a good fit whenever you want model calls to stay on your own hardware:

  • Local development — no API costs, no rate limits
  • CI/CD — run tests without API keys
  • Privacy — data never leaves your machine
  • Offline — no internet needed
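In CI, it can help to skip Ollama-backed tests cleanly when no server is running instead of letting them error out. A sketch using only the standard library; the helper name is our own, and the pytest marker shown in the comment is standard pytest usage, not an assertllm feature:

```python
import os
import socket
from urllib.parse import urlparse

def ollama_reachable(host_url=None, timeout=0.5):
    """Return True if a TCP connection to the Ollama server succeeds.

    Defaults to OLLAMA_HOST, falling back to Ollama's standard local port.
    """
    url = urlparse(host_url or os.environ.get("OLLAMA_HOST", "http://localhost:11434"))
    try:
        with socket.create_connection((url.hostname or "localhost", url.port or 11434),
                                      timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

# With pytest, gate local-model tests on server availability, e.g.:
#
#   @pytest.mark.skipif(not ollama_reachable(), reason="no local Ollama server")
#   @llm_test(expect.is_not_empty(), provider="ollama", model="llama3.2")
#   def test_local(llm):
#       assert "4" in llm("What is 2+2?").content
```

This keeps the same test file usable both on laptops with Ollama running and on CI runners without it.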