Persona supports multiple LLM providers. The format is provider/model.

Supported Providers

| Provider         | Format              | Example                 |
| ---------------- | ------------------- | ----------------------- |
| OpenAI           | openai/{model}      | openai/gpt-5.2          |
| Azure OpenAI     | azure/{deployment}  | azure/gpt-4o            |
| Azure AI Foundry | foundry/{model}     | foundry/gpt-5.2         |
| Anthropic        | anthropic/{model}   | anthropic/claude-3-opus |
| Gemini           | gemini/{model}      | gemini/gemini-pro       |
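As a minimal sketch of how a provider/model string can be split, assuming a hypothetical helper (`parse_service` is not part of Persona's API) and that only the first slash separates provider from model:

```python
def parse_service(service: str) -> tuple[str, str]:
    """Split a provider/model string (e.g. "openai/gpt-5.2") into its parts.

    Only the first "/" separates provider from model, so a deployment
    name containing slashes would survive intact.
    """
    provider, _, model = service.partition("/")
    if not provider or not model:
        raise ValueError(f"expected 'provider/model', got {service!r}")
    return provider, model

print(parse_service("openai/gpt-5.2"))  # ('openai', 'gpt-5.2')
```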

Configuration

Set your provider in .env (see .env.example for the full reference). The easiest setup for new users:
LLM_SERVICE=openai/gpt-5.2
EMBEDDING_SERVICE=openai/text-embedding-3-small
OPENAI_API_KEY=sk-your-key-here
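For illustration, here is a minimal stdlib-only sketch of how such a .env file can be loaded into the environment (`load_env` is a hypothetical helper; real projects typically use python-dotenv, which also handles quoting and interpolation):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comment lines ignored.
    Variables already set in the environment take precedence."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```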

Azure AI Foundry

For enterprise/production deployments:
LLM_SERVICE=foundry/gpt-5.2
AZURE_API_KEY=your-azure-key
AZURE_API_BASE=https://your-resource.openai.azure.com
AZURE_CHAT_DEPLOYMENT=gpt-5.2
AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-small
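For production deployments it can help to fail fast on missing configuration. A sketch of such a check, with the variable names assumed from the .env keys above (`missing_azure_vars` is a hypothetical helper, not Persona's API):

```python
import os

# Variable names assumed from the .env keys shown above.
REQUIRED_AZURE_VARS = ("AZURE_API_KEY", "AZURE_API_BASE", "AZURE_CHAT_DEPLOYMENT")

def missing_azure_vars() -> list[str]:
    """Return the names of required Azure variables that are unset,
    so misconfiguration surfaces at startup, not at the first API call."""
    return [v for v in REQUIRED_AZURE_VARS if not os.environ.get(v)]
```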

Neo4j Database

The database URI depends on your environment:
# Running inside Docker (docker compose up):
URI_NEO4J=bolt://neo4j:7687

# Running locally (poetry run python ...):
URI_NEO4J=bolt://localhost:7687

USER_NEO4J=neo4j
PASSWORD_NEO4J=password

# Credentials for the Neo4j Docker container (must match the pair above)
NEO4J_AUTH=neo4j/password

Performance Tuning

# Parallel session ingestion (for multi-session batch ingestion)
INGEST_SESSION_CONCURRENCY=5

# Eval judge model (for evals only)
EVAL_JUDGE_MODEL=gpt-5-mini
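One common way to cap parallel ingestion like INGEST_SESSION_CONCURRENCY describes is an asyncio semaphore; this is an illustrative sketch, not Persona's actual ingestion code (`ingest_sessions` and `ingest_one` are hypothetical names):

```python
import asyncio
import os

async def ingest_sessions(sessions):
    """Ingest sessions concurrently, capped at INGEST_SESSION_CONCURRENCY
    in-flight tasks at a time."""
    limit = int(os.environ.get("INGEST_SESSION_CONCURRENCY", "5"))
    sem = asyncio.Semaphore(limit)

    async def ingest_one(session):
        async with sem:
            await asyncio.sleep(0)  # stand-in for the real ingestion call
            return session

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(ingest_one(s) for s in sessions))
```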

Embedding Fallback

If your primary provider doesn't support embeddings, Persona falls back to OpenAI's embedding API, so keep OPENAI_API_KEY set even when using another provider.
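The fallback rule can be sketched as follows; which providers count as embedding-capable here is an assumption for illustration, and `resolve_embedding_provider` is a hypothetical helper:

```python
import os

def resolve_embedding_provider(embedding_service: str) -> str:
    """If the configured provider lacks an embedding API, fall back to
    OpenAI. The capable set below is assumed, not Persona's actual list."""
    EMBEDDING_CAPABLE = {"openai", "azure", "foundry", "gemini"}
    provider = embedding_service.split("/", 1)[0]
    if provider in EMBEDDING_CAPABLE:
        return provider
    if not os.environ.get("OPENAI_API_KEY"):
        raise RuntimeError("embedding fallback requires OPENAI_API_KEY")
    return "openai"
```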