## Documentation Index

Fetch the complete documentation index at: https://docs.buildpersona.ai/llms.txt

Use this file to discover all available pages before exploring further.
## Build & Development

```bash
# Install dependencies
poetry install

# Run API locally
poetry run uvicorn server.main:app --reload

# Run with Docker (recommended)
docker compose up -d

# Run tests
docker compose run --rm test    # Docker (preferred)
poetry run pytest tests/unit -v # Local unit tests only
```
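Local unit tests follow standard pytest discovery under `tests/unit`. A minimal sketch of what such a test might look like (the function under test is invented here for illustration and is not part of the repo):

```python
# Hypothetical example of a pytest-style unit test, following the repo's
# conventions (type hints, snake_case). The function is illustrative only.

def normalize_score(raw: float, max_score: float = 100.0) -> float:
    """Clamp a raw score into the range [0.0, 1.0]."""
    return max(0.0, min(raw / max_score, 1.0))

def test_normalize_score_clamps_to_unit_range() -> None:
    assert normalize_score(50.0) == 0.5
    assert normalize_score(150.0) == 1.0
    assert normalize_score(-10.0) == 0.0
```

pytest discovers any `test_*` function in a `test_*.py` file automatically, so no registration boilerplate is needed.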
## Coding Conventions

- Python 3.12, PEP 8, 4-space indentation
- Type hints on all function signatures
- Naming: snake_case (modules/functions), PascalCase (classes)
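The conventions above in miniature (all names here are illustrative, not real repo code):

```python
# Illustrative only: demonstrates the naming and typing conventions.

class MemoryRecord:  # PascalCase for classes
    def __init__(self, content: str, score: float) -> None:
        self.content = content
        self.score = score

def rank_records(records: list[MemoryRecord]) -> list[MemoryRecord]:
    """snake_case function with type hints: sort records by descending score."""
    return sorted(records, key=lambda r: r.score, reverse=True)
```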
## Evaluation Framework

Persona includes a comprehensive evaluation framework for testing long-term memory systems against academic benchmarks.

### Supported Benchmarks

| Benchmark | What it Tests | Reference |
|---|---|---|
| LongMemEval | Temporal logic, multi-session aggregation | ICLR 2025 |
| PersonaMem | Factual precision, personalization | COLM 2025 |
### Running Evaluations

```bash
# Install dependencies
poetry install

# Download datasets
poetry run python evals/scripts/download_personamem.py

# Quick test (15 questions)
poetry run python -m evals.cli run \
    --benchmark longmemeval \
    --samples 5 \
    --seed 42

# Full evaluation (340 questions)
poetry run python -m evals.cli run \
    --config evals/configs/full_eval.yaml \
    --golden-set
```
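The `--samples`/`--seed` pair suggests reproducible subsampling of the benchmark: the same seed should select the same questions on every run. A minimal sketch of that behavior, assuming seeded sampling along the lines of `random.Random(seed)` (the CLI's actual internals may differ):

```python
import random

def sample_questions(questions: list[str], n: int, seed: int) -> list[str]:
    """Pick n questions deterministically: a given seed always yields the same subset."""
    return random.Random(seed).sample(questions, n)

questions = [f"q{i}" for i in range(340)]
run_a = sample_questions(questions, 5, seed=42)
run_b = sample_questions(questions, 5, seed=42)
assert run_a == run_b  # identical subsets across runs with the same seed
```

Pinning the seed is what makes quick-test results comparable across code changes.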
### Analyzing Results

```bash
# Summary report
poetry run python -m evals.cli analyze run_20241221_143052 --summary

# Filter by question type
poetry run python -m evals.cli analyze run_20241221_143052 --type multi-session
```
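Conceptually, `--type` filters per-question results before reporting. A sketch of that filtering step, assuming each result is a dict carrying a `question_type` field (a field name invented here for illustration; the real result schema may differ):

```python
# Assumed result shape: one dict per question with a "question_type" key.

def filter_by_type(results: list[dict], question_type: str) -> list[dict]:
    """Keep only results whose question_type matches, e.g. 'multi-session'."""
    return [r for r in results if r.get("question_type") == question_type]

results = [
    {"id": 1, "question_type": "multi-session", "correct": True},
    {"id": 2, "question_type": "single-session", "correct": False},
    {"id": 3, "question_type": "multi-session", "correct": False},
]
multi = filter_by_type(results, "multi-session")
# multi keeps only the two multi-session entries (ids 1 and 3)
```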
## License

MIT License. See LICENSE for details.