This page covers patterns that work well when building products with Persona, along with common pitfalls to avoid.

What to Ingest

Not everything deserves to be a memory. The goal is signal, not noise. Good candidates include user messages, AI responses (which give context for future queries), significant actions like completing a task or making a purchase, and explicit preferences stated by the user. You should generally skip system messages, error logs, trivial confirmations like “ok” or “thanks,” and duplicate content that has already been ingested. The question to ask is: would a human remember this? If the answer is no, it probably doesn’t belong in the memory graph.
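The filtering described above can be sketched as a small pre-ingestion gate. The event shape, the skip rules, and the trivial-message list below are illustrative assumptions for the sketch, not part of Persona's API:

```python
# A minimal pre-ingestion filter. The event dict shape and the specific
# skip rules are illustrative assumptions, not part of Persona's API.

SKIP_ROLES = {"system"}                 # system messages, error logs, etc.
TRIVIAL = {"ok", "okay", "thanks", "thank you", "got it"}

def should_ingest(event: dict, seen_hashes: set) -> bool:
    """Return True if the event carries enough signal to become a memory."""
    if event.get("role") in SKIP_ROLES:
        return False                    # skip system-level content
    text = event.get("text", "").strip()
    if text.lower().rstrip(".!") in TRIVIAL:
        return False                    # skip trivial confirmations
    h = hash(text)
    if h in seen_hashes:
        return False                    # skip duplicates already ingested
    seen_hashes.add(h)
    return True
```

Running candidate events through a gate like this before calling the ingest endpoint keeps the graph focused on content a human would actually remember.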

Understanding Sessions

A session in Persona is a flexible concept. It represents a logical grouping of content that becomes an Episode when ingested. How you define sessions depends entirely on your product. A session could be a single chat conversation, or it could be multiple chats combined over a period. It could be time-based (morning, afternoon, evening) or day-by-day. It could be triggered after a token threshold is reached, batching content until a certain size. It could be attached to a specific product feature or workflow, grouping all interactions within that context.

Sessions don’t have to come from chat at all. Data from different adapters (API logs, wearable devices, email, calendar events) can each be designed as separate sessions that form individual episodes with their datetime recorded. The key is that each session, regardless of source, becomes an episode with temporal metadata that allows Persona to understand when it happened and how it relates to other episodes in the user’s history.

When no explicit session is provided, Persona auto-groups based on time gaps. But explicit session design gives you more control over narrative structure.
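Time-gap auto-grouping can be illustrated with a small grouping function. The 30-minute threshold and the (timestamp, text) event shape are assumptions made for this sketch; Persona's actual default gap is not documented here:

```python
from datetime import datetime, timedelta

# Illustrative sketch of time-gap session grouping, mirroring the fallback
# behavior when no explicit session is provided. The 30-minute threshold
# is an assumption for the example, not a documented default.

def group_into_sessions(events, gap=timedelta(minutes=30)):
    """Split time-ordered (timestamp, text) events into sessions wherever
    the gap between consecutive events exceeds the threshold. Each group
    would become one episode, timestamped by its first event."""
    sessions = []
    for ts, text in sorted(events):
        if sessions and ts - sessions[-1][-1][0] <= gap:
            sessions[-1].append((ts, text))   # continue current session
        else:
            sessions.append([(ts, text)])     # gap too large: new session
    return sessions
```

Explicit session design replaces this heuristic with boundaries you choose, such as one session per workflow or per calendar day.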

Conversation Design

The most important principle is: don’t interrogate. Users hate being asked a series of direct questions like “What are your hobbies? What time do you wake up? Do you prefer phone or email?” This feels like filling out a form. Let patterns emerge instead. Through natural conversation over time, you’ll observe that the user mentions hiking, so they’re an outdoor enthusiast. They’re always on calls before 9am, so they’re an early riser. They complain about email, so they prefer async communication. Persona learns from organic interaction. You don’t need to force extraction.

Personalization Patterns

The simplest pattern is proactive context. When a user opens your app, pull their active goals and psyche traits. Your AI starts each interaction with awareness of who this person is and what they’re working on. The system prompt includes retrieved context, so responses feel personalized from the first message.

Adaptive tone uses psyche traits to adjust communication style. If the user prefers casual communication, use contractions and a friendly voice. If they prefer formal, structured responses, adjust accordingly. This can be queried explicitly using the Ask endpoint with an output schema for communication preferences.

Goal tracking keeps the system aligned with user intent. When users complete something, ingest that update. The goal gets marked naturally, and future retrievals will surface the updated status. The AI always knows what the user is trying to achieve and how far along they are.

Context carry-over references previous episodes naturally. Without Persona, an AI asked “How’s the project going?” would have no context. With Persona, the AI can say “Last we talked, you were debugging that auth issue. Did the fix you mentioned on Tuesday work out?” This is the difference between a tool and a companion.
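The proactive-context pattern amounts to assembling the system prompt from retrieved memory before each LLM call. Below is a minimal sketch assuming a hypothetical retrieval result with traits, goals, and a last-episode summary; this shape is an example, not Persona's actual response schema:

```python
# Hedged sketch of the proactive-context pattern. The context dict shape
# (traits, goals, last_episode) is a hypothetical example of retrieved
# memory, not Persona's actual response schema.

def build_system_prompt(base: str, context: dict) -> str:
    """Append retrieved memory to the base system prompt so every
    interaction starts with awareness of who the user is."""
    parts = [base]
    if context.get("traits"):
        parts.append("Communication style: " + "; ".join(context["traits"]))
    if context.get("goals"):
        parts.append("Active goals: " + "; ".join(context["goals"]))
    if context.get("last_episode"):
        parts.append("Last interaction: " + context["last_episode"])
    return "\n\n".join(parts)
```

Rebuilding the prompt from fresh retrievals on every call is what makes responses feel personalized from the first message, rather than only after the user repeats themselves.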

Anti-Patterns

Over-ingesting is the most common mistake. If you ingest every mouse click, every page view, every keystroke, you end up with graph bloat, slow retrieval, and noisy context that dilutes the signal. Filter for meaningful interactions.

Ignoring timestamps breaks temporal reasoning. When you import historical data without timestamps, all episodes cluster at “now” and temporal chains become meaningless. Always include timestamps for historical data.

Static prompts waste the memory you’ve built. If you use the same system prompt every time without including retrieved context, you get generic responses despite rich memory. Dynamically include context in every LLM call.

Psyche overextraction creates contradictions. If every message generates psyche items, you’ll end up with conflicting preferences and analysis paralysis. Trust the LLM to extract only when genuine identity content exists.
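The timestamp anti-pattern has a simple guard: refuse to import a historical record that lacks its original occurrence time. The payload shape below is a hypothetical example for illustration, not Persona's ingest schema:

```python
from datetime import datetime, timezone

# Illustrative guard against the timestamp anti-pattern. The record and
# episode dict shapes are hypothetical examples, not Persona's schema.

def to_episode(record: dict) -> dict:
    """Convert a historical record to an episode payload, preserving the
    original occurrence time so episodes don't all cluster at 'now'."""
    occurred_at = record.get("occurred_at")
    if occurred_at is None:
        raise ValueError("historical record is missing its original timestamp")
    return {
        "content": record["content"],
        # Temporal metadata: when it happened, not when it was ingested.
        "occurred_at": occurred_at,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
```

Failing loudly at import time is cheaper than discovering later that every imported episode shares one meaningless timestamp.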

The Golden Rule

Memory should feel natural, not mechanical. Your users shouldn’t feel like they’re filling out forms. They shouldn’t notice the memory system at all. They should just notice that your app gets them in a way others don’t. That’s the goal. Build toward that.