Managing conversation context with result.messages and history processors - trim, summarize, filter, and serialize messages across sessions.
Every agent run produces a list of ModelMessage objects. Passing these back into the next run continues the conversation, giving the model full context. History processors let you transform, trim, summarize, or filter messages before each turn - keeping context windows manageable without losing important history.
Use result.messages as the messageHistory for the next run:
```typescript
import { Agent } from "@vibesjs/sdk";
import { anthropic } from "@ai-sdk/anthropic";

const agent = new Agent({
  model: anthropic("claude-sonnet-4-6"),
  systemPrompt: "You are a helpful assistant.",
});

const first = await agent.run("My name is Alice.");

const second = await agent.run("What is my name?", {
  messageHistory: first.messages, // continue the conversation
});

console.log(second.output); // "Your name is Alice."
console.log(second.newMessages); // only messages added in this run
```
result.messages contains the full conversation history (including prior turns). result.newMessages contains only the messages added during this specific run - useful for storing incremental updates.
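The relationship between the two can be sketched in plain TypeScript. ModelMessage is simplified to a role/content pair here for illustration, and splitRunResult is a hypothetical helper, not part of the SDK:

```typescript
// Simplified stand-in for the SDK's ModelMessage type (illustration only).
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// result.messages = prior history + messages added this run;
// result.newMessages = just the slice that was added.
function splitRunResult(prior: ModelMessage[], full: ModelMessage[]) {
  return { messages: full, newMessages: full.slice(prior.length) };
}

const prior: ModelMessage[] = [
  { role: "user", content: "My name is Alice." },
  { role: "assistant", content: "Nice to meet you, Alice!" },
];

const full: ModelMessage[] = [
  ...prior,
  { role: "user", content: "What is my name?" },
  { role: "assistant", content: "Your name is Alice." },
];

const { messages, newMessages } = splitRunResult(prior, full);
console.log(messages.length); // 4 - the full conversation
console.log(newMessages.length); // 2 - only this run's user turn and reply
```

Appending newMessages to your store after each run, rather than rewriting the full history, keeps incremental persistence cheap.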
To persist conversation history across sessions, serialize messages to JSON and restore them later.
```typescript
import { serializeMessages, deserializeMessages } from "@vibesjs/sdk";

// After a run - store to DB or file
const json = serializeMessages(result.messages);
await db.save("session-123", json);

// On next session - restore for continuation
const stored = await db.load("session-123");
const messages = deserializeMessages(stored);

const next = await agent.run("What did we discuss?", {
  messageHistory: messages,
});
```
serializeMessages encodes ModelMessage[] to a JSON string. deserializeMessages is the inverse: it parses and validates the JSON, returning a typed ModelMessage[].
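Conceptually, the round trip is JSON encoding on the way out and parse-plus-validate on the way back in. A minimal sketch, using a simplified ModelMessage type; the real deserializeMessages performs fuller schema validation than the role/content check shown here:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// Encode: what serializeMessages does at its core.
const toJson = (messages: ModelMessage[]): string => JSON.stringify(messages);

// Decode: parse, then validate the shape before trusting stored data.
function fromJson(json: string): ModelMessage[] {
  const parsed = JSON.parse(json);
  if (!Array.isArray(parsed)) throw new Error("expected a message array");
  for (const m of parsed) {
    if (
      !["system", "user", "assistant"].includes(m?.role) ||
      typeof m?.content !== "string"
    ) {
      throw new Error("invalid ModelMessage");
    }
  }
  return parsed as ModelMessage[];
}

const original: ModelMessage[] = [{ role: "user", content: "hello" }];
const restored = fromJson(toJson(original));
console.log(restored[0].content); // "hello"
```

Validating on deserialization matters because stored JSON may come from an older schema or have been edited out of band; failing fast beats sending malformed messages to the model.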
historyProcessors run before each turn and receive the accumulated message history. Each returns a (possibly modified) message list, and the final result is what the model sees. Use them to keep long conversations within the model's context window without discarding important history.
```typescript
const agent = new Agent({
  model,
  historyProcessors: [trimHistoryProcessor(20)],
});
```
Multiple processors are applied in order, each receiving the output of the previous.
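The ordering behavior can be sketched as a left-to-right reduce. The processor implementations below (dropAssistant, keepLast) are hypothetical stand-ins, not SDK exports:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };
type HistoryProcessor = (messages: ModelMessage[]) => ModelMessage[];

// Apply each processor to the previous one's output, in array order.
function applyProcessors(
  messages: ModelMessage[],
  processors: HistoryProcessor[],
): ModelMessage[] {
  return processors.reduce((acc, process) => process(acc), messages);
}

// Hypothetical processors: drop assistant turns, then keep the last N messages.
const dropAssistant: HistoryProcessor = (ms) => ms.filter((m) => m.role !== "assistant");
const keepLast = (n: number): HistoryProcessor => (ms) => ms.slice(-n);

const history: ModelMessage[] = [
  { role: "user", content: "a" },
  { role: "assistant", content: "b" },
  { role: "user", content: "c" },
  { role: "assistant", content: "d" },
  { role: "user", content: "e" },
];

// dropAssistant runs first, so keepLast(2) sees only the user messages.
const out = applyProcessors(history, [dropAssistant, keepLast(2)]);
console.log(out.map((m) => m.content)); // ["c", "e"]
```

Order matters: reversing the array means keepLast(2) first takes ["d", "e"], and dropAssistant then leaves only ["e"].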
summarizeHistoryProcessor condenses older messages once the history exceeds a threshold, replacing them with a single summary message. This preserves semantic content while reducing token count.
```typescript
import { summarizeHistoryProcessor } from "@vibesjs/sdk";

const agent = new Agent({
  model,
  historyProcessors: [
    summarizeHistoryProcessor(model, {
      maxMessages: 10, // default: 20 - summarize when history exceeds this
      summarizePrompt: "Summarize the conversation so far:", // optional custom prompt
    }),
  ],
});
```
The second argument accepts { maxMessages?: number; summarizePrompt?: string }. When the non-system message count exceeds maxMessages, the older portion is summarized and replaced with a single summary message. The most recent floor(maxMessages / 2) messages are always kept verbatim. System messages are always preserved.
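The keep/summarize split described above can be sketched as a pure partitioning function. This is an illustration of the rule, not the SDK's implementation; the model-backed summarization step itself is out of scope here, so the function only returns which messages would be summarized:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// Sketch of the rule: system messages are always preserved, the most recent
// floor(maxMessages / 2) non-system messages are kept verbatim, and the older
// remainder would be replaced by one summary message.
function partitionForSummary(messages: ModelMessage[], maxMessages: number) {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  if (rest.length <= maxMessages) {
    return { system, toSummarize: [] as ModelMessage[], kept: rest };
  }
  const keepCount = Math.floor(maxMessages / 2);
  return {
    system,
    toSummarize: rest.slice(0, rest.length - keepCount),
    kept: rest.slice(rest.length - keepCount),
  };
}

// 1 system message + 12 conversation turns, maxMessages = 10.
const history: ModelMessage[] = [{ role: "system", content: "Be helpful." }];
for (let i = 0; i < 12; i++) {
  history.push({ role: i % 2 === 0 ? "user" : "assistant", content: `turn ${i}` });
}

const { system, toSummarize, kept } = partitionForSummary(history, 10);
console.log(system.length, toSummarize.length, kept.length); // 1 7 5
```

With 12 non-system messages and maxMessages of 10, the threshold is exceeded, so the 5 most recent messages survive verbatim and the older 7 are collapsed into a summary.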