This document tracks every Pydantic AI feature and its status in the TypeScript framework. Use it as a backlog when deciding what to port next. Vibes is designed to stay current with Pydantic AI: an AI agent automatically tracks new releases and ports relevant changes. See auto-updates for details.

Legend: ✅ Ported · 🚧 Partial · ❌ Not ported

Agent API

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| `agent.run()` | `agent.run(prompt, deps=x)` | | Agents | `agent.run(prompt, { deps: x })` |
| `agent.run_stream()` | `agent.run_stream(prompt)` | | Streaming | `agent.stream(prompt)` |
| Agent name | `agent.name` | | Agents | `name` on `AgentOptions` |
| System prompt (static) | `system_prompt="..."` | | Agents | `systemPrompt: "..."` |
| System prompt (dynamic) | `@agent.system_prompt` decorator | | Agents | `agent.addSystemPrompt(fn)` or `systemPrompt: [fn]` |
| Tools | `@agent.tool` / `tools=[...]` | | Tools | `agent.addTool(tool({...}))` |
| Structured output | `result_type: BaseModel` | | Structured Output | `outputSchema: z.object({...})` |
| Result validators | `@agent.result_validator` | | Result Validators | `agent.addResultValidator(fn)` |
| Max retries | `max_retries` / `max_result_retries` | | Agents | `maxRetries` on `AgentOptions` |
| Max turns | `max_turns` | | Agents | `maxTurns` on `AgentOptions` |
| Message history | `message_history=` | | Message History | `{ messageHistory: [...] }` on `run()` |
| Metadata tagging | `metadata=` on run | | Agents | `{ metadata: {...} }` on `run()`/`stream()` - accessible via `ctx.metadata` |
| `Agent.override()` | Context manager swapping model/deps/toolsets | | Testing | `agent.override({ model, tools, ... }).run(prompt)` |
| Event-stream run | `agent.run_stream_events()` | | Streaming | `agent.runStreamEvents(prompt)` - async iterable of typed `AgentStreamEvent` objects |
| End strategy | `end_strategy` | | Agents | `endStrategy: 'early' \| 'exhaustive'` on `AgentOptions`/`RunOptions` |
| Max concurrency | `max_concurrency` | | Agents | `maxConcurrency` on `AgentOptions` - semaphore-based cap on concurrent tool executions |
| `instructions` field | `@agent.instructions` decorator | | Agents | `instructions` on `AgentOptions`/`RunOptions`; re-injected each turn, not stored in message history |
| Model-specific settings | `model_settings=` on `run()` | | Models | `modelSettings: { temperature, maxTokens, ... }` on `AgentOptions` or `RunOptions` |
| Sync run | `agent.run_sync()` | | - | Deno is async-native - not applicable |
| Node-level iteration | `agent.iter()` / `AgentRun` | | - | Not applicable - use `runStreamEvents()` for step-by-step observation in async TypeScript |
| Last run messages | `agent.last_run_messages` | | - | Removed from Pydantic AI; superseded by `result.newMessages` |
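The table describes `maxConcurrency` as a semaphore-based cap on concurrent tool executions. A minimal, callback-based sketch of that idea (illustrative only, not the framework's implementation):

```typescript
// A tiny counting semaphore: at most `permits` tasks run at once;
// excess acquisitions wait in a FIFO queue until a permit is released.
class Semaphore {
  private queue: Array<() => void> = [];
  constructor(private permits: number) {}

  acquire(task: () => void): void {
    if (this.permits > 0) {
      this.permits--;
      task();
    } else {
      this.queue.push(task); // wait for a release
    }
  }

  release(): void {
    const next = this.queue.shift();
    if (next) {
      next(); // hand the freed permit straight to the next waiter
    } else {
      this.permits++;
    }
  }
}

// With 2 permits, the third task only starts after a release.
const started: string[] = [];
const sem = new Semaphore(2);
sem.acquire(() => started.push("a"));
sem.acquire(() => started.push("b"));
sem.acquire(() => started.push("c")); // queued
const beforeRelease = started.length; // 2
sem.release(); // frees a permit, so "c" runs
```

The real implementation gates async tool calls with promises; the queue-and-permit bookkeeping is the same.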

Tools

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Tools with context | `@agent.tool` | | Tools | `tool({ execute: (ctx, args) => ... })` |
| Tool `maxRetries` | `retries=` on `@agent.tool` | | Tools | `maxRetries` on `ToolDefinition` |
| Plain tools (no ctx) | `@agent.tool_plain` | | Tools | `plainTool({ name, description, parameters, execute })` |
| Tool prepare method | `prepare=` on Tool class | | Tools | `prepare: (ctx) => tool \| null` on `ToolDefinition` |
| `args_validator` | `args_validator=` on tool | | Tools | `argsValidator: (args) => void` on `ToolDefinition` |
| `Tool.from_schema()` | Build tool from raw JSON schema | | Tools | `fromSchema({ name, description, jsonSchema, execute })` |
| Multi-modal returns | Return images / audio / binary from tools | | Multi-Modal | `BinaryContent` / `BinaryImage` - returned from `execute`, auto-converted |
| UploadedFile support | `UploadedFile` for provider file uploads | | Multi-Modal | `UploadedFile` type + `uploadedFileSchema` for tool parameters |
| Tool result metadata | Attach metadata keyed by `tool_call_id` | | Tools | `ctx.attachMetadata(toolCallId, meta)` - exposed on `result.toolMetadata` |
| Output functions | Final-action tools (no model feedback loop) | | Structured Output | `outputTool({ ... })` - sets `isOutput: true`, ends run on call |
| Sequential execution | `sequential=True` on tool | | Tools | `sequential: true` on `ToolDefinition` - acquires mutex before executing |
| Deferred tools | Tools requiring human approval before execution | | Human-in-the-Loop | `requiresApproval: true` on `ToolDefinition` - see Deferred Tools section |
| MCP server tools | Connect external MCP servers as tool providers | | MCP | `MCPToolset` wraps any `MCPClient` - see MCP section |
| Docstring extraction | Auto-doc from Python docstrings | | - | No runtime equivalent in TypeScript - use `description` field explicitly |
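Tool `maxRetries` bounds how often a failing tool may be retried before the run errors out. A generic sketch of bounded retries (`MaxRetriesExceededError` matches the Errors table; the helper itself is hypothetical):

```typescript
class MaxRetriesExceededError extends Error {}

// Generic bounded-retry helper: re-invokes `fn` until it succeeds or the
// initial attempt plus `maxRetries` extra attempts are exhausted.
function withRetries<T>(fn: (attempt: number) => T, maxRetries: number): T {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return fn(attempt);
    } catch (err) {
      lastError = err; // in the real framework, the error is fed back to the model
    }
  }
  throw new MaxRetriesExceededError(String(lastError));
}

// Fails twice, then succeeds on the third attempt (well within maxRetries: 3).
const result = withRetries((attempt) => {
  if (attempt < 2) throw new Error("transient failure");
  return "ok";
}, 3);
```

In the framework, "retry" means re-prompting the model with the tool error rather than blindly re-calling, but the cap works the same way.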

Toolsets

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| `FunctionToolset` | Group locally defined function tools | | Toolsets | `new FunctionToolset([tool1, tool2])` |
| `CombinedToolset` | Merge multiple toolsets into one | | Toolsets | `new CombinedToolset(ts1, ts2)` |
| `FilteredToolset` | Filter a toolset based on context | | Toolsets | `new FilteredToolset(ts, (ctx) => boolean)` |
| `PrefixedToolset` | Add prefix to tool names | | Toolsets | `new PrefixedToolset(ts, "prefix_")` |
| `RenamedToolset` | Map new names onto existing tools | | Toolsets | `new RenamedToolset(ts, { old: "new" })` |
| Toolset reuse | Share toolsets across agents | | Toolsets | `Toolset` is a plain interface - pass the same instance to multiple agents |
| Runtime swap | Replace toolsets during testing | | Testing | `agent.override({ toolsets: [...] }).run(prompt)` |
| `PreparedToolset` | Modify entire tool list before each step | | Toolsets | `new PreparedToolset(inner, (ctx, tools) => tools)` - dynamic per-turn |
| `ApprovalRequiredToolset` | Enforce human approval on a toolset | | Human-in-the-Loop | `new ApprovalRequiredToolset(inner)` - all tools get `requiresApproval` |
| `WrapperToolset` | Custom execution behaviour around a toolset | | Toolsets | `class MyWrapper extends WrapperToolset { callTool(...) { ... } }` |
| `ExternalToolset` | Deferred execution outside agent process | | Human-in-the-Loop | `new ExternalToolset([{ name, description, jsonSchema }])` - schema-only |
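The table notes that `Toolset` is a plain interface, with wrappers like `PrefixedToolset` layering behaviour on top. A sketch of the wrapper pattern under an assumed minimal interface (the framework's real `Toolset` shape may differ):

```typescript
// Assumed minimal shapes for illustration only.
interface ToolDef {
  name: string;
  execute: (args: unknown) => unknown;
}
interface Toolset {
  tools(): ToolDef[];
}

// A PrefixedToolset-style wrapper: renames every tool by prepending a
// prefix while delegating execution to the wrapped toolset unchanged.
class PrefixedToolsetSketch implements Toolset {
  constructor(private inner: Toolset, private prefix: string) {}
  tools(): ToolDef[] {
    return this.inner.tools().map((t) => ({ ...t, name: this.prefix + t.name }));
  }
}

const base: Toolset = {
  tools: () => [{ name: "search", execute: () => "hit" }],
};
const prefixed = new PrefixedToolsetSketch(base, "web_");
const names = prefixed.tools().map((t) => t.name);
```

Because wrappers take and return the same interface, they compose freely: a filtered, prefixed, approval-gated toolset is just three wrappers around one inner instance.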

Deferred tools (human-in-the-loop & external execution)

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| `requires_approval=True` | Mark a tool as approval-required | | Human-in-the-Loop | `requiresApproval: true` on `ToolDefinition` or `tool()` options |
| `ApprovalRequired` exception | Pause agent, surface pending calls to caller | | Human-in-the-Loop | `ApprovalRequiredError` - catch it, inspect `.requests`, resume with results |
| `DeferredToolRequests` | Container of pending tool calls needing approval | | Human-in-the-Loop | `DeferredToolRequests` class with `.requests` array |
| `DeferredToolResults` | Provide approved (or overridden) results | | Human-in-the-Loop | `agent.resume(deferred, { results: [...] })` or `run(..., { deferredResults })` |
| Argument override on resume | Modify args during approval before execution | | Human-in-the-Loop | `argsOverride` field on `DeferredToolResult` |
| `CallDeferred` exception | Defer a tool call to an external process | | Human-in-the-Loop | `ExternalToolset` raises `ApprovalRequiredError` for all tools |
| `ExternalToolset` | Accept raw JSON schema tools for deferred calls | | Toolsets | `new ExternalToolset([{ name, description, jsonSchema }])` |
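The approval flow above works by raising an error that carries the pending calls; the caller inspects them, collects a human decision, and resumes. A self-contained simulation of that control flow (class and field names follow the table but are sketches, not the real API):

```typescript
interface ToolCallRequest {
  toolCallId: string;
  toolName: string;
  args: unknown;
}

// Raised when a tool marked requiresApproval is about to execute.
class ApprovalRequiredErrorSketch extends Error {
  constructor(public requests: ToolCallRequest[]) {
    super("approval required");
  }
}

function runTool(req: ToolCallRequest, approved: Set<string>): string {
  if (!approved.has(req.toolCallId)) throw new ApprovalRequiredErrorSketch([req]);
  return `executed ${req.toolName}`;
}

// First attempt pauses; the caller approves the pending call and resumes.
const call: ToolCallRequest = { toolCallId: "c1", toolName: "delete_row", args: { id: 7 } };
const approved = new Set<string>();
let pending: ToolCallRequest[] = [];
let outcome = "";
try {
  outcome = runTool(call, approved);
} catch (e) {
  if (e instanceof ApprovalRequiredErrorSketch) {
    pending = e.requests;                 // surface to a human
    approved.add(pending[0].toolCallId);  // human says yes
    outcome = runTool(call, approved);    // resume
  }
}
```

In the framework the resume step is `agent.resume(deferred, { results: [...] })`; the error-as-pause mechanism is the part sketched here.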

Output & Structured Results

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Single schema output | `result_type: BaseModel` | | Structured Output | `outputSchema: z.object({...})` via `final_result` tool |
| Result validators | `@agent.result_validator` | | Result Validators | `addResultValidator(fn)` - throw to retry |
| `result.all_messages()` | Full message history | | Message History | `result.messages` (full) + `result.newMessages` (this run) |
| `result.new_messages()` | Messages added in this run only | | Message History | `result.newMessages` on `RunResult` and `StreamResult` |
| `@agent.output_validator` | Validate output post-parse | | Result Validators | Covered by `addResultValidator` |
| Union output types | `output_type=[TypeA, TypeB]` | | Structured Output | `outputSchema: [schemaA, schemaB]` - registers `final_result_0`, `_1` |
| Native structured output | `NativeOutput` marker class | | Structured Output | `outputMode: 'native'` - uses AI SDK `Output.object()` / JSON mode |
| Prompted output mode | `PromptedOutput` marker class | | Structured Output | `outputMode: 'prompted'` - schema injected into system prompt |
| Streaming structured output | Partial validation as output streams | | Streaming | `result.partialOutput` async iterable on `StreamResult` |
| Message serialization | `ModelMessagesTypeAdapter` | | Message History | `serializeMessages(msgs)` / `deserializeMessages(json)` |
| Disable schema prompt | `template=False` on output marker | | Structured Output | `outputTemplate: false` on `AgentOptions` |
| `BinaryImage` output | Generate images as output type | | Multi-Modal | `outputSchema: BINARY_IMAGE_OUTPUT` - first tool result with `image/*` MIME type becomes the run output as `BinaryContent` |

Message history

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Pass history to next run | `message_history=result.all_messages()` | | Message History | `{ messageHistory: result.messages }` |
| `new_messages()` | Slice of messages from current run only | | Message History | `result.newMessages` on `RunResult` and `StreamResult` |
| Cross-model compatibility | Messages work across providers | | Message History | AI SDK `CoreMessage` is provider-agnostic |
| History processors | `history_processors=[...]` | | Message History | `historyProcessors: [trimHistoryProcessor(n), ...]` on `AgentOptions` |
| Message serialization | JSON roundtrip via `ModelMessagesTypeAdapter` | | Message History | `serializeMessages()` / `deserializeMessages()` |
| Token-aware trimming | Keep last N messages by token count | | Message History | `tokenTrimHistoryProcessor(maxTokens, tokenCounter?)` |
| LLM-based summarization | Summarize old turns via a model call | | Message History | `summarizeHistoryProcessor(model, { maxMessages?, summarizePrompt? })` |
| Privacy filtering | Strip sensitive fields before model call | | Message History | `privacyFilterProcessor(rules)` - regex + field-path redaction |
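Token-aware trimming keeps the most recent messages that fit a token budget, scanning from the end of the history so fresh context survives. A sketch with a naive whitespace token counter (the real processor accepts a pluggable `tokenCounter`; the message shape here is assumed):

```typescript
interface Msg {
  role: string;
  content: string;
}

// Keep the newest messages whose combined token count fits `maxTokens`.
function tokenTrimSketch(
  messages: Msg[],
  maxTokens: number,
  countTokens: (m: Msg) => number = (m) => m.content.split(/\s+/).length,
): Msg[] {
  const kept: Msg[] = [];
  let budget = maxTokens;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (cost > budget) break; // everything older is dropped too
    budget -= cost;
    kept.unshift(messages[i]);
  }
  return kept;
}

const history: Msg[] = [
  { role: "user", content: "one two three four" }, // 4 tokens
  { role: "assistant", content: "five six" },      // 2 tokens
  { role: "user", content: "seven" },              // 1 token
];
const trimmed = tokenTrimSketch(history, 3); // keeps only the last two messages
```

A production counter would use the model's tokenizer; whitespace splitting is just a stand-in to keep the sketch self-contained.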

Dependencies

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Typed deps | `RunContext[MyDeps]` | | Dependencies | `Agent<MyDeps, TOutput>` - deps typed via generic parameter |
| Deps in tools | `ctx: RunContext[MyDeps]` in tool | | Dependencies | `ctx.deps` in `execute(ctx, args)` |
| Deps in system prompts | `@agent.system_prompt` with context | | Dependencies | `agent.addSystemPrompt((ctx) => ...)` |
| Deps in result validators | `@agent.result_validator` with context | | Dependencies | `agent.addResultValidator((ctx, result) => ...)` |
| RunContext accessors | `.deps`, `.usage`, `.metadata` | | Dependencies | Full `RunContext<TDeps>` type with all accessors |
| Override deps in tests | Pass fake deps via `agent.override()` | | Testing | `agent.override({ deps: fakeDeps }).run(prompt)` |
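Deps typing flows through a generic run context, so tools get compile-time access to `ctx.deps`. A minimal sketch of the pattern (the context shape here is assumed, not the framework's full `RunContext<TDeps>`):

```typescript
// Assumed shape: the context carries typed dependencies plus run metadata.
interface RunContextSketch<TDeps> {
  deps: TDeps;
  metadata: Record<string, unknown>;
}

interface MyDeps {
  apiBase: string;
}

// A tool's execute function receives the typed context: `ctx.deps.apiBase`
// type-checks here, while a misspelled field would be a compile error.
function buildUrl(ctx: RunContextSketch<MyDeps>, path: string): string {
  return `${ctx.deps.apiBase}/${path}`;
}

const ctx: RunContextSketch<MyDeps> = {
  deps: { apiBase: "https://api.example.com" },
  metadata: {},
};
const url = buildUrl(ctx, "users");
```

Swapping `deps` for a fake object of the same type is all `agent.override({ deps: fakeDeps })` needs, which is why the testing story falls out of the generic for free.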

Usage & Limits

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Usage tracking | `result.usage()` | | Results | `result.usage` - prompt/completion tokens + requests |
| `UsageLimits` | Cap request count, input tokens, output tokens, tool calls | | Results | `usageLimits: { maxRequests, maxInputTokens, ... }` on `AgentOptions` or `run()` |
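`UsageLimits` is a set of hard caps checked as the run progresses. A sketch of the request-count check (the error name follows the Errors table below; the tracker class and field names are illustrative):

```typescript
class UsageLimitExceededSketch extends Error {}

// Tracks requests against an optional cap, throwing once the cap is breached.
class UsageTracker {
  requests = 0;
  constructor(private limits: { maxRequests?: number }) {}

  recordRequest(): void {
    this.requests++;
    if (this.limits.maxRequests !== undefined && this.requests > this.limits.maxRequests) {
      throw new UsageLimitExceededSketch(`maxRequests=${this.limits.maxRequests} exceeded`);
    }
  }
}

const tracker = new UsageTracker({ maxRequests: 2 });
tracker.recordRequest();
tracker.recordRequest();
let limited = false;
try {
  tracker.recordRequest(); // third request breaches the cap
} catch (e) {
  limited = e instanceof UsageLimitExceededSketch;
}
```

Token caps work the same way, just incremented by each response's token counts instead of by one.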

Errors

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| `UserError` | Raised for bad agent config | | Error Handling | |
| `UnexpectedModelBehavior` | Malformed model response | | Error Handling | |
| `MaxRetriesExceeded` | Too many tool/output retries | | Error Handling | |
| `MaxTurnsReached` | Hit `maxTurns` cap | | Error Handling | |
| `UsageLimitExceeded` | Hit a `UsageLimits` cap | | Results | |
| `ApprovalRequiredError` | Tool requires human approval | | Human-in-the-Loop | |
| `ModelRequestsDisabledError` | Thrown when a model request is attempted after `setAllowModelRequests(false)` | | Testing | |
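Distinct error classes let callers branch with `instanceof` rather than string matching. A sketch of that pattern using the names above (the shared base class is an assumption, not a documented hierarchy):

```typescript
// A hypothetical common base class makes "any framework error" one check.
class AgentErrorSketch extends Error {}
class MaxTurnsReachedSketch extends AgentErrorSketch {}
class UnexpectedModelBehaviorSketch extends AgentErrorSketch {}

function describe(err: unknown): string {
  if (err instanceof MaxTurnsReachedSketch) return "hit the turn cap";
  if (err instanceof AgentErrorSketch) return "framework error";
  return "unknown error";
}

const specific = describe(new MaxTurnsReachedSketch());
const generic = describe(new UnexpectedModelBehaviorSketch());
const foreign = describe(new Error("something else"));
```

Order matters: the most specific `instanceof` check must come first, since subclasses also satisfy the base-class check.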

MCP (Model Context Protocol)

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| `MCPServerStdio` | Subprocess stdio transport | | MCP | `MCPStdioClient` using `@modelcontextprotocol/sdk` |
| `MCPServerStreamableHTTP` | HTTP Streamable transport | | MCP | `MCPHttpClient` using `StreamableHTTPClientTransport` |
| `MCPServerSSE` | Server-Sent Events transport (deprecated) | | - | Prefer `MCPHttpClient` (StreamableHTTP) |
| Dynamic tool discovery | Auto-convert MCP tools to Pydantic AI tools | | MCP | `MCPToolset.tools()` fetches and converts MCP tools automatically |
| Elicitation support | MCP server can request structured input | | MCP | `elicitationCallback` option on `MCPToolset` |
| Server instructions | Access MCP server instructions post-connect | | MCP | `mcpToolset.getServerInstructions()` |
| Tool caching | Cache discovered tools with invalidation | | MCP | `toolCacheTtlMs` option on `MCPToolset` (default 60 s) |
| Multi-server support | Mount multiple MCP servers simultaneously | | MCP | `MCPManager` - add servers, call `.connect()`, use as a `Toolset` |
| Config file loading | Load MCP config with env variable references | | MCP | `loadMCPConfig(path)` - supports `${ENV_VAR}` interpolation |
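`loadMCPConfig` supports `${ENV_VAR}` references inside the config file. A sketch of just the interpolation step (the real loader also reads the file and validates the result; this helper is illustrative):

```typescript
// Replace every ${NAME} with the matching entry from `env`,
// throwing on references that have no value.
function interpolateEnv(raw: string, env: Record<string, string | undefined>): string {
  return raw.replace(/\$\{([A-Z0-9_]+)\}/g, (_match, name: string) => {
    const value = env[name];
    if (value === undefined) throw new Error(`missing env var: ${name}`);
    return value;
  });
}

const config = interpolateEnv(
  '{"headers": {"Authorization": "Bearer ${MCP_TOKEN}"}}',
  { MCP_TOKEN: "secret123" },
);
```

Interpolating before JSON parsing keeps secrets out of the checked-in config file while leaving the file itself valid JSON.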

Testing

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Mock model | `MockLanguageModelV1` (from `ai/test`) | | Testing | Equivalent to Pydantic AI's `TestModel` |
| Multi-turn mock | `mockValues(...)` | | Testing | Cycle through responses across turns |
| Stream mock | `convertArrayToReadableStream` | | Testing | Build mock stream chunks |
| `Agent.override()` | Swap model/deps/toolsets in tests without modifying app code | | Testing | `agent.override({ model: mockModel }).run(prompt)` |
| `capture_run_messages()` | Context manager to inspect all model request/response objects | | Testing | `captureRunMessages(() => agent.run(...))` returns `messages[][]` |
| `ALLOW_MODEL_REQUESTS=False` | Global flag to prevent accidental real API calls | | Testing | `setAllowModelRequests(false)` - throws `ModelRequestsDisabledError` |
| `TestModel` | Auto-generates valid structured data from schema, calls all tools | | Testing | `new TestModel()` / `createTestModel({ outputSchema })` - schema-aware |
| `FunctionModel` | Custom function drives model responses | | Testing | `new FunctionModel((params) => result)` - full control per turn |
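Multi-turn mocks cycle through a list of canned responses, one per model call. A dependency-free sketch of that cycling behaviour (the real helpers come from `ai/test`; this function is a stand-in):

```typescript
// Returns a function that yields each canned value in order,
// wrapping around when the list is exhausted.
function cycleValues<T>(values: T[]): () => T {
  let i = 0;
  return () => values[i++ % values.length];
}

const nextResponse = cycleValues(["first turn", "second turn"]);
const turns = [nextResponse(), nextResponse(), nextResponse()]; // third call wraps
```

Wrapping (rather than throwing when the list runs out) keeps multi-turn tests from failing just because the agent made one more call than expected; whether that is the right default depends on how strict the test should be.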

Multi-Agent

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Agent-as-tool | Tool that calls `child.run(usage=ctx.usage)` internally | | Multi-Agent | Pattern: `tool({ execute: async (ctx, { prompt }) => { const r = await child.run(prompt, { deps: ctx.deps }); ... } })` |
| Usage aggregation | Pass `usage=ctx.usage` to sub-agent to merge costs | | Multi-Agent | Manually add sub-agent usage to `ctx.usage` inside the tool |
| Programmatic hand-off | App code dispatches agents sequentially | | Multi-Agent | Documented pattern |
| `pydantic_graph` - FSM | Typed state machine with `BaseNode` | | Graph | `Graph`, `BaseNode`, `GraphRun` |
| Graph state persistence | `SimpleStatePersistence`, `FileStatePersistence` | | Graph | `MemoryStatePersistence`, `FileStatePersistence` - pause/resume across restarts |
| Graph visualization | Mermaid diagram generation | | Graph | `toMermaid(graph, nodes)` returns a Mermaid flowchart string |
| `Graph.iter()` / `.next()` | Manual stepping through graph nodes | | Graph | `graph.runIter(state, startNode)` returns `GraphRun` with a `.next()` method |
| A2A protocol | `agent.to_a2a()` - expose agent as ASGI A2A server | | - (docs coming soon) | `new A2AAdapter(agent, opts)` - JSON-RPC handler with `tasks/send`, `tasks/get`, `tasks/cancel`, agent card at `/.well-known/agent.json` |
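`toMermaid` renders a graph as a Mermaid flowchart string. A sketch of the core edge-to-text conversion (the node and edge shapes here are assumed; the real function takes the framework's `Graph` and node classes):

```typescript
interface EdgeSketch {
  from: string;
  to: string;
}

// Emit a Mermaid flowchart: one header line, then one arrow per edge.
function toMermaidSketch(edges: EdgeSketch[]): string {
  const lines = edges.map((e) => `  ${e.from} --> ${e.to}`);
  return ["flowchart TD", ...lines].join("\n");
}

const diagram = toMermaidSketch([
  { from: "Start", to: "Plan" },
  { from: "Plan", to: "Done" },
]);
```

The resulting string can be pasted into any Mermaid renderer; `flowchart TD` requests a top-down layout.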

Observability

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Logfire integration | Auto-traces runs, turns, and tool calls | | - | Not applicable (Logfire is Python-only) |
| OpenTelemetry support | OTel Gen-AI semantic conventions | | OpenTelemetry | `instrumentAgent(agent, opts)` - uses AI SDK `experimental_telemetry` |
| Run-level spans | Structured spans per run with metadata | | OpenTelemetry | AI SDK auto-creates run spans when telemetry is enabled |
| Tool-level spans | Span per tool call with args and result | | OpenTelemetry | AI SDK auto-creates tool spans when telemetry is enabled |
| HTTPX instrumentation | Capture raw HTTP request/response | | - | Not applicable (no HTTPX in Deno/Node) |
| Custom TracerProvider | Bring your own OTel tracer | | OpenTelemetry | Pass `tracer` in `TelemetrySettings` via `instrumentAgent` or `modelSettings` |
| Content exclusion | Strip prompt/response from spans | | OpenTelemetry | `excludeContent: true` on `InstrumentationOptions` - sets `recordInputs`/`recordOutputs: false` |

Evaluation framework (Pydantic Evals)

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Datasets & Cases | `Dataset`, `Case` - typed test scenarios | | Evals | `Dataset.fromArray()`, `Dataset.fromJSON()`, `Dataset.fromFile()` |
| Built-in evaluators | Exact match, type validation | | Evals | `equalsExpected()`, `equals()`, `contains()`, `isInstance()`, `isValidSchema()`, `maxDuration()`, `hasMatchingSpan()`, `custom()` |
| LLM-as-judge | LLM-based evaluators for subjective qualities | | Evals | `llmJudge({ rubric, model })` + helpers `judgeOutput()`, `judgeInputOutput()`, etc. |
| Custom evaluators | Domain-specific scoring functions | | Evals | `custom(name, fn)` factory or implement the `Evaluator` interface directly |
| Report-level evaluators | Confusion matrix, precision/recall, ROC AUC, KS | | Evals | `confusionMatrix()`, `precisionRecall()`, `rocAuc()`, `kolmogorovSmirnov()` |
| Span-based evaluation | Score runs via OTel trace spans | | Evals | `SpanTree`, `hasMatchingSpan()` - build `SpanTree.fromSpanData()` from captured spans |
| Experiments | Run and compare datasets across model/prompt combos | | Evals | `dataset.evaluate(task, opts)` or `runExperiment({ dataset, task, evaluators })` |
| Logfire integration | Visualize eval results in Logfire | | - | Not applicable (Logfire is Python-only) |
| Async + concurrency | Configurable concurrency and retries for evals | | Evals | `maxConcurrency` + `maxRetries` on `EvaluateOptions`; semaphore-based |
| Dataset generation | LLM-generated test cases from Zod schemas | | Evals | `generateDataset({ model, inputSchema, nExamples })` |
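At its core, an evaluator is a scoring function applied to each case's expected and actual output. A miniature, dependency-free harness showing that shape (the real API runs through `Dataset`/`EvaluateOptions`; everything here is a sketch):

```typescript
interface CaseSketch {
  input: string;
  expected: string;
}

// 1 = pass, 0 = fail; real evaluators can return graded scores too.
type EvaluatorSketch = (expected: string, actual: string) => number;

// Run every case through the task and average the evaluator's scores.
function evaluateSketch(
  cases: CaseSketch[],
  task: (input: string) => string,
  evaluator: EvaluatorSketch,
): number {
  const scores = cases.map((c) => evaluator(c.expected, task(c.input)));
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}

const exactMatch: EvaluatorSketch = (expected, actual) => (expected === actual ? 1 : 0);

// A toy "task" (uppercasing) stands in for an agent run.
const score = evaluateSketch(
  [
    { input: "hi", expected: "HI" },
    { input: "ok", expected: "OK" },
    { input: "no", expected: "nope" }, // deliberately failing case
  ],
  (input) => input.toUpperCase(),
  exactMatch,
);
```

An LLM-as-judge evaluator has exactly the same signature; it just computes the score with a model call against a rubric instead of string equality.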

Durable execution

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Temporal integration | `TemporalAgent` - offloads model/tool calls to activities | | Temporal | `TemporalAgent` + `MockTemporalAgent` - requires Node.js for the Temporal worker |
| DBOS integration | Postgres-backed state checkpointing | | - | Not implemented (skipped by design) |
| Prefect integration | Transactional task semantics with cache keys | | - | Not implemented (skipped by design) |

AG-UI Protocol

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| AG-UI event streaming | `AGUIAdapter.run_stream()` - agent-to-UI events | | AG-UI | `AGUIAdapter.handleRequest(input)` returns an SSE `Response` |
| Follow-up messaging | Continue conversation after tool call results | | AG-UI | `input.messages` history passed as `messageHistory` automatically |
| Structured event types | Typed event payloads for all agent actions | | AG-UI | `AGUIEvent` discriminated union with 16 event variants |

Multi-Modal Support

| Feature | Pydantic AI | Status | Docs | Notes |
| --- | --- | --- | --- | --- |
| Image input to tools | Pass images into tool parameters | | Multi-Modal | `binaryContentSchema` in tool parameters; `BinaryContent` in `execute` |
| Audio / video input | Audio and video as tool parameters | | Multi-Modal | `BinaryContent` with audio/video MIME types; `isAudioContent()` type guard |
| Document input | PDFs and documents as tool parameters | | Multi-Modal | `BinaryContent` with `application/pdf` etc.; `isDocumentContent()` guard |
| `UploadedFile` | File reference for provider file uploads | | Multi-Modal | `UploadedFile` type + `uploadedFileSchema` + `uploadedFileToToolResult()` |
| `BinaryImage` output | Agent returns a generated image | | Multi-Modal | `outputSchema: BINARY_IMAGE_OUTPUT` - agent returns `BinaryContent` when a tool produces an `image/*` result |
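Guards like `isAudioContent()` route binary payloads by MIME type. A sketch of such guards over an assumed `BinaryContent` shape (the real type and the exact matching rules may differ):

```typescript
// Assumed minimal shape of a binary payload.
interface BinaryContentSketch {
  mimeType: string;
  data: Uint8Array;
}

// Narrowing guards keyed off the MIME type.
const isImageContent = (c: BinaryContentSketch): boolean => c.mimeType.startsWith("image/");
const isAudioContent = (c: BinaryContentSketch): boolean => c.mimeType.startsWith("audio/");
const isDocumentContent = (c: BinaryContentSketch): boolean =>
  c.mimeType === "application/pdf" || c.mimeType.startsWith("text/");

const png: BinaryContentSketch = { mimeType: "image/png", data: new Uint8Array() };
const pdf: BinaryContentSketch = { mimeType: "application/pdf", data: new Uint8Array() };
const imageHit = isImageContent(png);
const docHit = isDocumentContent(pdf);
```

This is also the mechanism behind `BINARY_IMAGE_OUTPUT` in the table above: the first tool result whose MIME type matches `image/*` is promoted to the run output.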