The agent() wrapper adds significant value: tools, structured output, result validators, multi-turn conversation management, RunContext dependency injection, and human-in-the-loop flows. But sometimes you want none of that - just a single model call with a prompt and a response.
For those cases, reach for the Vercel AI SDK’s generateText and streamText directly.
Vibes does not re-export generateText or streamText. Import them from "ai" and your provider package (e.g., "@ai-sdk/anthropic") directly.
When to use agent vs. direct
Use the Agent when you need
- Tools - the model should call functions with side effects
- outputSchema - structured JSON output with Zod validation and retries
- Multi-turn conversation - message history carried across turns
- Result validators - custom post-processing that can trigger retries
- RunContext dependencies - database clients, loggers, auth context injected via DI
- Human-in-the-loop - approval gates before sensitive tool calls
Use direct model calls when you need
- One-shot text generation - summarize, translate, classify with no side effects
- Classification tasks - "Is this text positive, negative, or neutral?" with a simple string response
- Template filling - populate a template from structured data, no tools needed
- Simple summarization - compress a document into a paragraph
- Scripts and background tasks - cron jobs, batch processing where agent overhead isn’t warranted
generateText
Use generateText for one-shot prompts where you need the complete response before proceeding.
generateText returns a GenerateTextResult with the text field containing the model’s response, plus usage statistics and finish reason.
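A minimal sketch of a one-shot call (the model id is illustrative, and running this requires a provider API key in your environment):

```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const notes = "Fixed crash on startup. Added dark mode. Improved sync speed.";

// One-shot generation: no tools, no conversation state, no agent loop.
const result = await generateText({
  model: anthropic("claude-sonnet-4-5"), // any provider model works here
  prompt: `Summarize the following release notes in one paragraph:\n${notes}`,
});

console.log(result.text);         // the model's response
console.log(result.usage);        // token usage statistics
console.log(result.finishReason); // e.g. "stop"
```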
Classification example
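A sketch of what such a classification call might look like (the helper name, system prompt, and model id are illustrative):

```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Hypothetical helper: constrain the model to a single-word label.
async function classifySentiment(input: string): Promise<string> {
  const { text } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    system: "Reply with exactly one word: positive, negative, or neutral.",
    prompt: input,
  });
  return text.trim().toLowerCase();
}

// await classifySentiment("The new release fixed every bug I reported!")
```

Because the response is a plain string, there is no Zod schema, no validation retry, and nothing to configure beyond the prompt itself.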
streamText
Use streamText when you want to display tokens as they arrive - ideal for chat UIs, long-form generation, or any scenario where perceived latency matters.
streamText returns a result object immediately. The textStream async iterable yields string chunks. Access result.text (a Promise) to await the full response after streaming completes.
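A minimal streaming sketch (model id illustrative; requires a provider API key):

```typescript
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// streamText returns immediately; the request runs in the background.
const result = streamText({
  model: anthropic("claude-sonnet-4-5"),
  prompt: "Write a short product announcement for our new CLI.",
});

// Print tokens as they arrive.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// After the stream ends, the full response is available as a Promise.
const fullText = await result.text;
```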
Streaming with system prompt
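The same streaming call with a system prompt attached (both prompts are illustrative):

```typescript
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = streamText({
  model: anthropic("claude-sonnet-4-5"),
  // The system prompt shapes tone and style for the whole response.
  system: "You are a concise technical writer. Use plain language and short sentences.",
  prompt: "Explain what a webhook is.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```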
Agent vs. direct: side-by-side
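A rough comparison of the two paths for the same task. The direct half uses the real AI SDK API; the agent half is sketched in comments only, since the exact agent() signature belongs to the Vibes docs:

```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Direct call: one request, one response, nothing else.
const { text } = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  prompt: "Translate to French: Good morning",
});

// Agent path (shape illustrative - see the Vibes agent() docs for the
// actual signature): tools, output validation, retries, and multi-turn
// state all come along for the ride.
// const translator = agent({ model, tools, outputSchema, ... });
// const result = await translator.run("Translate to French: Good morning");
```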
The direct path skips the agent loop entirely - no tool routing, no output validation retries, no multi-turn state. That simplicity is both its strength (less overhead, less complexity) and its limitation (no recovery if the output isn't what you expected).
Using both in the same project
It is completely valid - and often the right call - to mix agents and direct model calls in the same codebase. Use generateText/streamText for internal utilities, preprocessing steps, or simple transformations where the full agent machinery is overkill.
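For example, a preprocessing helper can live alongside your agents without touching them (the helper name and model id are hypothetical):

```typescript
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Hypothetical utility: condense a document before handing it to an agent,
// keeping the agent's context window small.
export async function summarizeForContext(doc: string): Promise<string> {
  const { text } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: `Compress this document into one paragraph:\n\n${doc}`,
  });
  return text;
}
```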
For the full generateText and streamText API - including temperature, max tokens, stop sequences, and provider-specific options - see the Vercel AI SDK documentation.