Extended reasoning lets a model spend additional tokens on internal reasoning before generating a response, producing more deliberate, multi-step output at the cost of higher token usage and latency. Use it for complex multi-step problems, mathematical reasoning, and deep analytical tasks where a more careful thought process improves answer quality.
Vibes does not have a dedicated thinking API. Extended reasoning is configured at the model constructor level using provider-specific options from the Vercel AI SDK. The framework passes these options through automatically.

Anthropic extended thinking

Pass a thinking option directly to the anthropic() model constructor. Set budgetTokens to the maximum number of tokens the model can use for internal reasoning.
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@vibesjs/sdk";

const model = anthropic("claude-opus-4-5", {
  thinking: { type: "enabled", budgetTokens: 10000 },
});

const agent = new Agent({
  model,
  modelSettings: {
    maxTokens: 16000,  // must exceed thinking.budgetTokens
  },
});

const result = await agent.run(
  "Prove that the square root of 2 is irrational."
);
console.log(result.output);
maxTokens in modelSettings must be greater than thinking.budgetTokens; if maxTokens is too low, the API returns an error. For example, if budgetTokens is 10000, set maxTokens to at least 12000–16000 to leave room for the final response.
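The headroom rule above can be enforced before constructing the agent. The helper below is hypothetical (not part of the Vibes SDK or the AI SDK); it only illustrates the budgetTokens < maxTokens constraint:

```typescript
// Hypothetical helper: derives a maxTokens value that leaves headroom
// above the thinking budget for the visible response. Illustrative only;
// not an API of @vibesjs/sdk or the Vercel AI SDK.
function safeMaxTokens(budgetTokens: number, responseHeadroom = 4000): number {
  if (budgetTokens <= 0) {
    throw new Error("budgetTokens must be positive");
  }
  // Reserve headroom for the final answer on top of the thinking budget.
  return budgetTokens + responseHeadroom;
}

console.log(safeMaxTokens(10000)); // → 14000, i.e. 4000 tokens of headroom
```

You could then pass the result as modelSettings.maxTokens instead of hard-coding 16000.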

Google extended thinking

Some Gemini models enable extended reasoning by model selection - simply choosing a “thinking” model variant is sufficient. No additional configuration is required.
import { google } from "@ai-sdk/google";
import { Agent } from "@vibesjs/sdk";

const model = google("gemini-2.5-flash-thinking", {
  // Gemini thinking models enable extended reasoning by model selection.
  // No additional config required for default thinking mode.
});

const agent = new Agent({ model });

const result = await agent.run("Explain the P vs NP problem.");
console.log(result.output);
For other Gemini models that support configurable thinking, use the @ai-sdk/google provider’s constructor options in the same pattern as Anthropic.
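As a sketch of that pattern only: the option names thinkingConfig and thinkingBudget below are assumptions mirroring the Anthropic example, not confirmed @ai-sdk/google constructor options. Check the provider's documentation for the exact names your version supports.

```typescript
import { google } from "@ai-sdk/google";
import { Agent } from "@vibesjs/sdk";

// Assumed option names (thinkingConfig / thinkingBudget) shown for
// illustration, following the same constructor-level pattern as Anthropic.
const model = google("gemini-2.5-flash", {
  thinkingConfig: { thinkingBudget: 8192 }, // hypothetical constructor option
});

const agent = new Agent({ model });

const result = await agent.run("Summarize the halting problem.");
console.log(result.output);
```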

Why is thinking not on AgentOptions?

The framework’s ModelSettings maps to Vercel AI SDK’s top-level generateText options - fields like maxTokens, temperature, and topP. These are standard options supported by all providers. Provider-specific features like thinking go through the model constructor’s providerOptions, which the AI SDK processes before the framework is involved. The Vibes framework does not intercept or re-expose these options - they flow from the model constructor directly to the provider. This means:
  • Correct: configure thinking on the model constructor: anthropic("model", { thinking: ... })
  • Incorrect: look for a thinking key on AgentOptions or modelSettings - it does not exist
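The two bullets above can be shown side by side in one configuration sketch, using the Anthropic constructor from the earlier example:

```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@vibesjs/sdk";

// Correct: thinking is a provider-specific option on the model constructor.
const model = anthropic("claude-opus-4-5", {
  thinking: { type: "enabled", budgetTokens: 10000 },
});

const agent = new Agent({
  model,
  modelSettings: {
    maxTokens: 16000, // standard AI SDK option: belongs in modelSettings
    // thinking: { ... } // Incorrect: no thinking key exists here
  },
});
```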

API reference

| Configuration | Location / value | Notes |
| --- | --- | --- |
| thinking option | Model constructor, e.g. anthropic("model", { thinking: ... }) | Provider-specific; not on AgentOptions |
| modelSettings.maxTokens | AgentOptions.modelSettings or run-time modelSettings | Must exceed budgetTokens when thinking is enabled |
| thinking.type | "enabled" | Required field for Anthropic thinking |
| thinking.budgetTokens | number | Max tokens for internal reasoning; maxTokens must be higher |