Vibes uses the Vercel AI SDK model abstraction, accepting any AI SDK-compatible provider. Swap models at construction time or override per run.
Vibes accepts any Vercel AI SDK `LanguageModel` instance as its `model` option. Every provider that ships a Vercel AI SDK adapter - Anthropic, OpenAI, Google, Groq, Mistral, Ollama, and custom OpenAI-compatible endpoints - works with Vibes out of the box. The model is passed at agent construction and can be overridden per run. For installation and initial setup, see Getting Started.
All provider packages follow the Vercel AI SDK spec. Install the provider package for your chosen model, set its API key environment variable, and pass the model instance to `Agent`.
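Since a missing API key typically only surfaces on the first model call, it can help to check the environment variable up front. This is a minimal sketch - the `requireEnv` helper is hypothetical, not part of Vibes or the AI SDK:

```typescript
// Hypothetical helper: fail fast when a provider API key is missing,
// instead of erroring on the first model call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: verify the key before constructing the agent.
// requireEnv("GOOGLE_GENERATIVE_AI_API_KEY");
```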
```ts
import { Agent } from "@vibesjs/sdk";
import { google } from "@ai-sdk/google";

// Requires: GOOGLE_GENERATIVE_AI_API_KEY environment variable
const agent = new Agent({
  model: google("gemini-2.0-flash"),
  systemPrompt: "You are helpful.",
});
```
```ts
import { Agent } from "@vibesjs/sdk";
import { ollama } from "ollama-ai-provider";

// No API key required - connects to your local Ollama server
const agent = new Agent({
  model: ollama("llama3.2"),
  systemPrompt: "You are helpful.",
});
```
Use `agent.override({ model })` to swap the model for a single run without modifying the agent. This is useful for routing different prompts to different models - for example, a cheaper model for classification and a more capable model for generation:
```ts
const cheapModel = groq("llama-3.3-70b-versatile");
const richModel = anthropic("claude-sonnet-4-6");

// Use cheap model for simple classification
const label = await agent
  .override({ model: cheapModel })
  .run("Classify: is this spam? " + email);

// Use powerful model for generation
const reply = await agent
  .override({ model: richModel })
  .run("Draft a professional reply to: " + email);
```
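As routing grows beyond two cases, it can be tidier to centralize the task-to-model mapping in one helper. The sketch below is illustrative only - the task names are assumptions, and the model ids simply reuse the ones from the example above:

```typescript
// Hypothetical router: map a task type to a model id.
// Task names and model ids are illustrative, not a Vibes API.
type Task = "classify" | "generate";

function modelIdFor(task: Task): string {
  // Cheap, fast model for classification; richer model for generation.
  return task === "classify"
    ? "llama-3.3-70b-versatile"
    : "claude-sonnet-4-6";
}
```

You would then pass the chosen id through the matching provider factory (e.g. `groq(modelIdFor("classify"))`) inside `agent.override({ model })`, keeping the routing policy in one place.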