
Introduction

Evals (evaluations) let you measure how well your agent or task function performs across a set of test cases. Rather than checking a single output manually, you define a Dataset of inputs (with optional expected outputs), run your function over all of them, and score each result with Evaluators. This module is a one-to-one TypeScript port of Pydantic AI’s evals module. Key benefits:
  • Systematic testing - catch regressions across many input scenarios
  • Quantitative scoring - go beyond pass/fail with numeric scores and summary statistics
  • LLM-as-judge - use a language model to evaluate subjective qualities like helpfulness or accuracy
  • Concurrency - run cases in parallel with configurable limits
  • Immutable data - all dataset transformations return new objects

Quick start

import {
  Dataset,
  equalsExpected,
  formatReport,
} from "@vibesjs/sdk";

// 1. Define your dataset
const dataset = Dataset.fromArray([
  { name: "uppercase hello", inputs: "hello", expectedOutput: "HELLO" },
  { name: "uppercase world", inputs: "world", expectedOutput: "WORLD" },
], {
  evaluators: [equalsExpected()],
});

// 2. Define your task
async function uppercaseTask(input: string): Promise<string> {
  return input.toUpperCase();
}

// 3. Run the experiment
const result = await dataset.evaluate(uppercaseTask);

// 4. Print a report
console.log(formatReport(result));
Output:
============================================================
Eval Report
============================================================
Timestamp:      2025-01-01T12:00:00.000Z
Total Duration: 3ms
Cases:          2

------------------------------------------------------------
Summary
------------------------------------------------------------
Evaluator        Mean      Min      Max    PassRate
------------------------------------------------------
equalsExpected  1.000    1.000    1.000      100.0%

------------------------------------------------------------
Cases
------------------------------------------------------------
[OK] uppercase hello (1ms)
  equalsExpected: pass - output matches expectedOutput
[OK] uppercase world (0ms)
  equalsExpected: pass - output matches expectedOutput
============================================================

Dataset and case

A Dataset is an immutable collection of Cases. Each Case has:
Field           Type                      Required  Description
inputs          TInput                    Yes       The input passed to your task function
name            string                    No        Human-readable label for this case
expectedOutput  TExpected                 No        Reference output for evaluators
metadata        Record<string, unknown>   No        Arbitrary metadata attached to this case
evaluators      Evaluator[]               No        Per-case evaluators (added to the dataset's evaluators)
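
For illustration, a fully-specified case might look like this. The inline evaluator object is a sketch of the Evaluator shape; in practice you would usually reach for the built-in factories covered later:

```typescript
// A fully-specified Case: only `inputs` is required.
// The inline evaluator is illustrative; its context type is loosened here.
const summarizeCase = {
  name: "summarize short article",
  inputs: "TypeScript is a typed superset of JavaScript...",
  expectedOutput: "TypeScript adds static types to JavaScript.",
  metadata: { env: "production", difficulty: "easy" },
  evaluators: [
    {
      name: "non-empty",
      evaluate: (ctx: { output?: string }) => ({
        score: (ctx.output ?? "").length > 0,
        reason: "output must not be empty",
      }),
    },
  ],
};
```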

Creating datasets

// From an array
const ds = Dataset.fromArray(cases, { name: "my-dataset", evaluators: [...] });

// From a JSON string or object
const ds = Dataset.fromJSON<string, string>(jsonString);
const ds = Dataset.fromJSON<string, string>({ cases: [...] });

// From a file
const ds = await Dataset.fromFile<string, string>("./cases.json");

// From raw text
const ds = Dataset.fromText<string, string>(text, "json");

Immutable transformations

// Filter cases (returns a new Dataset)
const subset = ds.filter((c) => c.metadata?.env === "production");

// Map cases to a new type (returns a new Dataset)
const transformed = ds.map((c) => ({
  ...c,
  inputs: c.inputs.trim(),
}));

// Iterate
for (const c of ds) {
  console.log(c.name, c.inputs);
}

Serialization

// To JSON object (evaluators are excluded - they are functions)
const json = ds.toJSON();

// Write to file
await ds.toFile("./cases.json");

Built-in evaluators

All built-in evaluators implement the Evaluator interface and return an EvalScore.

Case-level evaluators

Evaluator              Description                                         Score type
equalsExpected()       Strict equality between output and expectedOutput   boolean
equals(value)          Strict equality between output and a fixed value    boolean
contains(str)          String output contains the given substring          boolean
isInstance(type)       typeof output === typeName                          boolean
maxDuration(sec)       Task completed within the time limit                boolean
hasMatchingSpan(pred)  Span tree contains a node matching the predicate    boolean
isValidSchema(schema)  Output validates against a Zod schema               boolean
custom(name, fn)       User-defined evaluator function                     any

import {
  contains,
  equals,
  equalsExpected,
  isInstance,
  isValidSchema,
  maxDuration,
  custom,
} from "@vibesjs/sdk";
import { z } from "zod";

const evaluators = [
  equalsExpected(),
  equals("expected value"),
  contains("keyword", { caseSensitive: false }),
  isInstance("string"),
  maxDuration(5), // 5 seconds
  isValidSchema(z.object({ name: z.string() })),
  custom("my-check", (ctx) => ({
    score: ctx.output !== null,
    reason: "output must not be null",
  })),
];

EvalScore

Every evaluator returns an EvalScore:
interface EvalScore {
  score: number | boolean | string;
  label?: string;
  reason?: string;
}
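
For illustration, all three score types are valid; the variable names here are arbitrary:

```typescript
// Boolean pass/fail score
const passFail = { score: true, reason: "exact match" };

// Numeric score (e.g. a 0-1 similarity)
const numeric = { score: 0.85, label: "similarity" };

// String score (e.g. a letter grade)
const graded = { score: "B", label: "letter-grade", reason: "mostly correct" };
```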

LLM-as-judge

For subjective qualities (helpfulness, accuracy, tone), use an LLM as the evaluator.
import { llmJudge, setDefaultJudgeModel } from "@vibesjs/sdk";
import { openai } from "@ai-sdk/openai";

// Optional: set a default model once
setDefaultJudgeModel(openai("gpt-4o-mini"));

const ev = llmJudge({
  rubric: "Is the response helpful, accurate, and concise?",
  // model: openai("gpt-4o"), // override per-evaluator
  includeInput: true,         // include task input in judge context
  includeExpectedOutput: true, // include expected output in judge context
  score: false,                // false = boolean, true = numeric 0-1
});

Helper functions

For one-off judge calls (outside of a dataset):
import {
  judgeOutput,
  judgeInputOutput,
  judgeOutputExpected,
  judgeInputOutputExpected,
} from "@vibesjs/sdk";

const helpful = await judgeOutput(output, "Is it helpful?", model);
const answers = await judgeInputOutput(input, output, "Does it answer the question?", model);
const matches = await judgeOutputExpected(output, expected, "Does it match?", model);
const correct = await judgeInputOutputExpected(input, output, expected, "Correct?", model);

Custom evaluators

Implement the Evaluator interface for full control:
import type { Evaluator, EvaluatorContext, EvalScore } from "@vibesjs/sdk";

const sentimentEvaluator: Evaluator<string, undefined> = {
  name: "positive-sentiment",
  evaluate(ctx: EvaluatorContext<unknown, undefined, string>): EvalScore {
    const output = ctx.output ?? "";
    const positive = output.includes("good") || output.includes("great");
    return {
      score: positive,
      reason: positive ? "output has positive sentiment" : "output lacks positive words",
    };
  },
};
Or use the custom() factory:
import { custom } from "@vibesjs/sdk";

const ev = custom("word-count", (ctx) => ({
  score: typeof ctx.output === "string" && ctx.output.split(" ").length >= 10,
  reason: "response should be at least 10 words",
}));

Accessing context

The EvaluatorContext gives evaluators access to:
ctx.inputs          // task input
ctx.output          // task output (undefined if task threw)
ctx.expectedOutput  // expected output (from Case)
ctx.metadata        // case metadata
ctx.spanTree        // OTel span tree (if captured)
ctx.usage           // token usage (if captured)
ctx.durationMs      // task wall-clock duration in ms

// Accumulate data from within an evaluator
ctx.setEvalAttribute("raw-score", 0.85);
ctx.incrementEvalMetric("tokens-checked", 42);
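
As a sketch, a single evaluator can combine several of these fields; the context type is loosened here for illustration, and the thresholds are arbitrary:

```typescript
// Illustrative evaluator using both output and durationMs from the context.
function latencyAndLength(ctx: { output?: unknown; durationMs: number }) {
  const fastEnough = ctx.durationMs < 2000; // arbitrary 2s budget
  const longEnough =
    typeof ctx.output === "string" && ctx.output.length >= 20;
  return {
    score: fastEnough && longEnough,
    reason: `durationMs=${ctx.durationMs}, longEnough=${longEnough}`,
  };
}
```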

Report-level evaluators

Report evaluators run once after all cases complete and receive the full CaseResult[] array. Use them for aggregate metrics.
import {
  confusionMatrix,
  precisionRecall,
  rocAuc,
  kolmogorovSmirnov,
} from "@vibesjs/sdk";

const ds = Dataset.fromArray(cases, {
  reportEvaluators: [
    confusionMatrix({
      getLabel: (r) => r.output as string,
      getExpected: (r) => r.case.expectedOutput as string,
    }),
    precisionRecall({
      getPositive: (r) => r.output as boolean,
      getExpected: (r) => r.case.expectedOutput as boolean,
    }),
    rocAuc({
      getScore: (r) => r.output as number,
      getLabel: (r) => r.case.expectedOutput as boolean,
    }),
    kolmogorovSmirnov({
      getScoreA: (r, i) => scoresA[i],
      getScoreB: (r, i) => scoresB[i],
    }),
  ],
});

Experiment runner

Use Dataset.evaluate() as your primary API. The runExperiment() function is a thin wrapper for cases where you want to merge extra evaluators at call time:
import { runExperiment, equalsExpected } from "@vibesjs/sdk";

const result = await runExperiment({
  dataset: myDataset,
  task: async (input) => agent.run(input).then(r => r.output),
  evaluators: [equalsExpected()], // merged with dataset.evaluators
  maxConcurrency: 5,
  maxRetries: 2,
  onCaseComplete: (r) => console.log(`Done: ${r.case.name}`),
});

Evaluate options

Option          Type      Default  Description
maxConcurrency  number    5        Maximum concurrent case evaluations
maxRetries      number    1        Maximum task retry attempts per case
onCaseComplete  function  -        Callback invoked after each case completes

Span-based evaluation

When your task function captures OTel spans, you can evaluate them with hasMatchingSpan:
import { SpanTree, hasMatchingSpan } from "@vibesjs/sdk";

// Build a SpanTree from captured span data
const tree = SpanTree.fromSpanData(capturedSpans);

// Use in an evaluator
const ev = hasMatchingSpan(
  (node) => node.name === "llm-call" && node.status === "ok",
  "llm-call-succeeded",
);

// Or traverse programmatically
const llmSpans = tree.find((n) => n.name.startsWith("llm"));
const anyErrors = tree.any((n) => n.status === "error");
const callCount = tree.count((n) => n.name === "tool-call");

Dataset generation

Use an LLM to generate test cases automatically:
import { generateDataset } from "@vibesjs/sdk";
import { z } from "zod";
import { openai } from "@ai-sdk/openai";

const ds = await generateDataset({
  model: openai("gpt-4o"),
  nExamples: 10,
  inputSchema: z.object({
    question: z.string().describe("A geography trivia question"),
  }),
  expectedOutputSchema: z.string().describe("The correct answer"),
  extraInstructions: "Focus on capital cities of European countries.",
});

Eval pipeline

Concurrency flow
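
The concurrency flow can be pictured as a fixed number of worker lanes pulling case indices from a shared cursor until the dataset is exhausted. The sketch below is an illustrative model, not the SDK's actual scheduler, and runWithLimit is a hypothetical name:

```typescript
// Run `worker` over `items` with at most `maxConcurrency` in flight.
// Results are stored by index, so output order matches input order.
async function runWithLimit<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  maxConcurrency: number,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each lane repeatedly claims the next unclaimed index and processes it.
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  const lanes = Array.from(
    { length: Math.min(maxConcurrency, items.length) },
    () => lane(),
  );
  await Promise.all(lanes);
  return results;
}
```

Because JavaScript is single-threaded, the `next++` claim is race-free: a lane claims its index synchronously before its first await.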