

Schema validation is the first server-side gate on every tool call. Before a Data Tool’s handler runs or an Interactive Tool’s BindJS layout compiles, the platform checks that the AI’s input matches the Type’s declared schema. Bad input never reaches your code.

Where validation runs

A tool call passes through schema validation, then either continues to the allowlist check or is rejected with a validation error returned to the AI.
Every Type’s input schema is generated from its bound component’s properties block. The platform validates incoming MCP tool calls against this schema using a JSON Schema validator. The result is binary — pass or reject.
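Conceptually, the check is ordinary JSON Schema validation. The sketch below is illustrative only (not the platform's validator) and covers just a handful of keywords:

```javascript
// Minimal sketch of the pass/reject check. Illustrative only:
// a real deployment uses a full JSON Schema validator.
function validate(schema, input, path = "") {
  // Required fields must be present.
  for (const field of schema.required ?? []) {
    if (!(field in input)) {
      return { ok: false, path: `${path}/${field}`,
               message: `Property '${field}' is required` };
    }
  }
  // Present fields must match their declared type and enum.
  for (const [key, rules] of Object.entries(schema.properties ?? {})) {
    if (!(key in input)) continue;
    const value = input[key];
    const actual = Array.isArray(value) ? "array" : typeof value;
    if (rules.type && actual !== rules.type) {
      return { ok: false, path: `${path}/${key}`,
               message: `Property '${key}' must be a ${rules.type}, got ${actual}` };
    }
    if (rules.enum && !rules.enum.includes(value)) {
      return { ok: false, path: `${path}/${key}`,
               message: `Property '${key}' must be one of ${rules.enum.join(", ")}` };
    }
  }
  return { ok: true };
}
```

The point is the binary outcome: the input either passes whole, or is rejected with a path and message.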

What gets validated

| Property feature | Validated |
| --- | --- |
| Required fields present | yes |
| Field types match | yes (string, number, boolean, array, etc.) |
| enum values | yes |
| min/max for numbers | yes |
| minLength/maxLength for strings | yes |
| minItems/maxItems for arrays | yes |
| pattern for strings (regex) | yes |
| Nested group properties | yes (recursive) |
| Array items match valueType | yes |
| Component allowlist (next gate) | checked separately, after schema |

What rejection looks like to the AI

When validation fails, the platform returns a structured error to the AI:
{
  "error": {
    "code": "schema_validation_failed",
    "message": "Property 'price' must be a string, got number",
    "path": "/price"
  }
}
Most AIs read the error and retry with corrected input. The retry loop is automatic — the AI doesn’t ask the user, it just adjusts. This means a tool with strict schemas is robust to AI hiccups: bad inputs get caught and corrected without the user seeing anything.
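The loop can be pictured as follows. This is a sketch of the behavior, not platform code; `callTool` and `repairInput` are hypothetical stand-ins for the host's tool call and the AI's correction step:

```javascript
// Illustrative retry loop. callTool, repairInput, and the retry
// budget are hypothetical stand-ins, not platform API.
async function callWithRetries(callTool, repairInput, input, maxRetries = 3) {
  let result = await callTool(input);
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    if (result.error?.code !== "schema_validation_failed") {
      return result; // success, or a non-schema error
    }
    // The AI reads error.message / error.path and adjusts the
    // offending field before trying again.
    input = repairInput(input, result.error);
    result = await callTool(input);
  }
  // Retries exhausted: the host surfaces an error to the user instead.
  return result;
}
```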

What rejection looks like to the user

If the AI exhausts its retries and still can’t produce valid input, the user sees an error in the host’s conversation:
The assistant tried to use product_search but the input
didn't match the expected format.
This usually indicates a deeper problem — a schema that’s too strict or unclear, or a tool description that doesn’t tell the AI what to provide. See Tools and Types for tuning tool descriptions.

Designing schemas the AI can satisfy

A few patterns:
  • Use description on every property. The AI reads descriptions to decide what to provide. searchTerm: "Term to search for" is much more discoverable than searchTerm: "".
  • Use enum when there’s a fixed set of valid values. status: "open" | "closed" | "pending" constrains the AI cleanly.
  • Default optional values. Mark fields as optional with sensible defaults rather than requiring the AI to always supply them.
  • Avoid overly tight pattern constraints. A regex that excludes valid AI outputs causes silent retries; relax the pattern unless the constraint is genuinely required.
  • Use minLength/minItems to prevent empty input. A tool that takes an empty array often does the wrong thing — fail at the schema instead.
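Put together, a properties block following these patterns might look like this (property names, descriptions, and limits are illustrative):

```javascript
properties: {
  searchTerm: {
    type: "string",
    description: "Term to search for",      // the AI reads this
    minLength: 1                            // reject empty input at the schema
  },
  status: {
    type: "string",
    enum: ["open", "closed", "pending"],    // fixed set of valid values
    description: "Filter by ticket status"
  },
  limit: {
    type: "number",
    description: "Max results to return",
    default: 20,                            // optional, with a sensible default
    minimum: 1,
    maximum: 50
  }
}
```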

Schema vs. handler validation

A common mistake: validating the same thing in both the schema and the handler. The schema runs first; if it passes, the handler can trust the input shape. So:
// In the schema:
properties: {
  email: {
    type: "string",
    pattern: "^[^@]+@[^@]+$",
    description: "User's email address"
  }
}

// In the handler:
handler: async (props, env) => {
  // No need to re-check email format — schema already did
  return await fetchUser(props.email);
}
Reserve handler-side validation for business rules (the email belongs to a real user, the date is in the future, the inventory exists) — things the schema can’t express.

Output schemas (Data Tools)

Data Tools also validate their output. The output schema runs after the handler returns:
output: {
  products: PropertyArray({ valueType: PropertyGroup({ ... }) }),
  total: PropertyNumber({})
}
If the handler returns something that doesn’t match — wrong field types, missing fields, extra fields — the platform rejects the response and returns an error to the AI. The AI sees a structured error and retries the tool call (or asks for help if retries are exhausted). This protects callers from a Data Tool that accidentally returns malformed data due to an upstream API change.
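A simplified picture of that gate, assuming a flat output shape and treating the schema as a plain map of field names to expected types (an assumption for illustration, not the platform's implementation):

```javascript
// Illustrative output gate: reject wrong types, missing fields,
// and extra fields before the response reaches the AI.
function checkOutput(outputSchema, result) {
  const expected = Object.keys(outputSchema);
  for (const key of expected) {
    if (!(key in result)) {
      return { ok: false, message: `Output missing field '${key}'` };
    }
    const actual = Array.isArray(result[key]) ? "array" : typeof result[key];
    if (actual !== outputSchema[key]) {
      return { ok: false,
               message: `Output field '${key}' must be ${outputSchema[key]}, got ${actual}` };
    }
  }
  for (const key of Object.keys(result)) {
    if (!expected.includes(key)) {
      return { ok: false, message: `Unexpected output field '${key}'` };
    }
  }
  return { ok: true };
}
```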

Component validation (Interactive Tools)

Interactive Tools validate two things:
  1. The Type’s input schema (same as Data Tools).
  2. That the component referenced in the input is on the project palette and on the slot’s allowlist.
Both gates run before the renderer is invoked. See Component allowlists.
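Sketched together, the two gates look roughly like this (function and parameter names, and the `component` field, are assumptions, not platform API):

```javascript
// Illustrative two-gate check for an Interactive Tool call.
// validateSchema is any JSON Schema check; palette and allowlist
// are sets of component names.
function gateInteractiveCall(validateSchema, palette, allowlist, input) {
  const schemaResult = validateSchema(input);   // gate 1: input schema
  if (!schemaResult.ok) return schemaResult;
  const component = input.component;            // gate 2: allowlist
  if (!palette.has(component) || !allowlist.has(component)) {
    return { ok: false, message: `Component '${component}' is not allowed here` };
  }
  return { ok: true };                          // renderer may be invoked
}
```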

Versioning and schema evolution

Published packages are immutable. When you add a property, rename one, or change a type, you produce a new package version. Connected hosts pick up the new schema on their next list-tools call — so a schema change in production is visible to AIs within seconds of publishing. For breaking changes (renamed property, dropped enum value), bump the package’s MAJOR version. See Package versioning.

Custom validation rules

If you need validation the JSON Schema vocabulary can’t express (cross-field constraints, lookups against external state), do it in your handler and throw a structured error:
handler: async (props, env) => {
  if (props.startDate > props.endDate) {
    throw new ToolError("invalid_input", "startDate must be before endDate");
  }
  // ...
}
Throwing ToolError returns a clean error to the AI, similar to schema rejection.

Observability

Every schema rejection appears in the project’s audit log with:
  • Timestamp
  • Tool name
  • Input that was rejected
  • Specific validation error
Use the audit log to find patterns — if 5% of product_search calls fail with “limit must be ≤ 50,” your schema might be too strict for what the AI naturally produces.
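If you export audit log entries, a small script can surface those patterns. The entry shape here (`{ tool, rejected }`) is an assumption for illustration:

```javascript
// Illustrative: per-tool schema-rejection rates from exported
// audit log entries. Entry shape is an assumption.
function rejectionRates(entries) {
  const stats = {};
  for (const { tool, rejected } of entries) {
    if (!stats[tool]) stats[tool] = { calls: 0, rejections: 0 };
    stats[tool].calls++;
    if (rejected) stats[tool].rejections++;
  }
  return Object.fromEntries(
    Object.entries(stats).map(([tool, s]) => [tool, s.rejections / s.calls])
  );
}
```

A tool hovering at a few percent rejections is worth a schema review.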

Component allowlists

The other server-side gate that runs after schema validation.

Audit logs

Per-call observability including validation rejections.

Tools and Types

How Type input schemas are derived from properties.

BindJS Reference

The full properties syntax for declaring schemas.