AI-generated UI is only safe in production if every render is constrained. Metabind’s governance model is the platform feature that guarantees the AI cannot inject anything you didn’t approve, on any surface, ever. Four mechanisms compose to make that true. Metabind’s MCP server implements the open Model Context Protocol per SEP-1724, and Interactive Tools follow SEP-1865 (MCP Apps: Interactive User Interfaces for MCP), so the constraints described here travel with the protocol: any compliant host enforces them the same way.
1. Schema validation on every render
Every Interactive Tool has an input schema, derived from the component’s properties declaration. When the AI calls the tool, the platform validates the call’s arguments against that schema before any rendering happens.
If the AI returns malformed data — a missing required field, a wrong type, an out-of-range number — the request is rejected and the renderer is never invoked. The error returns to the AI with specific schema-violation details, so the AI can correct its response on retry.
Validation is not opt-in. Every tool call goes through it.
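The validate-before-render flow can be sketched as follows. This is a minimal illustration, not Metabind’s implementation: the `Schema` shape, `validateToolCall`, and the field names are all hypothetical stand-ins for the schema the platform derives from the component’s properties declaration.

```typescript
// Hypothetical, simplified schema shape: required fields plus expected
// primitive types. The real schema is derived from the component's
// properties declaration.
type Schema = { required: string[]; types: Record<string, string> };

function validateToolCall(schema: Schema, args: Record<string, unknown>) {
  const errors: string[] = [];
  for (const field of schema.required) {
    if (!(field in args)) errors.push(`missing required field: ${field}`);
  }
  for (const [field, expected] of Object.entries(schema.types)) {
    if (field in args && typeof args[field] !== expected) {
      errors.push(`wrong type for ${field}: expected ${expected}`);
    }
  }
  // The renderer is only invoked when the error list is empty; otherwise
  // the specific violations are returned to the AI for its retry.
  return { ok: errors.length === 0, errors };
}
```

A call like `validateToolCall(schema, { count: "three" })` against a schema requiring a string `title` and a numeric `count` is rejected with both violations listed, so the AI’s retry can fix them in one pass.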
2. Component allowlists
A layout component declares which view components the AI is allowed to compose inside each slot. The allowlist is part of the component’s BindJS property definitions — the slot property’s allowedComponents array — and is enforced server-side. If the AI references a component name that is not on the list, that component never reaches the renderer.
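The server-side check amounts to filtering a slot’s requested children against its declared allowlist before rendering. A sketch, with hypothetical function and component names (`filterSlotChildren`, `Card`, `RawHtml` are illustrative, not platform APIs):

```typescript
// `allowedComponents` mirrors the slot property's BindJS declaration.
// Anything the AI requests that is not on the list is dropped before
// the renderer is ever invoked.
function filterSlotChildren(
  allowedComponents: string[],
  requested: string[],
): string[] {
  const allowed = new Set(allowedComponents);
  return requested.filter((name) => allowed.has(name));
}
```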
3. Sandboxed Data Tool execution
Data Tools run in V8 sandboxes — isolated JavaScript runtimes — not in the MCP server process. The sandbox enforces:
- Secrets are injected via env.secrets at runtime, never embedded in the component’s code or visible in the package bundle.
- Outbound HTTP is restricted to allowed domains declared on the Data Tool. A handler cannot reach an arbitrary URL.
- Execution time and memory are capped (60 seconds, 128 MB by default). Runaway handlers terminate cleanly.
- No filesystem access, no environment variables, no cross-tenant exposure.
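Two of the constraints above can be sketched in ordinary TypeScript: a host check against the Data Tool’s allowed-domain list, and a wall-clock cap on a handler. These are illustrative helpers under assumed names (`isDomainAllowed`, `withTimeout`); the real enforcement happens inside the V8 sandbox, not in tool code.

```typescript
// Outbound-domain allowlisting: accept the exact host or a subdomain
// of an allowed domain, reject everything else.
function isDomainAllowed(allowedDomains: string[], url: string): boolean {
  const host = new URL(url).hostname;
  return allowedDomains.some((d) => host === d || host.endsWith("." + d));
}

// Wall-clock cap: race the handler against a deadline so a runaway
// handler terminates cleanly instead of hanging the call.
async function withTimeout<T>(ms: number, work: Promise<T>): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`terminated after ${ms} ms`)), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    clearTimeout(timer!);
  }
}
```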
4. Versioning and rollback
Every published Type is pinned to a specific package version. Editing a published Type puts it in modified status — the production endpoint continues to serve the last published version while the draft endpoint shows the working copy.
When you publish, the package version increments. Reverting is a metadata flip: change the project’s published package back to a previous version and production switches instantly. No redeploy, no rebuild, no waiting.
This means a bad publish never strands your users. You see the issue in audit logs, you roll back, production is whole again.
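The “metadata flip” can be pictured as a pointer update on the project record. The `Project` shape and `rollback` function below are hypothetical illustrations of the idea, not the platform’s data model:

```typescript
// The project points at one published package version out of the
// versions that have ever been published.
type Project = { publishedVersion: number; versions: number[] };

function rollback(project: Project, target: number): Project {
  if (!project.versions.includes(target)) {
    throw new Error(`version ${target} was never published`);
  }
  // No rebuild, no redeploy: production serves `target` as soon as
  // the pointer changes.
  return { ...project, publishedVersion: target };
}
```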
Audit and observability
Every tool invocation is logged with timestamp, input, output, and caller. Failed validations and rejected renders are tracked alongside successful ones, so a governance review can see what the AI tried to do, not just what got through. Each call carries an explicit error taxonomy — configuration, runtime, or tool — so failures separate cleanly when triaging.
Analytics roll usage up per tool: call count, success and failure counts, latency averages and p95s. The audit trail is a platform feature — not an analytics product you bolt on.
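The per-tool rollup described above reduces to a small aggregation over call records. A sketch with an assumed record shape (`CallRecord` and `rollup` are illustrative names):

```typescript
// Minimal per-tool call record: outcome plus latency.
type CallRecord = { ok: boolean; latencyMs: number };

function rollup(calls: CallRecord[]) {
  const sorted = calls.map((c) => c.latencyMs).sort((a, b) => a - b);
  // Nearest-rank p95: the latency at the 95th percentile position.
  const rank = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1);
  return {
    count: calls.length,
    failures: calls.filter((c) => !c.ok).length,
    avgMs: sorted.reduce((sum, v) => sum + v, 0) / sorted.length,
    p95Ms: sorted[rank],
  };
}
```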
For depth on operational tooling — log retention, export, alerting — see Operations.
Why this matters
The MCP ecosystem at scale has not solved governance yet. Most production MCP servers fail compliance checks; large engineering teams routinely build custom MCP gateways from scratch — centralized auth, rate limiting, policy enforcement — to get what Metabind ships as platform features. For an enterprise team, the governance model is the difference between prototype and deployable. For a developer team, it’s the difference between writing infrastructure to constrain the AI and writing tools.
What governance does not cover
Governance constrains what the AI can render. It does not constrain what the AI can say in text or what the LLM provider’s safety filters do. Those are concerns for the LLM provider and the host application, not the rendering platform. Governance also does not enforce business correctness — a Data Tool that returns the wrong price returns the wrong price. Schema validation catches malformed responses, not factually wrong ones. Tool design and the AI’s own reasoning carry that responsibility.
Where governance lives in MCP App Studio
Governance settings appear in two places in the UI:
- Component editor → property → allowedComponents declares which components can be composed in each slot of a layout. Authored in BindJS as part of the property definition. See Component allowlists.
- Data Tool → Allowed Domains and Secrets scopes outbound HTTP and credentials per Data Tool. See Data Tools.
What to read next
Tools and Types
What the governance model is governing.
Components and Packages
How allowlists and packages combine for safe rendering.
Audit logs and tool-call reporting
The operational view of governance in production.
Schema validation in depth
How schema validation runs and what it catches.