Documentation Index

Fetch the complete documentation index at: https://docs.metabind.ai/llms.txt

Use this file to discover all available pages before exploring further.

Data Tool handlers run inside V8 sandboxes that the platform provisions per execution. This guide covers what the sandbox provides, what it forbids, and how to design handlers that operate cleanly within the constraints.

What runs where

The MCP server receives a tool call, validates the input against the tool's schema, runs the handler in a V8 sandbox with env.secrets injected, and reaches external APIs only through the allowed-domains list.
The sandbox is a fresh V8 isolate per call. There is no state shared across invocations and no shared filesystem. Each handler call is independent.

What the sandbox provides

| Capability | Description |
| --- | --- |
| `fetch()` | Outbound HTTP, restricted to the Data Tool's allowed domains |
| `console.log` / `console.warn` / `console.error` | Logged to the platform observability layer |
| Standard JavaScript runtime (V8) | ES2022 syntax, async/await, Promises, JSON, etc. |
| `env.secrets` | A map of the secrets configured on the Data Tool |
| `env.organizationId`, `env.projectId` | Tenant identifiers for the request |
| `env.apiBaseURL` | The Metabind API base URL for callbacks |
| `env.locale` | The caller's locale, when known |
The handler signature is always:
handler: async (props, env) => {
  // props is the validated input
  // env is the runtime environment
  return /* output matching the data component's `output` schema */;
}
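As an illustration of this shape, here is a minimal handler. The property names, greeting logic, and output fields are invented for the example; only the (props, env) signature comes from the documentation above.

```javascript
// Hypothetical handler: props.name and the output fields are invented
// for illustration; only the (props, env) => output shape is given.
const handler = async (props, env) => {
  if (!props.name) {
    throw new Error("name is required"); // fail fast on missing input
  }
  return {
    greeting: `Hello, ${props.name}!`,
    locale: env.locale ?? "en-US", // env.locale may be absent
  };
};
```

The returned object must match the data component's `output` schema, so keep the shape fixed even when fields are optional.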

What the sandbox forbids

| Forbidden | Why |
| --- | --- |
| Filesystem access | No isolation between tools otherwise |
| Process environment variables | Use `env.secrets` instead |
| Network requests to non-allowed domains | Closes the credential exfiltration path |
| Cross-tenant data | Each invocation runs as the caller's project only |
| Long-running execution (> 60 s default) | Prevents runaway handlers from blocking the platform |
| Memory beyond the limit (128 MB default) | Prevents memory exhaustion |
| Native modules / `require` | Pure JavaScript only |
Most of these restrictions follow directly from security requirements. The execution-time and memory limits exist for platform stability: long-running tools should use the task pattern (see Task support below) rather than blocking the sandbox.

Secrets

Secrets are scoped to the Data Tool, not the project. Different tools can hold different keys — a Stripe key on one tool, a SendGrid key on another. Secrets are encrypted at rest via AWS KMS, decrypted at sandbox start, and accessible inside the handler as env.secrets.<NAME>.
handler: async (props, env) => {
  const apiKey = env.secrets.STRIPE_API_KEY;
  if (!apiKey) {
    throw new Error("STRIPE_API_KEY secret not configured");
  }

  const res = await fetch("https://api.stripe.com/v1/charges", {
    headers: { Authorization: `Bearer ${apiKey}` }
  });

  if (!res.ok) {
    throw new Error(`Stripe request failed with status ${res.status}`);
  }

  return res.json();
}
MCP App Studio never returns secret values in API responses. After a secret is set, only the secret’s name is visible in the UI; the value can only be replaced, not read.

Allowed domains

The sandbox restricts outbound HTTP to the domains listed on the Data Tool. A handler that tries to fetch from a domain not on the list fails before the request leaves the sandbox. Configure allowed domains on the Data Tool:
api.example.com
api-secondary.example.com
Wildcards are not currently supported, so list each domain explicitly; matching is by exact host string, which means every subdomain you call must appear on the list. This rule closes a class of credential-leak vectors: even if a handler's logic is buggy and tries to send a token to an attacker-controlled URL, the request fails because the URL isn't on the allowlist.
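To illustrate the matching rule, here is a sketch of exact host-string matching. The `isAllowedHost` helper is not a platform API, just a model of the behavior described above.

```javascript
// Model of allowlist matching: exact host comparison, no wildcards.
// isAllowedHost is an illustrative helper, not a platform API.
function isAllowedHost(url, allowedDomains) {
  const host = new URL(url).hostname;
  return allowedDomains.includes(host);
}
```

With the list above, a request to api.example.com passes, while evil.example.com and the bare example.com both fail, because only exact host strings on the list are allowed.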

Execution limits

Default limits per invocation:
| Limit | Default | Notes |
| --- | --- | --- |
| Execution time | 60 seconds | Hard ceiling; handlers that exceed it are terminated. |
| Memory | 128 MB | Hard ceiling; handlers that exceed it are terminated. |
| Outbound HTTP body size | Implementation-defined | Stream large responses if you need to. |
| Console log output | Buffered per call | Use sparingly; high-volume logs are throttled. |
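For large response bodies, streaming keeps memory flat under the 128 MB ceiling. A hedged sketch: `readChunkTotal` is an invented helper, and it assumes a WHATWG ReadableStream body, as `fetch` responses provide.

```javascript
// Consume a response body chunk by chunk so only one chunk is resident
// in memory at a time, instead of buffering the whole payload.
async function readChunkTotal(body) {
  const reader = body.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.length; // process the chunk here, then drop it
  }
  return total;
}
```

In a real handler you would pass `res.body` from a `fetch` call and process each chunk (hash it, forward it, scan it) rather than just counting bytes.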

Task support (long-running operations)

If a Data Tool needs more than 60 seconds — a long-running search, an external API that paginates, an AI-generated image — declare task support. The tool returns an immediate task token; the host polls for completion. Configure on the Data Tool:
| Setting | Behavior |
| --- | --- |
| `taskSupport: forbidden` | The tool is synchronous only. Default. |
| `taskSupport: optional` | The tool returns synchronously when fast and switches to the task pattern when slow. |
| `taskSupport: required` | The tool always returns a task token; the host always polls. |
Inside a task-supporting handler, use the task helper APIs (covered in Task patterns once shipped) to push intermediate progress and the final result.
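The task helper APIs are not yet documented, but the host-side flow can be sketched conceptually. Everything below is hypothetical: `pollTask`, the status values, and the token shape are invented to show the polling pattern, not the real API.

```javascript
// Hypothetical polling loop: pollTask and the status strings are
// invented for illustration; the real helper APIs are not yet shipped.
async function waitForTask(token, pollTask, intervalMs = 1000) {
  for (;;) {
    const status = await pollTask(token);
    if (status.state === "completed") return status.result;
    if (status.state === "failed") throw new Error(status.error);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The host performs this loop for you when a tool returns a task token; the sketch only shows why a task-supporting handler can take longer than 60 seconds without blocking the sandbox.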

Designing for the sandbox

A few principles that come up in practice:
  • Treat each call as cold. No caching across invocations inside the handler. Cache externally if you need to.
  • Fail fast. Validate that secrets exist at the top of the handler. Throw with a useful message — the AI sees the error and can adjust.
  • Use small, focused handlers. A Data Tool that does one thing well is easier to reason about than one that branches across many APIs.
  • Stream when possible. If you’re proxying a large response, prefer streaming over buffering.
  • Don’t log sensitive data. console.log output is captured by platform observability; treat it like any other log surface.
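Several of these principles can be combined in one sketch. The endpoint, the `customerId` property, and the `EXAMPLE_API_KEY` secret name are all assumptions made for illustration.

```javascript
// Illustrative handler applying the principles: fail fast on a missing
// secret, do one focused thing, and keep sensitive values out of logs.
const handler = async (props, env) => {
  const apiKey = env.secrets.EXAMPLE_API_KEY; // assumed secret name
  if (!apiKey) {
    throw new Error("EXAMPLE_API_KEY secret not configured"); // fail fast
  }

  // Log the shape of the call, never the key itself.
  console.log(`customer lookup for ${props.customerId}`);

  const res = await fetch(
    `https://api.example.com/customers/${encodeURIComponent(props.customerId)}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  if (!res.ok) {
    // A specific message lets the AI adjust on retry.
    throw new Error(`customer lookup failed with status ${res.status}`);
  }
  return res.json();
};
```

Note that the error messages name the missing secret and the failing status but never echo the key, so the fail-fast path is safe to surface to the AI and to the audit log.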

Debugging

When a handler fails:
  1. The error returns to the test panel (or to the AI on retry).
  2. The error appears in the project’s tool-call audit log with the timestamp, input, and stack trace where available.
  3. console.log output is captured per call.
For deeper observability, see Audit logs and tool-call reporting.

Build a Data Tool

The end-to-end Data Tool walkthrough.

Governance

Where sandboxed execution fits in the broader governance model.

Audit logs

Per-call observability for debugging and review.

BindJS Reference

defineDataSource and the property system.