
Audit log

The audit log records every tool call that hits your project. It’s the same surface across draft and production, across MCP hosts and Assistant SDK clients — one stream, one schema, one place to look when something is off.

What gets logged

For every tool call, the platform records a ToolExecution with the following fields:
  • timestamp: when the call started.
  • durationMs: how long the call took end-to-end.
  • toolName / toolType: which Type was called and whether it’s an Interactive or Data Tool.
  • status: success or failure.
  • errorType: one of configuration, runtime, or tool (when status: failure).
  • errorCode: the specific error code, drawn from a fixed taxonomy.
  • inputSha256 / outputSha256: content-addressed hashes of the input and output payloads.
  • host: where the call came from (Claude Desktop, ChatGPT, Assistant SDK, custom client).
  • caller: the user or organization on the calling side, if known.
  • package_version: which published version of the package served the call.
  • handler_logs: console.log / console.warn / console.error output from the handler (Data Tools).
The full input and output payloads are stored alongside the execution record for review. For Data Tool failures, the log also captures the stack trace where available. For Interactive Tools, the log captures whether the BindJS spec compiled and rendered successfully.
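As a sketch, the execution record maps onto a TypeScript shape like the one below. The field names follow the table above; the interface name, the literal values, and which fields are optional are assumptions rather than a published type:
interface ToolExecution {
  timestamp: string;             // ISO 8601; when the call started
  durationMs: number;            // end-to-end duration
  toolName: string;
  toolType: "interactive" | "data";
  status: "success" | "failure";
  errorType?: "configuration" | "runtime" | "tool";   // present when status is "failure"
  errorCode?: string;            // from the fixed error taxonomy
  inputSha256: string;           // content-addressed hash of the input payload
  outputSha256: string;          // content-addressed hash of the output payload
  host: string;                  // e.g. Claude Desktop, ChatGPT, Assistant SDK
  caller?: string;               // calling user or organization, if known
  package_version: string;       // published package version that served the call
  handler_logs?: string[];       // console output (Data Tools)
}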
Tool-call analytics surfaces in MCP App Studio are in active development. Data Tool execution records are captured today; Interactive Tool parity, the project-wide audit surface, and per-tool aggregates (call count, success/failure counts, p95 latency) ship by the time the project is publicly available.

Where to view it

In MCP App Studio:
  1. Open the project.
  2. Click Audit Log in the project sidebar.
  3. The log shows the most recent calls; filter by tool, host, time range, or status.
Screenshot needed: Audit log view in MCP App Studio with several rows visible — including a successful call, a schema-rejected call, and a handler error. Place at /images/operations/audit-log-overview.png.

Drilling into a single call

Click a row to see:
  • The full input the AI provided.
  • The full output the tool returned (or the error).
  • All console.log output from the handler, in order.
  • The package version that served the call.
  • The conversation context, if the host provided it (e.g., the conversation ID).
This is the primary debugging surface. When a tool misbehaves, this view tells you exactly what happened.
Screenshot needed: Audit log row expanded with full input, output, and handler log captured. Place at /images/operations/audit-log-detail.png.

Filtering

The log filters by:
  • Time range. Last hour, last day, last 30 days, custom.
  • Tool name. Show only calls to product_search.
  • Status. Successful, schema rejection, handler error, allowlist rejection, timeout.
  • Host. Calls from Claude Desktop, calls from a specific Assistant SDK deployment.
  • Free text. Search across input and output for a specific string.
For programmatic access, the audit log has a REST API (see REST API: audit log), useful for piping records into your own observability stack.
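The exact routes live in the REST API reference; as a sketch, pulling recent failures for one tool might look like this (the base URL, path, and query parameter names are assumptions):
// Hypothetical URL and query parameters; see REST API: audit log for the real ones.
const url = new URL("https://api.metabind.ai/v1/projects/PROJECT_ID/audit-log");
url.searchParams.set("status", "failure");
url.searchParams.set("toolName", "product_search");
url.searchParams.set("range", "1d");

const res = await fetch(url, {
  headers: { Authorization: `Bearer ${process.env.METABIND_API_KEY}` }
});
const { executions } = await res.json();
console.log("Failed calls:", executions.length);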

Console output capture

Inside a Data Tool handler, console.log, console.warn, and console.error are captured per call:
handler: async (props, env) => {
  console.log("Searching for:", props.searchTerm);

  // Encode the search term so special characters can't break the query string.
  const q = encodeURIComponent(props.searchTerm);
  const res = await fetch(`https://api.example.com/products?q=${q}`, {
    headers: { Authorization: `Bearer ${env.secrets.API_KEY}` }
  });

  if (!res.ok) {
    console.error("Upstream error:", res.status);
    throw new Error(`API returned ${res.status}`);
  }

  const data = await res.json();
  console.log("Found products:", data.products.length);
  return data;
}
These lines show up in the audit log row for that call. Don’t log secrets: captured output is stored with the execution record and is visible to anyone who can view the log, just as in any other observability system.
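A safer pattern is to log shape and status rather than raw values. A sketch, reusing the handler above:
// Log presence and size, never the secret or the full payload.
console.log("API key configured:", Boolean(env.secrets.API_KEY));
console.log("response products:", data.products.length);
// Avoid: console.log("key:", env.secrets.API_KEY);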

Retention

Audit log retention by plan:
  • Free: 7 days
  • Pro: 30 days
  • Enterprise: 90 days, configurable
For longer retention, export periodically via the REST API and archive in your own storage.
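One way to do that is a scheduled job that pages through new records and writes them to your archive. A sketch, assuming the same hypothetical endpoint as above plus cursor-based pagination; lastExportedAt and writeToArchive stand in for your own bookmarking and storage:
// Hypothetical endpoint and pagination scheme; confirm against the REST API reference.
let cursor;
do {
  const url = new URL("https://api.metabind.ai/v1/projects/PROJECT_ID/audit-log");
  url.searchParams.set("since", lastExportedAt);     // bookmark from the previous run
  if (cursor) url.searchParams.set("cursor", cursor);

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.METABIND_API_KEY}` }
  });
  const page = await res.json();

  await writeToArchive(page.executions);             // your own storage: S3, a database, ...
  cursor = page.nextCursor;
} while (cursor);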

Schema rejections

When a schema validation fails, the log captures:
  • The input that was rejected.
  • The specific validation error path.
  • The expected schema for context.
Patterns in schema rejections are a signal for tuning your schemas. If 5% of calls to a tool fail with the same validation error, the schema is probably too strict for what the AI naturally produces.
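For concreteness, a schema-rejected call might be recorded like this. The record is made up, and the field names and error code are assumptions:
{
  "toolName": "product_search",
  "status": "failure",
  "errorType": "tool",
  "errorCode": "SCHEMA_REJECTED",
  "validationError": {
    "path": "input.maxResults",
    "message": "expected integer <= 20, received 50"
  },
  "input": { "searchTerm": "running shoes", "maxResults": 50 }
}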

Allowlist rejections

When a component reference fails the allowlist check, the log captures:
  • The disallowed component name.
  • The slot it was attempted in.
  • The current allowlist for that slot.
This usually means the AI tried to compose a layout the project doesn’t permit. Either expand the allowlist (if the use case is valid) or sharpen the slot’s description so the AI doesn’t try the disallowed component.
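Again purely illustrative (the component, slot, and field names are made up), an allowlist rejection might be recorded as:
{
  "toolName": "render_dashboard",
  "status": "failure",
  "errorCode": "ALLOWLIST_REJECTED",
  "component": "DataGrid",
  "slot": "sidebar",
  "allowedComponents": ["Chart", "MetricCard", "List"]
}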

Handler errors

Handler failures log:
  • The error message.
  • The stack trace.
  • All console.log output up to the error.
  • The handler’s runtime stats (execution time before failure, memory used).
If a Data Tool handler is failing intermittently, the audit log usually reveals whether it’s an upstream API issue (specific status codes), a timeout (close to 60s execution), or a logic bug (consistent stack trace).
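If you export failure records via the REST API, a quick triage pass can bucket them along those lines. A sketch; failedExecutions and the error field names are assumptions:
// Rough triage of exported failure records into the three causes above.
function classifyFailure(exec) {
  if (exec.durationMs >= 59000) return "timeout";                 // ran up against the 60s limit
  if (/API returned \d+/.test(exec.error?.message ?? "")) return "upstream";
  return "logic";                                                 // look for a consistent stack trace
}

const counts = {};
for (const exec of failedExecutions) {
  const bucket = classifyFailure(exec);
  counts[bucket] = (counts[bucket] ?? 0) + 1;
}
console.log(counts);   // e.g. { upstream: 12, timeout: 3, logic: 1 }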

Tool-call reporting

A “tool-call report” rolls per-execution records up into aggregates over a window:
  • callCount per tool
  • successCount and failureCount per tool
  • avgDurationMs and p95DurationMs per tool
  • Top error codes per tool (broken down by configuration, runtime, tool)
Programmatic access is via GET .../usage/tools on the REST API; MCP App Studio surfaces the same data on a project-level Tool executions view with per-tool drill-down and CSV export.
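As a sketch of the programmatic side (only the .../usage/tools suffix is documented above; the full URL and the response shape here are assumptions):
// Hypothetical full path; the doc only pins down the ".../usage/tools" suffix.
const res = await fetch(
  "https://api.metabind.ai/v1/projects/PROJECT_ID/usage/tools?window=7d",
  { headers: { Authorization: `Bearer ${process.env.METABIND_API_KEY}` } }
);
const report = await res.json();
// Plausible shape, mirroring the aggregate list above:
// [{ toolName: "product_search",
//    callCount: 1240, successCount: 1197, failureCount: 43,
//    avgDurationMs: 310, p95DurationMs: 905,
//    topErrorCodes: { configuration: [], runtime: ["UPSTREAM_5XX"], tool: [] } }]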
Screenshot needed: Project-level Tool executions view in MCP App Studio with per-tool aggregates over a 7-day window and CSV export visible. Place at /images/operations/reports.png.

Compliance and export

For projects in regulated industries:
  • Audit log export. Pull the full log via REST API and store in your own SOC 2 / HIPAA / etc. compliant archive.
  • Retention extensions. Enterprise plans support longer retention windows.
  • Field redaction. Sensitive fields can be redacted before logging — configure per Type if needed; a sketch follows below.
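The redaction configuration surface isn’t specified here; purely as an illustrative sketch (defineType and the audit key are hypothetical):
// Hypothetical per-Type redaction configuration.
export default defineType({
  name: "patient_lookup",
  audit: {
    redactFields: ["input.ssn", "output.records[].dob"]   // replaced before the record is written
  }
  // ... schema, handler, and the rest of the Type definition
});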

Related pages

  • Schema validation: validation errors that show up in the audit log.
  • Sandboxed execution: what the handler logs capture.
  • REST API: programmatic access to the audit log.
  • Team management: who can view audit logs.