---
title: "Execute Code"
description: "POST /v1/sandbox/execute. Run Python, Node.js, or Bash code in an isolated sandbox with sub-20ms overhead. Full request/response reference with examples."
url: https://baponi.ai/docs/api/execute
lastUpdated: 2026-03-16
---
# Execute Code
Baponi is a sandboxed code execution platform for AI agents. The Execute Code endpoint runs Python, Node.js, or Bash in a multi-layer isolated sandbox and returns stdout, stderr, and an exit code. Sandbox overhead is typically 12-18ms. One required parameter (`code`), one HTTP call, structured JSON response.

## Endpoint

```
POST https://api.baponi.ai/v1/sandbox/execute
```

**Authentication:** Bearer token with an API key (`sk-us-...`).

**Content-Type:** `application/json`

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer sk-us-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "print(\"Hello from Baponi!\")", "language": "python"}'
```

## Delivery modes

The execute endpoint supports three delivery modes, selected via request headers. The request body is the same for all modes; webhook mode additionally accepts an optional `webhook_url` field.

| Mode | Header | Response | Plan requirement |
|------|--------|----------|------------------|
| Inline (default) | `Accept: application/json` | Buffered JSON response after execution completes | All plans |
| Streaming | `Accept: application/x-ndjson` | Real-time NDJSON stream of output as it is produced | Pro or Enterprise |
| Webhook | `Prefer: respond-async` | 202 Accepted with `trace_id`, result POSTed to webhook URL | Pro or Enterprise |

Inline mode is the default. The connection stays open until execution finishes, then returns the full result as a single JSON object. This is the simplest integration path and works for executions that complete in seconds.

Streaming mode delivers stdout and stderr line-by-line as the code runs. Use it for long-running executions where you need real-time output, progress updates, or partial results before completion. See [Streaming execution (NDJSON)](#streaming-execution-ndjson) below.

Webhook mode returns `202 Accepted` immediately with a `trace_id`. The execution runs in the background and the result is POSTed to your webhook URL on completion. Use it for long-running executions, CI pipelines, or unreliable clients that can't hold a connection open. See [Webhook delivery](#webhook-delivery) below.

## Request body parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `code` | string | Yes | - | Code to execute. 1 byte to 1 MB. |
| `language` | string | No | `"python"` | `"python"`, `"node"`, or `"bash"` |
| `timeout` | integer | No | `60` | Max execution time in seconds. Range: 1-60 (Free), 1-3600 (Pro/Enterprise). |
| `thread_id` | string | No | `null` | Persist `/home/baponi` directory across calls. Max 128 chars, alphanumeric + hyphens/underscores. Must start with an alphanumeric character. |
| `metadata` | object | No | `null` | Key-value pairs for audit logging. Max 10 keys, key max 40 chars, value max 256 chars. Not sent to the sandbox. |
| `webhook_url` | string | No | `null` | Per-request webhook URL override for webhook delivery mode. HTTPS only. If omitted, uses the webhook URL configured on the API key. |
| `env_vars` | object | No | `null` | Environment variables injected into the sandbox for this execution. Keys must be uppercase letters, digits, and underscores (starting with a letter). See [environment variables](#environment-variables) for merge behavior and limits. |
| `sub_paths` | string[] | No | `null` | Narrow storage mounts to specific subdirectories. Each entry is `/data/{slug}/{path}`. Must respect connection-level and API key-level path constraints. Max 10 entries, 512 chars each. REST API only - deliberately excluded from MCP to prevent LLM-driven path manipulation. See [storage path scoping](#storage-path-scoping). |

The execution environment (runtime image, CPU, RAM, network policy, storage mounts, injected credentials) is configured in the [admin console](https://console.baponi.ai) and bound to your API key. You cannot override sandbox settings per-request. This is by design: the API key is the security boundary, and all configuration is centralized.

## Response body

A successful request returns `200 OK` with the execution result:

```json
{
  "success": true,
  "stdout": "Hello from Baponi!\n",
  "stderr": "",
  "exit_code": 0,
  "error": null
}
```

### Response fields

| Field | Type | Description |
|-------|------|-------------|
| `success` | boolean | `true` if `exit_code` is 0. |
| `stdout` | string | Standard output from the executed code. |
| `stderr` | string | Standard error from the executed code. |
| `exit_code` | integer | Process exit code. 0 means success, non-zero means the code exited with an error. |
| `error` | string or null | Error message if execution failed at the platform level (not a code error). `null` on success. |

`success: false` with a non-zero `exit_code` means your code ran but exited with an error (e.g., an unhandled exception). The `error` field is for platform-level failures. If `error` is non-null, the sandbox itself failed to run your code.
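
The three outcomes can be distinguished with a small helper. A sketch over the raw response dict, using the field names from the table above:

```python
def classify_result(result: dict) -> str:
    """Classify an inline execution result as "ok", "code_error", or "platform_error"."""
    if result.get("error") is not None:
        # Platform-level failure: the sandbox itself failed to run the code.
        return "platform_error"
    if result.get("exit_code", 0) != 0:
        # The code ran but exited non-zero (unhandled exception, sys.exit(1), ...).
        return "code_error"
    return "ok"
```
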

## Streaming execution (NDJSON)

To stream output in real time, set the `Accept` header to `application/x-ndjson`. The request body is the same as inline mode. The response is a chunked NDJSON stream where each line is a self-contained JSON object with a `type` field.

```bash
curl -N -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/x-ndjson" \
  -d '{"code": "import time\nfor i in range(3):\n    print(f\"step {i}\")\n    time.sleep(1)", "timeout": 60}'
```

### NDJSON event types

Every event includes a monotonic `seq` number for ordering and reconnection.

| `type` | Fields | Description |
|--------|--------|-------------|
| `status` | `trace_id`, `status`, `seq` | Execution has started. Emitted once at the beginning of the stream. |
| `output` | `stream`, `data`, `seq` | A line of output from the sandbox. `stream` is `"stdout"` or `"stderr"`. |
| `keepalive` | `seq` | Empty heartbeat sent every 15 seconds to prevent proxy and load balancer connection timeouts. |
| `result` | `trace_id`, `status`, `result`, `output_truncated`, `seq` | Execution completed (or was cancelled). `status` is `"success"`, `"failed"`, `"timeout"`, or `"cancelled"`. `result` contains the same fields as the inline JSON response. Always the last event. |

### Example stream

```jsonl
{"type":"status","trace_id":"trc_a1b2c3d4","status":"running","seq":1}
{"type":"output","stream":"stdout","data":"step 0\n","seq":2}
{"type":"output","stream":"stdout","data":"step 1\n","seq":3}
{"type":"output","stream":"stdout","data":"step 2\n","seq":4}
{"type":"result","trace_id":"trc_a1b2c3d4","status":"success","result":{"success":true,"stdout":"step 0\nstep 1\nstep 2\n","stderr":"","exit_code":0,"error":null},"output_truncated":false,"seq":5}
```

### Streaming behavior

- **Line-buffered output.** Output events are emitted on each newline (`\n`) from the sandbox process. Partial lines without a trailing newline are buffered until the next newline or until execution completes.
- **15-second keepalive interval.** The gateway emits a `keepalive` event every 15 seconds on idle streams to prevent proxies (nginx, Cloudflare, AWS ALB) from closing the connection.
- **`output_truncated` flag.** If the stream produced more than 10,000 output chunks, older chunks may be evicted from the reconnection buffer. The `result` event sets `output_truncated: true` to indicate this. The final `result.stdout` and `result.stderr` fields always contain the complete buffered output regardless.
- **Client disconnect.** If the client disconnects mid-stream, the execution continues to completion. Use the [execution status endpoint](/docs/api/executions.md#get-execution-status) to retrieve the final result and the [output reconnection endpoint](/docs/api/executions.md#reconnect-to-output-stream) to retrieve missed output chunks.
- **Cancellation.** A running streaming execution can be cancelled via `POST /v1/executions/{trace_id}/cancel`. The process is killed immediately and the stream emits a final `result` event with `status: "cancelled"` and `error: "Cancelled by user"`. See [Cancel a running execution](/docs/api/executions.md#cancel-a-running-execution).

### Streaming examples

```bash
# Stream output in real time (-N disables curl output buffering)
curl -N -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/x-ndjson" \
  -d '{
    "code": "for i in range(5):\n    import time; time.sleep(1)\n    print(f\"Processing batch {i}...\")",
    "timeout": 60
  }'
```

```python
from baponi import Baponi, OutputEvent

client = Baponi()

with client.execute_stream(
    "for i in range(5):\n    import time; time.sleep(1)\n    print(f'Processing batch {i}...')",
    timeout=60,
) as stream:
    for event in stream:
        if isinstance(event, OutputEvent):
            print(f"[{event.stream}] {event.data}", end="")
    result = stream.get_final_result()
    print(f"\nDone: exit_code={result.exit_code}")
```

```python
import json

import requests

response = requests.post(
    "https://api.baponi.ai/v1/sandbox/execute",
    headers={
        "Authorization": "Bearer sk-us-YOUR_API_KEY",
        "Content-Type": "application/json",
        "Accept": "application/x-ndjson",
    },
    json={
        "code": "for i in range(5):\n    import time; time.sleep(1)\n    print(f'Processing batch {i}...')",
        "timeout": 60,
    },
    stream=True,
)

for line in response.iter_lines():
    if line:
        event = json.loads(line)
        if event["type"] == "output":
            print(f"[{event['stream']}] {event['data']}", end="")
        elif event["type"] == "result":
            print(f"\nDone: exit_code={event['result']['exit_code']}")
```

Streaming execution requires a Pro or Enterprise plan. Free tier requests with `Accept: application/x-ndjson` return `429` with the message: "Streaming execution requires a Pro or Enterprise plan. Upgrade at https://baponi.ai/pricing"

## Webhook delivery

Send `Prefer: respond-async` to execute code asynchronously and receive the result via webhook.

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer sk-us-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Prefer: respond-async" \
  -d '{"code": "import time; time.sleep(60); print(\"done\")", "language": "python", "timeout": 120}'
```

The response is `202 Accepted` with a `trace_id`:

```json
{
  "trace_id": "trc_a1b2c3d4",
  "status": "running"
}
```

When the execution completes, Baponi POSTs the result to the webhook URL configured on your API key (or the per-request `webhook_url` if provided):

```json
{
  "event": "execution.completed",
  "trace_id": "trc_a1b2c3d4",
  "timestamp": "2026-03-16T12:00:00Z",
  "result": {
    "success": true,
    "stdout": "done\n",
    "stderr": "",
    "exit_code": 0,
    "duration_ms": 60142,
    "error": null
  },
  "output_truncated": false
}
```

If the execution is cancelled, the event is `execution.cancelled`.

### Webhook signing

If a webhook secret is configured on the API key, the delivery includes an `X-Baponi-Signature` header with an HMAC-SHA256 signature: `sha256=<hex>`. Verify it server-side to authenticate the request.
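
Verification recomputes the HMAC over the raw request body and compares it in constant time. A sketch, assuming the signature is computed over the exact raw body bytes as delivered:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Verify an X-Baponi-Signature header of the form "sha256=<hex>"."""
    if not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(signature_header[len("sha256="):], expected)
```

Always verify against the raw bytes you received, before any JSON parsing or re-serialization, since re-encoding can change key order or whitespace and break the digest.
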

### Webhook retry

Delivery is attempted up to 3 times with backoff (1s, 10s, 60s). If all attempts fail, `webhook_delivery_status` is set to `"failed"`. Poll [`GET /v1/executions/{trace_id}`](/docs/api/executions.md) to retrieve the result.

### Webhook URL configuration

Configure a default webhook URL on each API key via the admin console or API. Per-request `webhook_url` in the body overrides the key default. Both must be HTTPS. SSRF protection validates the URL before accepting the request.

## Error responses

Error responses use a structured JSON format with a machine-readable `error` code and a human-readable `message`:

```json
{
  "error": "validation_error",
  "message": "code must be between 1 byte and 1 MB"
}
```

### HTTP status codes

| Status | Error Code | When |
|--------|------------|------|
| `400` | `validation_error` | Invalid `code` length, unrecognized `language`, malformed `thread_id`, or `metadata` exceeds limits. |
| `401` | `unauthorized` | Missing, invalid, or revoked API key. |
| `409` | `conflict` | Another execution is already using this `thread_id`. Only one execution per thread at a time. |
| `429` | `rate_limited` | Concurrent execution limit exceeded for your plan, requested `timeout` exceeds your plan's maximum, or streaming requested on Free tier. |
| `503` | `service_unavailable` | No executor pod is available to handle the request. Retry with backoff. |
| `504` | `gateway_timeout` | Execution exceeded the specified `timeout`. The process was killed. |

Rate-limited responses include an actionable message:

```json
{
  "error": "rate_limited",
  "message": "concurrent execution limit reached (5/5). Upgrade to Pro for 100 concurrent executions."
}
```

See the [API Overview](/docs/api/overview.md#error-responses) for the full error reference.

## Examples

### Run Python, Node.js, or Bash

```bash
# Python (default language)
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "import sys; print(f\"Python {sys.version}\")"}'

# Node.js
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "console.log(`Node.js ${process.version}`)", "language": "node"}'

# Bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "echo \"Bash $BASH_VERSION\" && uname -a", "language": "bash"}'
```

```python
from baponi import Baponi

client = Baponi(api_key="sk-us-YOUR_API_KEY")  # or set BAPONI_API_KEY env var

# Python (default)
result = client.execute("import sys; print(f'Python {sys.version}')")
print(result.stdout)

# Node.js
result = client.execute("console.log(`Node.js ${process.version}`)", language="node")
print(result.stdout)

# Bash
result = client.execute("echo \"Bash $BASH_VERSION\" && uname -a", language="bash")
print(result.stdout)
```

### Persist state across calls with thread_id

Without `thread_id`, every execution is fully ephemeral: nothing persists. With `thread_id`, the `/home/baponi` directory is saved to cloud storage after execution and restored on the next call with the same `thread_id`. Installed packages, created files, and environment modifications in the home directory all carry over.

```bash
# Call 1: Install a package and create a file
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "pip install --user pandas && echo done",
    "language": "bash",
    "thread_id": "analysis-session-1"
  }'

# Call 2: pandas is already installed, pick up where you left off
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import pandas as pd; print(pd.__version__)",
    "language": "python",
    "thread_id": "analysis-session-1"
  }'
```

```python
from baponi import Baponi

client = Baponi()

# Call 1: Install pandas
result = client.execute(
    "pip install --user pandas && echo done",
    language="bash",
    thread_id="analysis-session-1",
)
print(result.stdout)  # done

# Call 2: pandas is already installed
result = client.execute(
    "import pandas as pd; print(pd.__version__)",
    thread_id="analysis-session-1",
)
print(result.stdout)  # 2.2.3
```

Between calls, nothing is running and there is no idle billing. Baponi saves only the diff to cloud storage and restores it on the next call. You can resume a thread minutes or days later.

### Attach metadata for audit logging

Metadata is attached to the execution record for debugging and audit queries. It is **not** sent to the sandbox; your code cannot read metadata values.

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "print(\"processing order\")",
    "metadata": {
      "user_id": "usr_abc123",
      "request_id": "req_xyz789",
      "agent": "order-processor-v2"
    }
  }'
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute(
    "print('processing order')",
    metadata={
        "user_id": "usr_abc123",
        "request_id": "req_xyz789",
        "agent": "order-processor-v2",
    },
)
```

Query execution history with metadata filters in the [admin console](https://console.baponi.ai).

### Set a custom timeout

The default timeout is 60 seconds. Free tier maximum is 60 seconds. Pro tier maximum is 3600 seconds (1 hour). Enterprise is configurable. When the timeout is reached, the process is killed immediately.

```bash
# Long-running data processing (Pro tier required for >60s)
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import time; time.sleep(120); print(\"done\")",
    "timeout": 180
  }'
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute(
    "import time; time.sleep(120); print('done')",
    timeout=180,
)
print(result.stdout)  # done
```

If the timeout is exceeded, the response has `exit_code: -1` and `error` describes the timeout:

```json
{
  "success": false,
  "stdout": "",
  "stderr": "",
  "exit_code": -1,
  "error": "execution timed out after 180s"
}
```

### Handle errors in your code

```bash
# Code that raises an exception
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "raise ValueError(\"something went wrong\")"}'
```

Response:

```json
{
  "success": false,
  "stdout": "",
  "stderr": "Traceback (most recent call last):\n  File \"/home/baponi/main.py\", line 1, in <module>\n    raise ValueError(\"something went wrong\")\nValueError: something went wrong\n",
  "exit_code": 1,
  "error": null
}
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute("raise ValueError('something went wrong')")

if not result.success:
    print(f"Code failed with exit code {result.exit_code}")
    print(f"stderr: {result.stderr}")
else:
    print(result.stdout)
```

Check `success` (or `exit_code`) to determine whether your code ran successfully, and check `error` to determine whether the platform itself failed. A non-zero `exit_code` with `error: null` means your code ran but exited with an error. This is normal for unhandled exceptions, assertion failures, or `sys.exit(1)`.

### Pass environment variables

Environment variables are injected into the sandbox and accessible to your code via standard APIs (`os.environ` in Python, `process.env` in Node.js, `$VAR` in Bash). Use them to pass configuration, API keys, or feature flags without embedding them in code. Per-request `env_vars` are merged with variables set on the sandbox and API key in the admin console, with request values taking highest precedence. See [environment variables](#environment-variables) for the full merge model.

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import os; print(f\"Target: {os.environ[\"TARGET_URL\"]}\")",
    "env_vars": {
      "TARGET_URL": "https://api.example.com",
      "LOG_LEVEL": "debug"
    }
  }'
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute(
    "import os; print(f'Target: {os.environ[\"TARGET_URL\"]}')",
    env_vars={
        "TARGET_URL": "https://api.example.com",
        "LOG_LEVEL": "debug",
    },
)
print(result.stdout)  # Target: https://api.example.com
```

See [environment variables](#environment-variables) for the full merge model and validation rules.

### Narrow storage mounts with sub_paths

When your API key has storage connections (BYOB buckets or managed volumes), the sandbox mounts them at `/data/{slug}/`. By default, the mount exposes everything the connection and API key allow. Pass `sub_paths` to narrow a mount to a specific subdirectory for this execution only.

```bash
# Only mount the Q1 reports subdirectory from the "company-data" bucket
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import os; print(os.listdir(\"/data/company-data/\"))",
    "sub_paths": ["/data/company-data/reports/q1-2026"]
  }'
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute(
    "import os; print(os.listdir('/data/company-data/'))",
    sub_paths=["/data/company-data/reports/q1-2026"],
)
print(result.stdout)  # Only files under reports/q1-2026/
```

The sandbox sees `/data/company-data/` as the mount point, but only the `reports/q1-2026` subtree is accessible. The agent cannot see or access anything outside that path.

See [storage path scoping](#storage-path-scoping) for the full three-level constraint model and validation rules.

### Full request with all parameters

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import pandas as pd\ndf = pd.read_csv(\"/data/company-data/customers/customers.csv\")\nprint(df.describe().to_json())",
    "language": "python",
    "timeout": 60,
    "thread_id": "data-analysis-session",
    "metadata": {
      "user_id": "usr_abc123",
      "workflow": "csv-analysis",
      "step": "describe"
    },
    "env_vars": {
      "OUTPUT_FORMAT": "json",
      "LOG_LEVEL": "info"
    },
    "sub_paths": ["/data/company-data/customers"]
  }'
```

```python
from baponi import Baponi

client = Baponi()

result = client.execute(
    code='import pandas as pd\ndf = pd.read_csv("/data/company-data/customers/customers.csv")\nprint(df.describe().to_json())',
    language="python",
    timeout=60,
    thread_id="data-analysis-session",
    metadata={
        "user_id": "usr_abc123",
        "workflow": "csv-analysis",
        "step": "describe",
    },
    env_vars={
        "OUTPUT_FORMAT": "json",
        "LOG_LEVEL": "info",
    },
    sub_paths=["/data/company-data/customers"],
)

if result.success:
    print(result.stdout)
else:
    print(f"Failed: {result.stderr}")
```

## Behavior reference

### Supported languages and default runtimes

| `language` value | Runtime | Version |
|------------------|---------|---------|
| `"python"` (default) | Python | 3.14 |
| `"node"` | Node.js | 25 |
| `"bash"` | GNU Bash | Latest |

These are the default runtime images. You can import any OCI-compatible image through the [admin console](https://console.baponi.ai) and Baponi auto-discovers available interpreters. Custom images support any language or toolchain.

### thread_id: stateful execution across calls

- **Without `thread_id`:** Fully ephemeral. Nothing persists after the call returns.
- **With `thread_id`:** The `/home/baponi` directory is saved to cloud storage after execution and restored on the next call with the same `thread_id`.
- **One execution per thread at a time.** Concurrent requests to the same `thread_id` return `409 Conflict`. Use different `thread_id` values for parallel work.
- **What persists:** Installed packages (pip, npm), created files, environment modifications, anything written to `/home/baponi`.
- **What does not persist:** `/tmp` is always ephemeral. System-level changes outside `/home/baponi` are discarded.
- **Format:** Max 128 characters. Alphanumeric characters, hyphens (`-`), and underscores (`_`) only. Must start with an alphanumeric character.
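
The format rules translate directly into a client-side pre-check (a sketch; the server enforces the same constraints):

```python
import re

# Max 128 chars total: one alphanumeric first char plus up to 127 more
# alphanumerics, hyphens, or underscores.
THREAD_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{0,127}$")

def is_valid_thread_id(thread_id: str) -> bool:
    """Return True if thread_id satisfies the documented format rules."""
    return bool(THREAD_ID_RE.fullmatch(thread_id))
```
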

### Network access

Network policy is determined by the sandbox configuration bound to your API key (configured in the admin console):

| Policy | Behavior |
|--------|----------|
| `blocked` (default) | No outbound network access. DNS, HTTP, and all other protocols are blocked. |
| `unrestricted` | Full internet access. Outbound bytes are metered internally for billing. |

You cannot change the network policy per-request. To switch between blocked and unrestricted, create separate API keys bound to different sandboxes.

### Timeout limits by plan tier

| Plan | Default | Maximum |
|------|---------|---------|
| Free | 60s | 60s |
| Pro ($97/mo) | 60s | 3600s (1 hour) |
| Enterprise | 60s | Configurable |

Requesting a `timeout` above your plan's maximum returns `429` with an upgrade message. It is not silently capped.

### Metadata constraints

| Constraint | Limit |
|------------|-------|
| Max keys | 10 |
| Key length | 1-40 characters |
| Value length | Max 256 characters |

Metadata is stored on the execution record and queryable in the [admin console](https://console.baponi.ai). It is never sent to the sandbox. Your executed code cannot read metadata values.

### Environment variables

Environment variables can be set at three levels. When the same key appears at multiple levels, the most specific scope wins:

1. **Sandbox** (lowest precedence) - configured in the admin console, applied to every execution using that sandbox.
2. **API key** - configured in the admin console on each API key, applied to every execution using that key.
3. **Request** (highest precedence) - passed in the `env_vars` field of the request body, applied to that execution only.

All three sources are merged at execution time. If the same key exists at multiple levels, the request value overrides the API key value, which overrides the sandbox value.
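
The precedence order is equivalent to layering three dicts, with later (more specific) layers winning. A sketch:

```python
def merge_env(sandbox_env: dict, api_key_env: dict, request_env: dict) -> dict:
    """Merge env vars with request > API key > sandbox precedence."""
    # Later unpacking overrides earlier keys, matching the documented order.
    return {**sandbox_env, **api_key_env, **request_env}
```
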

#### Validation rules

| Constraint | Limit |
|------------|-------|
| Max variables (combined after merge) | 50 |
| Key format | Uppercase letters, digits, underscores. Must start with a letter. |
| Key length | 1-128 characters |
| Value length | Max 4,096 characters |
| Total size (all keys + values combined) | 64 KB |
| Reserved names | System names (e.g., `PATH`, `HOME`) and platform prefixes are blocked. |

Keys and values are validated at each level independently. After the 3-way merge, the combined set is checked against the 50-variable and 64 KB limits. Requests that exceed post-merge limits return `400 validation_error`.
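
These limits can be pre-checked client-side before sending a request. A sketch; the server remains authoritative, and the reserved-name set here is illustrative rather than the platform's actual list:

```python
import re

# Uppercase letters, digits, underscores; must start with a letter; 1-128 chars.
ENV_KEY_RE = re.compile(r"^[A-Z][A-Z0-9_]{0,127}$")
RESERVED = {"PATH", "HOME"}  # illustrative; the platform blocks more names

def validate_env(merged: dict) -> None:
    """Raise ValueError if merged env vars violate the documented limits."""
    if len(merged) > 50:
        raise ValueError("more than 50 variables after merge")
    total = 0
    for key, value in merged.items():
        if not ENV_KEY_RE.fullmatch(key):
            raise ValueError(f"invalid key: {key!r}")
        if key in RESERVED:
            raise ValueError(f"reserved name: {key}")
        if len(value) > 4096:
            raise ValueError(f"value too long for {key}")
        total += len(key) + len(value)
    if total > 64 * 1024:
        raise ValueError("combined size exceeds 64 KB")
```
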

### Storage path scoping

Storage mounts support three levels of path constraints: connection prefix, API key prefix, and per-request `sub_paths`. Each level can only narrow the scope, never widen it. Constraints are enforced server-side with segment-aware prefix matching and path traversal rejection.

| Constraint | Limit |
|------------|-------|
| Max `sub_paths` entries per request | 10 |
| Entry format | `/data/{slug}/{path}` where `{slug}` matches a mounted storage connection |
| Entry length | Max 512 characters |
| Duplicate slugs | One entry per connection per request |
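
A request-side pre-check for these constraints might look like the following (a sketch; the slug pattern is an assumption, and the server additionally enforces segment-aware prefix matching against connection and API key scopes):

```python
import re

# Assumed slug pattern: alphanumeric start, then alphanumerics/hyphens/underscores.
SUB_PATH_RE = re.compile(r"^/data/([A-Za-z0-9][A-Za-z0-9_-]*)/(.+)$")

def validate_sub_paths(sub_paths: list) -> None:
    """Check entry count, length, format, duplicate slugs, and traversal."""
    if len(sub_paths) > 10:
        raise ValueError("max 10 sub_paths entries")
    seen = set()
    for entry in sub_paths:
        if len(entry) > 512:
            raise ValueError("entry exceeds 512 characters")
        match = SUB_PATH_RE.fullmatch(entry)
        if match is None:
            raise ValueError(f"not of the form /data/{{slug}}/{{path}}: {entry}")
        if ".." in entry.split("/"):
            raise ValueError(f"path traversal rejected: {entry}")
        slug = match.group(1)
        if slug in seen:
            raise ValueError(f"duplicate slug: {slug}")
        seen.add(slug)
```
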

`sub_paths` is deliberately excluded from MCP to prevent prompt injection attacks from manipulating path selection. MCP executions use connection and API key prefixes set by an admin in the console.

For the full constraint model, worked examples, common patterns, and security properties, see the [Storage Path Scoping guide](/docs/guides/storage-path-scoping.md).

## Plan limits that affect this endpoint

| Limit | Free | Pro | Enterprise |
|-------|------|-----|------------|
| Max CPU per sandbox | 1 core | 4 cores | Unlimited |
| Max RAM per sandbox | 1 GiB | 4 GiB | Unlimited |
| Concurrent executions | 5 | 100 | Unlimited |
| Max timeout | 60s | 3600s (1 hour) | Unlimited |
| Streaming / async delivery | No | Yes | Yes |

See [Pricing](/pricing) for credit costs. One credit = 60 seconds of execution at 1 CPU + 1 GiB RAM. Credits scale proportionally with time and resources.