
Execute Code

Baponi is a sandboxed code execution platform for AI agents. The Execute Code endpoint runs Python, Node.js, or Bash in a multi-layer isolated sandbox and returns stdout, stderr, and an exit code. Sandbox overhead is typically 12-18ms. One required parameter (code), one HTTP call, structured JSON response.

POST https://api.baponi.ai/v1/sandbox/execute

Authentication: Bearer token with an API key (sk-us-...).

Content-Type: application/json

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer sk-us-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "print(\"Hello from Baponi!\")", "language": "python"}'
```

The execute endpoint supports three delivery modes, selected via request headers. The request body is identical for all modes (with an optional webhook_url field for webhook mode).

| Mode | Header | Response | Plan requirement |
| --- | --- | --- | --- |
| Inline (default) | Accept: application/json | Buffered JSON response after execution completes | All plans |
| Streaming | Accept: application/x-ndjson | Real-time NDJSON stream of output as it is produced | Pro or Enterprise |
| Webhook | Prefer: respond-async | 202 Accepted with trace_id, result POSTed to webhook URL | Pro or Enterprise |

Inline mode is the default. The connection stays open until execution finishes, then returns the full result as a single JSON object. This is the simplest integration path and works for executions that complete in seconds.

Streaming mode delivers stdout and stderr line-by-line as the code runs. Use it for long-running executions where you need real-time output, progress updates, or partial results before completion. See Streaming execution (NDJSON) below.

Webhook mode returns 202 Accepted immediately with a trace_id. The execution runs in the background and the result is POSTed to your webhook URL on completion. Use it for long-running executions, CI pipelines, or unreliable clients that can’t hold a connection open. See Webhook delivery below.
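
The mode selection above can be sketched as a small header-building helper. This is illustrative client code, not part of any Baponi SDK; the header names and values come from the table above.

```python
def build_headers(api_key: str, mode: str = "inline") -> dict:
    """Return HTTP headers for inline, streaming, or webhook delivery."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if mode == "streaming":
        headers["Accept"] = "application/x-ndjson"  # real-time NDJSON stream
    elif mode == "webhook":
        headers["Prefer"] = "respond-async"         # 202 Accepted + trace_id
    else:
        headers["Accept"] = "application/json"      # buffered inline result
    return headers
```

The request body stays the same in all three cases; only the headers change.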

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| code | string | Yes | - | Code to execute. 1 byte to 1 MB. |
| language | string | No | "python" | "python", "node", or "bash" |
| timeout | integer | No | 60 | Max execution time in seconds. Range: 1-60 (Free), 1-3600 (Pro/Enterprise). |
| thread_id | string | No | null | Persist the /home/baponi directory across calls. Max 128 chars, alphanumeric plus hyphens/underscores. Must start with an alphanumeric character. |
| metadata | object | No | null | Key-value pairs for audit logging. Max 10 keys, key max 40 chars, value max 256 chars. Not sent to the sandbox. |
| webhook_url | string | No | null | Per-request webhook URL override for webhook delivery mode. HTTPS only. If omitted, uses the webhook URL configured on the API key. |
| env_vars | object | No | null | Environment variables injected into the sandbox for this execution. Keys must be uppercase letters, digits, and underscores (starting with a letter). See environment variables for merge behavior and limits. |
| sub_paths | string[] | No | null | Narrow storage mounts to specific subdirectories. Each entry is /data/{slug}/{path}. Must respect connection-level and API key-level path constraints. Max 10 entries, 512 chars each. REST API only - deliberately excluded from MCP to prevent LLM-driven path manipulation. See storage path scoping. |

The execution environment (runtime image, CPU, RAM, network policy, storage mounts, injected credentials) is configured in the admin console and bound to your API key. You cannot override sandbox settings per-request. This is by design: the API key is the security boundary, and all configuration is centralized.

A successful request returns 200 OK with the execution result:

```json
{
  "success": true,
  "stdout": "Hello from Baponi!\n",
  "stderr": "",
  "exit_code": 0,
  "error": null
}
```

| Field | Type | Description |
| --- | --- | --- |
| success | boolean | true if exit_code is 0. |
| stdout | string | Standard output from the executed code. |
| stderr | string | Standard error from the executed code. |
| exit_code | integer | Process exit code. 0 means success, non-zero means the code exited with an error. |
| error | string or null | Error message if execution failed at the platform level (not a code error). null on success. |
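
The error field distinguishes platform failures from failures in your code, which matters for retry logic. A minimal sketch of that distinction (the classify helper is illustrative, not a Baponi API):

```python
import json

def classify(result: dict) -> str:
    """Separate platform failures from errors raised by the executed code."""
    if result["error"] is not None:
        return "platform_error"  # timeout, infrastructure, etc.
    if result["exit_code"] == 0:
        return "ok"
    return "code_error"          # the code itself exited non-zero

body = json.loads(
    '{"success": true, "stdout": "Hello from Baponi!\\n",'
    ' "stderr": "", "exit_code": 0, "error": null}'
)
print(classify(body))  # → ok
```

A platform_error may be worth retrying; a code_error will fail the same way until the code changes.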

To stream output in real time, set the Accept header to application/x-ndjson. The request body is the same as inline mode. The response is a chunked NDJSON stream where each line is a self-contained JSON object with a type field.

```sh
curl -N -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/x-ndjson" \
  -d '{"code": "import time\nfor i in range(3):\n print(f\"step {i}\")\n time.sleep(1)", "timeout": 60}'
```

Every event includes a monotonic seq number for ordering and reconnection.

| type | Fields | Description |
| --- | --- | --- |
| status | trace_id, status, seq | Execution has started. Emitted once at the beginning of the stream. |
| output | stream, data, seq | A line of output from the sandbox. stream is "stdout" or "stderr". |
| keepalive | seq | Empty heartbeat sent every 15 seconds to prevent proxy and load balancer connection timeouts. |
| result | trace_id, status, result, output_truncated, seq | Execution completed (or was cancelled). status is "success", "failed", "timeout", or "cancelled". result contains the same fields as the inline JSON response. Always the last event. |

```
{"type":"status","trace_id":"trc_a1b2c3d4","status":"running","seq":1}
{"type":"output","stream":"stdout","data":"step 0\n","seq":2}
{"type":"output","stream":"stdout","data":"step 1\n","seq":3}
{"type":"output","stream":"stdout","data":"step 2\n","seq":4}
{"type":"result","trace_id":"trc_a1b2c3d4","status":"success","result":{"success":true,"stdout":"step 0\nstep 1\nstep 2\n","stderr":"","exit_code":0,"error":null},"output_truncated":false,"seq":5}
```

  • Line-buffered output. Output events are emitted on each newline (\n) from the sandbox process. Partial lines without a trailing newline are buffered until the next newline or until execution completes.
  • 15-second keepalive interval. The gateway emits a keepalive event every 15 seconds on idle streams to prevent proxies (nginx, Cloudflare, AWS ALB) from closing the connection.
  • output_truncated flag. If the stream produced more than 10,000 output chunks, older chunks may be evicted from the reconnection buffer. The result event sets output_truncated: true to indicate this. The final result.stdout and result.stderr fields always contain the complete buffered output regardless.
  • Client disconnect. If the client disconnects mid-stream, the execution continues to completion. Use the execution status endpoint to retrieve the final result and the output reconnection endpoint to retrieve missed output chunks.
  • Cancellation. A running streaming execution can be cancelled via POST /v1/executions/{trace_id}/cancel. The process is killed immediately and the stream emits a final result event with status: "cancelled" and error: "Cancelled by user". See Cancel a running execution.
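
A minimal stream consumer that follows the rules above: parse one JSON object per line, dispatch on type, and use seq to detect gaps after a reconnect. The consume helper is a sketch, not an official client.

```python
import json

def consume(lines):
    """Yield (kind, payload) tuples from NDJSON events, checking seq order."""
    expected_seq = 1
    for line in lines:
        event = json.loads(line)
        if event["seq"] != expected_seq:
            raise RuntimeError(f"gap in stream at seq {event['seq']}")
        expected_seq += 1
        if event["type"] == "output":
            yield event["stream"], event["data"]
        elif event["type"] == "result":
            yield "result", event["result"]
        # "status" and "keepalive" events carry no output to forward

sample = [
    '{"type":"status","trace_id":"trc_a1b2c3d4","status":"running","seq":1}',
    '{"type":"output","stream":"stdout","data":"step 0\\n","seq":2}',
    '{"type":"result","trace_id":"trc_a1b2c3d4","status":"success",'
    '"result":{"success":true,"stdout":"step 0\\n","stderr":"",'
    '"exit_code":0,"error":null},"output_truncated":false,"seq":3}',
]
for kind, payload in consume(sample):
    print(kind, payload)
```

On a gap, a real client would call the output reconnection endpoint with the last seq it saw rather than raising.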
```sh
# Stream output in real time (-N disables curl output buffering)
curl -N -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/x-ndjson" \
  -d '{
    "code": "for i in range(5):\n import time; time.sleep(1)\n print(f\"Processing batch {i}...\")",
    "timeout": 60
  }'
```

Send Prefer: respond-async to execute code asynchronously and receive the result via webhook.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer sk-us-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Prefer: respond-async" \
  -d '{"code": "import time; time.sleep(60); print(\"done\")", "language": "python", "timeout": 120}'
```

The response is 202 Accepted with a trace_id:

```json
{
  "trace_id": "trc_a1b2c3d4",
  "status": "running"
}
```

When the execution completes, Baponi POSTs the result to the webhook URL configured on your API key (or the per-request webhook_url if provided):

```json
{
  "event": "execution.completed",
  "trace_id": "trc_a1b2c3d4",
  "timestamp": "2026-03-16T12:00:00Z",
  "result": {
    "success": true,
    "stdout": "done\n",
    "stderr": "",
    "exit_code": 0,
    "duration_ms": 60142,
    "error": null
  },
  "output_truncated": false
}
```

If the execution is cancelled, the event is execution.cancelled.

If a webhook secret is configured on the API key, the delivery includes an X-Baponi-Signature header with an HMAC-SHA256 signature: sha256=<hex>. Verify it server-side to authenticate the request.
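
Verification can be done with the standard library. This sketch assumes the signature is HMAC-SHA256 over the raw request body, hex-encoded with a sha256= prefix, as described above; the secret value is made up for illustration.

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, header: str) -> bool:
    """Check an X-Baponi-Signature header against the raw request body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest makes the comparison constant-time
    return hmac.compare_digest(expected, header)

body = b'{"event":"execution.completed","trace_id":"trc_a1b2c3d4"}'
secret = "whsec_example"  # hypothetical secret for this sketch
header = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, header))  # → True
```

Always verify against the raw bytes you received, before any JSON parsing, since re-serialization can change the byte sequence.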

Delivery is attempted up to 3 times with backoff (1s, 10s, 60s). If all attempts fail, webhook_delivery_status is set to "failed". Poll GET /v1/executions/{trace_id} to retrieve the result.

Configure a default webhook URL on each API key via the admin console or API. Per-request webhook_url in the body overrides the key default. Both must be HTTPS. SSRF protection validates the URL before accepting the request.

Error responses use a structured JSON format with a machine-readable error code and a human-readable message:

```json
{
  "error": "validation_error",
  "message": "code must be between 1 byte and 1 MB"
}
```

| Status | Error Code | When |
| --- | --- | --- |
| 400 | validation_error | Invalid code length, unrecognized language, malformed thread_id, or metadata exceeds limits. |
| 401 | unauthorized | Missing, invalid, or revoked API key. |
| 409 | conflict | Another execution is already using this thread_id. Only one execution per thread at a time. |
| 429 | rate_limited | Concurrent execution limit exceeded for your plan, requested timeout exceeds your plan's maximum, or streaming requested on Free tier. |
| 503 | service_unavailable | No executor pod is available to handle the request. Retry with backoff. |
| 504 | gateway_timeout | Execution exceeded the specified timeout. The process was killed. |

Rate-limited responses include an actionable message:

```json
{
  "error": "rate_limited",
  "message": "concurrent execution limit reached (5/5). Upgrade to Pro for 100 concurrent executions."
}
```

See the API Overview for the full error reference.
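
A simple retry policy consistent with the table above: 503 is explicitly retryable, and a 429 from the concurrency limit clears once in-flight executions finish, while other 4xx errors will fail identically on retry. The helper names and backoff curve are illustrative choices, not platform requirements.

```python
RETRYABLE = {429, 503}  # transient: concurrency limit, no executor available

def should_retry(status: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient statuses, up to max_attempts tries."""
    return status in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int) -> float:
    """Exponential backoff, capped at 30 seconds."""
    return min(2 ** attempt, 30)
```

Note that a 429 caused by requesting a timeout above your plan's maximum is not transient; inspect the message before retrying.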

```sh
# Python (default language)
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "import sys; print(f\"Python {sys.version}\")"}'

# Node.js
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "console.log(`Node.js ${process.version}`)", "language": "node"}'

# Bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "echo \"Bash $BASH_VERSION\" && uname -a", "language": "bash"}'
```

Without thread_id, every execution is fully ephemeral: nothing persists between calls. With thread_id, the /home/baponi directory is saved to cloud storage after execution and restored on the next call with the same thread_id. Installed packages, created files, and environment modifications in the home directory all carry over.

```sh
# Call 1: Install a package and create a file
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "pip install --user pandas && echo done",
    "language": "bash",
    "thread_id": "analysis-session-1"
  }'

# Call 2: pandas is already installed, pick up where you left off
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import pandas as pd; print(pd.__version__)",
    "language": "python",
    "thread_id": "analysis-session-1"
  }'
```

Between calls, nothing is running and there is no idle billing. Baponi saves only the diff to cloud storage and restores it on the next call. You can resume a thread minutes or days later.

Metadata is attached to the execution record for debugging and audit queries. It is NOT sent to the sandbox. Your code cannot read metadata values.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "print(\"processing order\")",
    "metadata": {
      "user_id": "usr_abc123",
      "request_id": "req_xyz789",
      "agent": "order-processor-v2"
    }
  }'
```

Query execution history with metadata filters in the admin console.

The default timeout is 60 seconds. Free tier maximum is 60 seconds. Pro tier maximum is 3600 seconds (1 hour). Enterprise is configurable. When the timeout is reached, the process is killed immediately.

```sh
# Long-running data processing (Pro tier required for >60s)
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import time; time.sleep(120); print(\"done\")",
    "timeout": 180
  }'
```

If the timeout is exceeded, the response has exit_code: -1 and error describes the timeout:

```json
{
  "success": false,
  "stdout": "",
  "stderr": "",
  "exit_code": -1,
  "error": "execution timed out after 30s"
}
```

```sh
# Code that raises an exception
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "raise ValueError(\"something went wrong\")"}'
```

Response:

```json
{
  "success": false,
  "stdout": "",
  "stderr": "Traceback (most recent call last):\n File \"/home/baponi/main.py\", line 1, in <module>\n raise ValueError(\"something went wrong\")\nValueError: something went wrong\n",
  "exit_code": 1,
  "error": null
}
```

Environment variables are injected into the sandbox and accessible to your code via standard APIs (os.environ in Python, process.env in Node.js, $VAR in Bash). Use them to pass configuration, API keys, or feature flags without embedding them in code. Per-request env_vars are merged with variables set on the sandbox and API key in the admin console, with request values taking highest precedence. See environment variables for the full merge model.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import os; print(f\"Target: {os.environ[\"TARGET_URL\"]}\")",
    "env_vars": {
      "TARGET_URL": "https://api.example.com",
      "LOG_LEVEL": "debug"
    }
  }'
```

See environment variables for the full merge model and validation rules.

When your API key has storage connections (BYOB buckets or managed volumes), the sandbox mounts them at /data/{slug}/. By default, the mount exposes everything the connection and API key allow. Pass sub_paths to narrow a mount to a specific subdirectory for this execution only.

```sh
# Only mount the Q1 reports subdirectory from the "company-data" bucket
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import os; print(os.listdir(\"/data/company-data/\"))",
    "sub_paths": ["/data/company-data/reports/q1-2026"]
  }'
```

The sandbox sees /data/company-data/ as the mount point, but only the reports/q1-2026 subtree is accessible. The agent cannot see or access anything outside that path.

See storage path scoping for the full three-level constraint model and validation rules.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import os, pandas as pd\ndf = pd.read_csv(\"/data/company-data/customers.csv\")\nprint(df.describe().to_json())",
    "language": "python",
    "timeout": 60,
    "thread_id": "data-analysis-session",
    "metadata": {
      "user_id": "usr_abc123",
      "workflow": "csv-analysis",
      "step": "describe"
    },
    "env_vars": {
      "OUTPUT_FORMAT": "json",
      "LOG_LEVEL": "info"
    },
    "sub_paths": ["/data/company-data/customers"]
  }'
```

| language value | Runtime | Version |
| --- | --- | --- |
| "python" (default) | Python | 3.14 |
| "node" | Node.js | 25 |
| "bash" | GNU Bash | Latest |

These are the default runtime images. You can import any OCI-compatible image through the admin console and Baponi auto-discovers available interpreters. Custom images support any language or toolchain.

thread_id: stateful execution across calls

  • Without thread_id: Fully ephemeral. Nothing persists after the call returns.
  • With thread_id: The /home/baponi directory is saved to cloud storage after execution and restored on the next call with the same thread_id.
  • One execution per thread at a time. Concurrent requests to the same thread_id return 409 Conflict. Use different thread_id values for parallel work.
  • What persists: Installed packages (pip, npm), created files, environment modifications, anything written to /home/baponi.
  • What does not persist: /tmp is always ephemeral. System-level changes outside /home/baponi are discarded.
  • Format: Max 128 characters. Alphanumeric characters, hyphens (-), and underscores (_) only. Must start with an alphanumeric character.
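
The format rule above can be checked client-side before sending a request, so a malformed thread_id fails fast instead of costing a round trip. The regex mirrors the documented rules; the helper itself is illustrative.

```python
import re

# Starts alphanumeric, then up to 127 more alphanumerics/hyphens/underscores
THREAD_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{0,127}$")

def valid_thread_id(thread_id: str) -> bool:
    return bool(THREAD_ID_RE.fullmatch(thread_id))

print(valid_thread_id("analysis-session-1"))   # → True
print(valid_thread_id("-starts-with-hyphen"))  # → False
```

The server performs the same validation and returns 400 validation_error for a malformed thread_id.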

Network policy is determined by the sandbox configuration bound to your API key (configured in the admin console):

| Policy | Behavior |
| --- | --- |
| blocked (default) | No outbound network access. DNS, HTTP, and all other protocols are blocked. |
| unrestricted | Full internet access. Outbound bytes are metered internally for billing. |

You cannot change the network policy per-request. To switch between blocked and unrestricted, create separate API keys bound to different sandboxes.

| Plan | Default | Maximum |
| --- | --- | --- |
| Free | 60s | 60s |
| Pro ($97/mo) | 60s | 3600s (1 hour) |
| Enterprise | 60s | Configurable |

Requesting a timeout above your plan’s maximum returns 429 with an upgrade message. It is not silently capped.

| Constraint | Limit |
| --- | --- |
| Max keys | 10 |
| Key length | 1-40 characters |
| Value length | Max 256 characters |

Metadata is stored on the execution record and queryable in the admin console. It is never sent to the sandbox. Your executed code cannot read metadata values.

Environment variables can be set at three levels. When the same key appears at multiple levels, the most specific scope wins:

  1. Sandbox (lowest precedence) - configured in the admin console, applied to every execution using that sandbox.
  2. API key - configured in the admin console on each API key, applied to every execution using that key.
  3. Request (highest precedence) - passed in the env_vars field of the request body, applied to that execution only.

All three sources are merged at execution time. If the same key exists at multiple levels, the request value overrides the API key value, which overrides the sandbox value.
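
The precedence order is exactly "later source wins", which in Python is a dict merge. The variable names below are invented for illustration; only the merge order comes from the rules above.

```python
def merge_env(sandbox: dict, api_key: dict, request: dict) -> dict:
    """Merge the three levels; request beats API key beats sandbox."""
    return {**sandbox, **api_key, **request}

merged = merge_env(
    {"LOG_LEVEL": "warn", "REGION": "us-east-1"},  # sandbox defaults
    {"LOG_LEVEL": "info"},                         # API key override
    {"LOG_LEVEL": "debug"},                        # per-request override
)
print(merged["LOG_LEVEL"])  # → debug
print(merged["REGION"])     # → us-east-1
```

Keys set only at a lower level (REGION here) survive the merge unchanged.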

| Constraint | Limit |
| --- | --- |
| Max variables (combined after merge) | 50 |
| Key format | Uppercase letters, digits, underscores. Must start with a letter. |
| Key length | 1-128 characters |
| Value length | Max 4,096 characters |
| Total size (all keys + values combined) | 64 KB |
| Reserved names | System names (e.g., PATH, HOME) and platform prefixes are blocked. |

Keys and values are validated at each level independently. After the 3-way merge, the combined set is checked against the 50-variable and 64 KB limits. Requests that exceed post-merge limits return 400 validation_error.

Storage mounts support three levels of path constraints: connection prefix, API key prefix, and per-request sub_paths. Each level can only narrow the scope, never widen it. Constraints are enforced server-side with segment-aware prefix matching and path traversal rejection.

| Constraint | Limit |
| --- | --- |
| Max sub_paths entries per request | 10 |
| Entry format | /data/{slug}/{path} where {slug} matches a mounted storage connection |
| Entry length | Max 512 characters |
| Duplicate slugs | One entry per connection per request |
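
Segment-aware prefix matching is the key detail: a naive string prefix check would let /data/x/reports match /data/x/reports2. A sketch of the idea (illustrative, not Baponi's actual implementation):

```python
def narrows(allowed: str, requested: str) -> bool:
    """True if `requested` sits at or under `allowed`, comparing whole segments."""
    a = [s for s in allowed.split("/") if s]
    r = [s for s in requested.split("/") if s]
    if ".." in r or "." in r:
        return False              # reject path traversal components
    return r[: len(a)] == a       # every allowed segment must match exactly

print(narrows("/data/company-data/reports", "/data/company-data/reports/q1-2026"))   # → True
print(narrows("/data/company-data/reports", "/data/company-data/reports2"))          # → False
print(narrows("/data/company-data/reports", "/data/company-data/reports/../secret")) # → False
```

Because each level can only narrow, a request-level sub_path must pass this check against both the connection prefix and the API key prefix.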

sub_paths is deliberately excluded from MCP to prevent prompt injection attacks from manipulating path selection. MCP executions use connection and API key prefixes set by an admin in the console.

For the full constraint model, worked examples, common patterns, and security properties, see the Storage Path Scoping guide.

| Limit | Free | Pro | Enterprise |
| --- | --- | --- | --- |
| Max CPU per sandbox | 1 core | 4 cores | Unlimited |
| Max RAM per sandbox | 1 GiB | 4 GiB | Unlimited |
| Concurrent executions | 5 | 100 | Unlimited |
| Max timeout | 60s | 3600s (1 hour) | Unlimited |
| Streaming / async delivery | No | Yes | Yes |

See Pricing for credit costs. One credit = 60 seconds of execution at 1 CPU + 1 GiB RAM. Credits scale proportionally with time and resources.
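
Under the stated model (1 credit = 60 seconds at 1 CPU + 1 GiB, scaling proportionally), a rough cost estimate looks like this. The assumption that CPU and RAM scale independently and multiplicatively is ours, and the rounding Baponi applies is not documented here, so treat this as an approximation rather than a billing formula; see Pricing for authoritative numbers.

```python
def estimate_credits(seconds: float, cpus: float = 1, ram_gib: float = 1) -> float:
    """Approximate credit cost: baseline is 1 credit per 60s at 1 CPU + 1 GiB.

    Assumes cost scales linearly and independently with each resource.
    """
    return (seconds / 60) * cpus * ram_gib

print(estimate_credits(60))         # → 1.0 (the baseline)
print(estimate_credits(120, 4, 4))  # → 32.0 (2x time at Pro-tier max resources)
```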