
API Overview

Baponi is a sandboxed code execution platform for AI agents. Authenticate with an API key, send code, get results.

| Surface       | Base URL                  | Auth                   |
| ------------- | ------------------------- | ---------------------- |
| Execution API | https://api.baponi.ai     | API key (Bearer token) |
| MCP Protocol  | https://api.baponi.ai/mcp | API key (Bearer token) |

All requests must use HTTPS. HTTP requests are rejected.

Every request to the Execution API requires an API key in the Authorization header:

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer sk-us-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "print(1+1)", "language": "python"}'
```

API keys start with `sk-` followed by a region code and a base64-encoded token. Create keys in the admin console under API Keys.

Each API key is bound to:

  • A sandbox (runtime image, CPU, RAM, network policy)
  • Optional storage connections (S3, GCS, Azure buckets mounted as local directories)
  • Optional volumes (managed persistent storage)
  • Optional connectors (database and service credentials injected at runtime)

This means the API key determines the entire execution environment. You configure everything once in the admin console, then every request with that key runs in the same environment.

All request bodies are JSON. Set Content-Type: application/json.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "print(\"hello\")", "language": "python"}'
```

Baponi returns structured JSON errors with a machine-readable error code and a human-readable message:

```json
{
  "error": "validation_error",
  "message": "code must be between 1 byte and 1 MB"
}
```
| Status | Error Code            | Description                                                    |
| ------ | --------------------- | -------------------------------------------------------------- |
| 400    | `validation_error`    | Request body failed validation. Check the message for details. |
| 401    | `unauthorized`        | Missing, invalid, or revoked API key.                          |
| 403    | `forbidden`           | Valid key but insufficient permissions.                        |
| 404    | `not_found`           | Resource does not exist.                                       |
| 409    | `conflict`            | Resource conflict. Thread is busy, or duplicate name.          |
| 429    | `rate_limited`        | Too many requests or plan limit exceeded.                      |
| 500    | `internal_error`      | Server error. Retry with backoff.                              |
| 503    | `service_unavailable` | Execution infrastructure temporarily unavailable.              |
| 504    | `gateway_timeout`     | Execution exceeded the timeout.                                |
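For the retryable statuses (429, 500, 503), a client-side loop with exponential backoff and jitter is a reasonable pattern. The helpers below are an illustrative sketch, not part of any Baponi SDK; `post` stands in for whatever HTTP call you make:

```python
import random
import time

RETRYABLE = {429, 500, 503}

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: uniform in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def execute_with_retry(post, payload, max_attempts=5, base=1.0):
    """Call post(payload) -> (status, body); retry retryable statuses with backoff."""
    for attempt in range(max_attempts):
        status, body = post(payload)
        if status not in RETRYABLE:
            return status, body
        time.sleep(backoff_delay(attempt, base=base))
    return status, body  # give up: return the last response
```

Full jitter spreads retries out so many clients recovering from the same outage don't hammer the API in lockstep.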

If two requests use the same thread_id simultaneously, the second request returns 409 Conflict. Only one execution can use a given thread at a time. Use different thread_id values for parallel work.
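One way to keep parallel work from colliding is to mint a fresh `thread_id` per task. The sketch below assumes an `execute` callable that accepts a `thread_id` keyword; adapt it to however your client passes that parameter:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

def run_parallel(execute, snippets, max_workers=5):
    """Run code snippets concurrently, giving each its own thread_id
    so no two in-flight requests share a thread (which would 409)."""
    def run_one(code):
        return execute(code, thread_id=f"thr-{uuid.uuid4().hex[:8]}")
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order in the results
        return list(pool.map(run_one, snippets))
```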

Rate limiting applies in two cases:

  1. Failed authentication. Three failed API key attempts from a single IP within 60 seconds trigger a temporary block.
  2. Plan limits. Exceeding your plan’s concurrent execution limit or other quotas returns 429 with an upgrade message.

Rate-limited responses include the standard error format:

```json
{
  "error": "rate_limited",
  "message": "concurrent execution limit reached (5/5). Upgrade to Pro for 100 concurrent executions."
}
```

Resource limits depend on your plan tier:

| Limit                      | Free       | Pro       | Enterprise |
| -------------------------- | ---------- | --------- | ---------- |
| Max CPU per sandbox        | 1 core     | 4 cores   | Unlimited  |
| Max RAM per sandbox        | 1 GiB      | 4 GiB     | Unlimited  |
| Concurrent executions      | 5          | 100       | Unlimited  |
| Max timeout                | 60 seconds | 1 hour    | Unlimited  |
| API keys                   | 10         | Unlimited | Unlimited  |
| Audit log retention        | 1 day      | 30 days   | Unlimited  |
| Custom OCI images          | 1          | Unlimited | Unlimited  |
| Streaming / async delivery | No         | Yes       | Yes        |

All tiers include: unlimited team seats, storage connections, volumes, and connectors. See Pricing for credit costs and overages.

```sh
pip install baponi
```

```python
from baponi import Baponi

client = Baponi(api_key="sk-us-...")  # or set BAPONI_API_KEY env var
result = client.execute("print('hello')")
print(result.stdout)  # hello
```

The SDK supports all three delivery modes:

```python
# Inline (default)
result = client.execute("print('hello')")

# Streaming - real-time stdout/stderr via NDJSON
with client.execute_stream("print('hello')") as stream:
    result = stream.until_done()

# Webhook - fire-and-forget, result POSTed to your URL
handle = client.execute_webhook("print('hello')", webhook_url="https://...")
result = handle.wait()
```
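If you consume the streaming mode over raw HTTP instead of through the SDK, each NDJSON line is one JSON event. A minimal parser might look like this; the `type`/`data` event field names are assumptions, not the documented schema:

```python
import json

def parse_ndjson(lines):
    """Decode one JSON object per non-empty line of an NDJSON stream."""
    return [json.loads(line) for line in lines if line.strip()]

def collect_stdout(events, type_key="type", data_key="data"):
    """Concatenate the payloads of stdout events (field names assumed)."""
    return "".join(e[data_key] for e in events if e.get(type_key) == "stdout")
```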

All methods accept env_vars for per-request environment variables:

```python
result = client.execute(
    "import os; print(os.environ['API_TOKEN'])",
    env_vars={"API_TOKEN": "sk-test-123"},
)
```

Status and cancellation:

```python
status = client.get_execution("trc_abc12345")
cancel = client.cancel_execution("trc_abc12345")
```
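If you need a custom polling loop on top of `get_execution` (for example, to enforce your own timeout policy), a sketch might look like the following. The terminal status values and the `status` field name are assumptions:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}  # assumed values

def wait_for_execution(get_execution, trace_id, interval=0.5, timeout=60.0):
    """Poll get_execution(trace_id) until it reports a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_execution(trace_id)
        if result.get("status") in TERMINAL_STATUSES:
            return result
        time.sleep(interval)
    raise TimeoutError(f"execution {trace_id} still running after {timeout}s")
```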

The Python SDK includes ready-made integrations for LLM frameworks:

```python
from baponi.anthropic import code_sandbox  # Anthropic Claude
from baponi.openai import code_sandbox     # OpenAI Agents SDK
from baponi.google import code_sandbox     # Google Gemini
from baponi.langchain import code_sandbox  # LangChain
from baponi.crewai import code_sandbox     # CrewAI
```

Baponi is a native MCP server. Connect from any MCP client with a URL and API key, no SDK required:

```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": {
        "Authorization": "Bearer sk-us-YOUR_API_KEY"
      }
    }
  }
}
```

See the MCP Protocol reference for details.

The API is a standard REST interface. Any HTTP client works:

```python
import requests

response = requests.post(
    "https://api.baponi.ai/v1/sandbox/execute",
    headers={"Authorization": "Bearer sk-us-YOUR_API_KEY"},
    json={"code": "print('hello')", "language": "python"},
)
response.raise_for_status()  # surface 4xx/5xx errors instead of parsing them as results
result = response.json()
```
| Method | Endpoint                           | Description                                    |
| ------ | ---------------------------------- | ---------------------------------------------- |
| POST   | `/v1/sandbox/execute`              | Execute code in a sandbox (inline or streaming) |
| GET    | `/v1/executions/{trace_id}`        | Execution status and result                    |
| GET    | `/v1/executions/{trace_id}/output` | Output chunks (streaming reconnection)         |
| POST   | `/v1/files/list`                   | List files in a thread or volume               |
| POST   | `/v1/files/upload_url`             | Get a signed upload URL                        |
| POST   | `/v1/files/download_url`           | Get a signed download URL                      |
| DELETE | `/v1/files`                        | Delete a file                                  |
| GET    | `/health`                          | Health check                                   |
| GET    | `/ready`                           | Readiness probe                                |
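The signed-URL endpoints imply a two-step upload: request a signed URL, then PUT the bytes to it. Below is a sketch with an injectable HTTP session (a `requests.Session` in real use); the `path` request field and `upload_url` response field are assumptions about the API's JSON shape:

```python
def upload_file(session, api_key, data, remote_path,
                base_url="https://api.baponi.ai"):
    """Request a signed upload URL, then PUT the file bytes to it.
    Field names ('path', 'upload_url') are assumed, not documented."""
    headers = {"Authorization": f"Bearer {api_key}"}
    resp = session.post(f"{base_url}/v1/files/upload_url",
                        headers=headers, json={"path": remote_path})
    resp.raise_for_status()
    signed_url = resp.json()["upload_url"]
    put = session.put(signed_url, data=data)  # signed URLs carry their own auth
    put.raise_for_status()
    return signed_url
```

Injecting the session keeps the flow testable and lets you reuse connections across uploads.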
| Method | Description                                                       |
| ------ | ----------------------------------------------------------------- |
| POST   | JSON-RPC 2.0 endpoint (`initialize`, `tools/list`, `tools/call`)  |

Tools exposed: `sandbox_execute`, `files_download`. See MCP Protocol.
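A `tools/call` request to the MCP endpoint uses standard JSON-RPC 2.0 framing. The shape below follows the MCP spec; the `arguments` field names are assumptions about the `sandbox_execute` tool's schema, not its documented definition:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "sandbox_execute",
    "arguments": {"code": "print('hello')", "language": "python"}
  }
}
```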