Quickstart: First Execution in 60 Seconds

Baponi is a sandboxed code execution platform for AI agents. Send code, get results. No containers to manage, no infrastructure to provision. This guide gets you from zero to your first execution in under 60 seconds.

What you’ll need: A web browser and a terminal (or Python installed).

  1. Sign up at console.baponi.ai

    Create a free account. No credit card required. You get 1,000 free credits per month, unlimited team seats, and audit logs on every tier.

  2. Create an API key

    In the admin console, go to API Keys → Create API Key. Give it a name and copy the key. It starts with sk- and looks like sk-us-....

  3. Run your first code

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "print(\"Hello from Baponi!\")",
    "language": "python"
  }'
```

That’s it. Your code ran in a fully isolated sandbox with multi-layer security. The sandbox booted in under 20ms, ran your code, and returned the result.

Every execution returns the same structure:

```json
{
  "success": true,
  "stdout": "Hello from Baponi!\n",
  "stderr": "",
  "exit_code": 0
}
```
| Field | Description |
| --- | --- |
| success | true if the exit code is 0 |
| stdout | Standard output from your code |
| stderr | Standard error from your code |
| exit_code | Process exit code (0 = success) |
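
The request and response shapes above need nothing beyond the standard library. A minimal sketch, assuming only what this quickstart documents; the helper names build_request and parse_result are illustrative, not part of any SDK:

```python
import json

API_URL = "https://api.baponi.ai/v1/sandbox/execute"

def build_request(code: str, language: str = "python") -> dict:
    """Assemble the JSON payload for a POST to the execute endpoint."""
    return {"code": code, "language": language}

def parse_result(body: str) -> tuple[bool, str]:
    """Pull (success, stdout) out of a raw response body."""
    data = json.loads(body)
    return data["success"], data["stdout"]

# Parse the documented sample response locally:
sample = '{"success": true, "stdout": "Hello from Baponi!\\n", "stderr": "", "exit_code": 0}'
ok, out = parse_result(sample)
```

Sending the dict from build_request as the POST body with any HTTP client mirrors the curl call in step 3.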

Baponi supports Python 3.14, Node.js 25, and Bash out of the box.

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "import sys; print(f\"Python {sys.version}\")", "language": "python"}'
```

Need a custom runtime? Import any OCI-compatible image through the admin console. Baponi auto-discovers available interpreters and system libraries.

Persist state between calls with thread_id


By default, every execution is fully ephemeral. Add thread_id to keep the home directory (/home/baponi) across calls. Installed packages, working files, and in-progress work all persist:

```sh
# Call 1: install cowsay
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "pip install --user cowsay && echo done",
    "language": "bash",
    "thread_id": "quickstart-demo-1"
  }'

# Call 2: cowsay is already installed
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import cowsay; cowsay.cow(\"Hello from Baponi!\")",
    "language": "python",
    "thread_id": "quickstart-demo-1"
  }'
```

Between calls, nothing is running. Baponi saves only the diff to cloud storage and restores it on the next call. No idle billing. Pick it up minutes or days later.
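The same two-call flow can be expressed as Python payloads, each sent as the JSON body of a POST to the execute endpoint. A sketch; the constant name THREAD_ID is illustrative:

```python
# Both payloads share a thread_id, so the second call sees the
# package installed by the first. Any stable string works as the id.
THREAD_ID = "quickstart-demo-1"

install_call = {
    "code": "pip install --user cowsay && echo done",
    "language": "bash",
    "thread_id": THREAD_ID,
}

use_call = {
    "code": 'import cowsay; cowsay.cow("Hello from Baponi!")',
    "language": "python",
    "thread_id": THREAD_ID,
}
```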

Connect to Claude, Cursor, or any MCP client


Baponi is a native MCP (Model Context Protocol) server. If your tool supports MCP, add Baponi with a URL and API key. No SDK, no tool definitions, no glue code.

Add to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Once configured, the MCP client auto-discovers Baponi’s tools (sandbox_execute, files_download). Ask Claude or Cursor to write and run code, and it calls Baponi automatically.

Give Claude code execution with 4 lines of Python


Install the Python SDK, then use it with Anthropic’s tool_runner:

```sh
pip install 'baponi[anthropic]'
```

```python
from anthropic import Anthropic
from baponi.anthropic import code_sandbox

client = Anthropic()

response = client.beta.messages.tool_runner(
    model="claude-sonnet-4-6",
    tools=[code_sandbox],
    messages=[{"role": "user", "content": "Calculate the first 20 Fibonacci numbers"}],
).get_final_response()

print(response.content[0].text)
```

Claude writes Python, Baponi runs it in a sandbox, Claude reads the result. The same pattern works with OpenAI, Gemini, LangChain, and CrewAI:

```python
from baponi.openai import code_sandbox     # OpenAI Agents SDK
from baponi.google import code_sandbox     # Google Gemini
from baponi.langchain import code_sandbox  # LangChain
from baponi.crewai import code_sandbox     # CrewAI
```

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| code | Yes | - | Code to execute (1 byte to 1 MB) |
| language | No | python | python, node, or bash |
| timeout | No | 60 | Max execution time in seconds (1-3600, plan-dependent) |
| thread_id | No | - | Persist /home/baponi across calls |
| metadata | No | - | Key-value pairs for audit logging (not sent to the sandbox) |
| env_vars | No | - | Environment variables injected into the sandbox |

One required parameter. Everything else (the runtime image, CPU, RAM, network policy, storage mounts, injected credentials) is configured once in the admin console and attached to your API key. See the full Execute API reference for all parameters, response fields, error codes, and worked examples.
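
The documented limits can be checked client-side before a request goes out. A sketch based only on the parameter table above; the validator itself is hypothetical, not part of the API:

```python
MAX_CODE_BYTES = 1_000_000       # documented: 1 byte to 1 MB
VALID_TIMEOUTS = range(1, 3601)  # documented: 1-3600 s, plan-dependent
LANGUAGES = {"python", "node", "bash"}

def payload_problems(payload: dict) -> list[str]:
    """Return a list of limit violations in an execute payload."""
    problems = []
    size = len(payload.get("code", "").encode())
    if not 1 <= size <= MAX_CODE_BYTES:
        problems.append("code must be 1 byte to 1 MB")
    if payload.get("language", "python") not in LANGUAGES:
        problems.append("language must be python, node, or bash")
    if payload.get("timeout", 60) not in VALID_TIMEOUTS:
        problems.append("timeout must be 1-3600 seconds")
    return problems
```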

  • How Baponi Works: deep dive into the execution model, BYOB storage, connectors, and the full API surface
  • API Reference: full request/response reference for Execute, Files, and MCP
  • Admin Console: configure sandboxes, storage connections, connectors, and API keys
  • Pricing: 1,000 free credits/month, unlimited seats, no credit card required

How many free credits do I get? 1,000 credits per month on the free tier. One credit = 60 seconds of execution at 1 CPU + 1 GiB RAM. That’s roughly 1,000 minutes of compute per month at the base allocation, enough for evaluation and prototyping workloads.

What does it cost after the free tier? $1.00 per 1,000 additional credits. Or upgrade to Pro ($97/month) for higher resource limits (4 CPU, 4 GiB RAM), 1-hour max timeout, 100 concurrent executions, and 30-day audit retention.
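
The arithmetic in the two answers above, as a quick sketch. Rounding each execution up to a whole credit is an assumption on our part, not documented billing behavior:

```python
SECONDS_PER_CREDIT = 60    # 1 credit = 60 s at 1 CPU + 1 GiB RAM
FREE_CREDITS = 1_000       # free tier, per month
USD_PER_1000_EXTRA = 1.00  # overage price

def credits_for(seconds: int) -> int:
    """Credits consumed at the base allocation, rounded up (assumed)."""
    return -(-seconds // SECONDS_PER_CREDIT)  # ceiling division

def overage_cost(extra_credits: int) -> float:
    """Dollar cost of credits beyond the free 1,000."""
    return extra_credits * USD_PER_1000_EXTRA / 1000

free_minutes = FREE_CREDITS * SECONDS_PER_CREDIT // 60  # the "roughly 1,000 minutes" above
```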

Do I need the Python SDK? No. The API is a single HTTP endpoint. Any language with an HTTP client works. The SDK adds convenience: automatic retries, typed responses, and ready-made LLM framework integrations.
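
The "automatic retries" convenience mentioned above can be approximated in a few lines of stdlib Python. This is a sketch of the common exponential-backoff pattern, not the SDK's actual implementation:

```python
import time

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at cap."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]

def with_retries(fn, attempts: int = 4):
    """Call fn(), retrying on exception and sleeping between attempts."""
    delays = backoff_delays(attempts)
    for i, delay in enumerate(delays):
        try:
            return fn()
        except Exception:
            if i == len(delays) - 1:
                raise  # out of attempts: surface the error
            time.sleep(delay)
```

Wrapping the HTTP call from step 3 in with_retries smooths over transient network failures without any SDK dependency.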

Is my code executed securely? Every execution runs in a fresh sandbox with multi-layer isolation. Sandboxes have no access to other sandboxes, the host system, or other customers’ data. Baponi maintains a $10,000 bug bounty program.

Can I self-host Baponi? Yes. Baponi deploys to any Kubernetes cluster with a single Helm chart. Same platform, same API, same admin console, running on your infrastructure, in your VPC, with your identity provider.