Quickstart: First Execution in 60 Seconds
Baponi is a sandboxed code execution platform for AI agents. Send code, get results. No containers to manage, no infrastructure to provision. This guide gets you from zero to your first execution in under 60 seconds.
What you’ll need: A web browser and a terminal (or Python installed).
3 steps to your first execution
1. **Sign up at console.baponi.ai**

   Create a free account. No credit card required. You get 1,000 free credits per month, unlimited team seats, and audit logs on every tier.

2. **Create an API key**

   In the admin console, go to API Keys → Create API Key. Give it a name and copy the key. It starts with `sk-` and looks like `sk-us-...`.

3. **Run your first code**
With curl:

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "print(\"Hello from Baponi!\")", "language": "python"}'
```

Or with the Python SDK:

```sh
pip install baponi
```

```python
from baponi import Baponi

client = Baponi(api_key="YOUR_API_KEY")
result = client.execute("print('Hello from Baponi!')")
print(result.stdout)  # Hello from Baponi!
```

Or set the `BAPONI_API_KEY` environment variable and skip the `api_key` parameter:

```sh
export BAPONI_API_KEY="YOUR_API_KEY"
```

```python
from baponi import Baponi

client = Baponi()  # reads BAPONI_API_KEY from env
result = client.execute("print('Hello from Baponi!')")
```

Or with plain `requests`:

```python
import requests

result = requests.post(
    "https://api.baponi.ai/v1/sandbox/execute",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"code": "print('Hello from Baponi!')", "language": "python"},
).json()
print(result["stdout"])  # Hello from Baponi!
```
That’s it. The sandbox booted in under 20 ms, executed your code in full isolation with multi-layer security, and returned the result.
The response
Every execution returns the same structure:

```json
{
  "success": true,
  "stdout": "Hello from Baponi!\n",
  "stderr": "",
  "exit_code": 0
}
```

| Field | Description |
|---|---|
| `success` | `true` if the exit code is 0 |
| `stdout` | Standard output from your code |
| `stderr` | Standard error from your code |
| `exit_code` | Process exit code (0 = success) |
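A minimal sketch of handling this response shape in Python. The field names come from the table above; the sample payloads and the `summarize` helper are illustrative, not part of the SDK:

```python
def summarize(result: dict) -> str:
    """Return stdout on success, otherwise a short error summary."""
    if result.get("success"):
        return result["stdout"]
    return f"exit {result['exit_code']}: {result['stderr'].strip()}"

# Sample responses matching the documented structure
ok = {"success": True, "stdout": "Hello from Baponi!\n", "stderr": "", "exit_code": 0}
bad = {"success": False, "stdout": "", "stderr": "NameError: name 'x' is not defined\n", "exit_code": 1}

print(summarize(ok))   # Hello from Baponi!
print(summarize(bad))  # exit 1: NameError: name 'x' is not defined
```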
Try all 3 languages
Baponi supports Python 3.14, Node.js 25, and Bash out of the box.
```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "import sys; print(f\"Python {sys.version}\")", "language": "python"}'
```

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "console.log(`Node.js ${process.version}`)", "language": "node"}'
```

```sh
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "echo \"Bash $BASH_VERSION\" && uname -a", "language": "bash"}'
```

Need a custom runtime? Import any OCI-compatible image through the admin console. Baponi auto-discovers available interpreters and system libraries.
Persist state between calls with thread_id
By default, every execution is fully ephemeral. Add `thread_id` to keep the home directory (`/home/baponi`) across calls. Installed packages, working files, and in-progress work all persist:
```sh
# Call 1: install cowsay
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "pip install --user cowsay && echo done",
    "language": "bash",
    "thread_id": "quickstart-demo-1"
  }'
```

```sh
# Call 2: cowsay is already installed
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import cowsay; cowsay.cow(\"Hello from Baponi!\")",
    "language": "python",
    "thread_id": "quickstart-demo-1"
  }'
```

Between calls, nothing is running. Baponi saves only the diff to cloud storage and restores it on the next call. No idle billing. Pick it up minutes or days later.
Connect to Claude, Cursor, or any MCP client
Baponi is a native MCP (Model Context Protocol) server. If your tool supports MCP, add Baponi with a URL and API key. No SDK, no tool definitions, no glue code.
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    }
  }
}
```

Or from the command line:

```sh
claude mcp add baponi \
  --transport http \
  --url https://api.baponi.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
```

Add to your Cursor MCP settings (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    }
  }
}
```

Once configured, the MCP client auto-discovers Baponi’s tools (`sandbox_execute`, `files_download`). Ask Claude or Cursor to write and run code, and it calls Baponi automatically.
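If you prefer to script the config change, here is a small sketch that merges the Baponi entry into an existing `claude_desktop_config.json`. The `add_baponi_server` helper and the scratch-file demo are illustrative; point `config_path` at your real config file:

```python
import json
import tempfile
from pathlib import Path

def add_baponi_server(config_path: Path, api_key: str) -> dict:
    """Merge the Baponi MCP server entry into a Claude Desktop config file."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["baponi"] = {
        "url": "https://api.baponi.ai/mcp",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a scratch directory; use your real claude_desktop_config.json
# (or .cursor/mcp.json) path instead.
with tempfile.TemporaryDirectory() as d:
    cfg = add_baponi_server(Path(d) / "claude_desktop_config.json", "YOUR_API_KEY")
    assert "baponi" in cfg["mcpServers"]
```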
Give Claude code execution with 4 lines of Python
Using the Python SDK with Anthropic’s `tool_runner`:
```sh
pip install baponi[anthropic]
```

```python
from anthropic import Anthropic
from baponi.anthropic import code_sandbox

client = Anthropic()
response = client.beta.messages.tool_runner(
    model="claude-sonnet-4-6",
    tools=[code_sandbox],
    messages=[{"role": "user", "content": "Calculate the first 20 Fibonacci numbers"}],
).get_final_response()

print(response.content[0].text)
```

Claude writes Python, Baponi runs it in a sandbox, Claude reads the result. The same pattern works with OpenAI, Gemini, LangChain, and CrewAI:

```python
from baponi.openai import code_sandbox     # OpenAI Agents SDK
from baponi.google import code_sandbox     # Google Gemini
from baponi.langchain import code_sandbox  # LangChain
from baponi.crewai import code_sandbox     # CrewAI
```

API parameters at a glance
| Parameter | Required | Default | Description |
|---|---|---|---|
| `code` | Yes | - | Code to execute (1 byte to 1 MB) |
| `language` | No | `python` | `python`, `node`, or `bash` |
| `timeout` | No | 60 | Max execution time in seconds (1-3600, plan-dependent) |
| `thread_id` | No | - | Persist `/home/baponi` across calls |
| `metadata` | No | - | Key-value pairs for audit logging (not sent to the sandbox) |
| `env_vars` | No | - | Environment variables injected into the sandbox |
One required parameter. Everything else (the runtime image, CPU, RAM, network policy, storage mounts, injected credentials) is configured once in the admin console and attached to your API key. See the full Execute API reference for all parameters, response fields, error codes, and worked examples.
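Putting several of these together, a hedged sketch of a fuller request body using only the parameters listed in the table (the values are illustrative, not required settings):

```python
import json

payload = {
    "code": "import os; print(os.environ['GREETING'])",
    "language": "python",              # the default; shown for clarity
    "timeout": 120,                    # seconds, within your plan's limit
    "thread_id": "report-job-42",      # persist /home/baponi across calls
    "metadata": {"team": "data-eng"},  # audit logging only, not sent to the sandbox
    "env_vars": {"GREETING": "hello"}, # injected into the sandbox environment
}

body = json.dumps(payload, indent=2)
print(body)  # POST this to /v1/sandbox/execute with your Bearer key
```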
What to explore next
- How Baponi Works: deep dive into the execution model, BYOB storage, connectors, and the full API surface
- API Reference: full request/response reference for Execute, Files, and MCP
- Admin Console: configure sandboxes, storage connections, connectors, and API keys
- Pricing: 1,000 free credits/month, unlimited seats, no credit card required
**How many free credits do I get?** 1,000 credits per month on the free tier. One credit = 60 seconds of execution at 1 CPU + 1 GiB RAM. That’s roughly 1,000 minutes of compute per month at the base allocation, enough for evaluation and prototyping workloads.
**What does it cost after the free tier?** $1.00 per 1,000 additional credits. Or upgrade to Pro ($97/month) for higher resource limits (4 CPU, 4 GiB RAM), 1-hour max timeout, 100 concurrent executions, and 30-day audit retention.
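Based only on the numbers above (1 credit = 60 s at 1 CPU + 1 GiB, 1,000 free credits/month, $1.00 per 1,000 paid credits), a back-of-the-envelope cost estimate; the helper is illustrative, not part of the SDK:

```python
def estimated_monthly_cost(exec_minutes: float, free_credits: int = 1000) -> float:
    """Rough monthly USD cost at the base 1 CPU + 1 GiB allocation.

    One credit = 60 s of execution, so one minute = one credit; credits
    beyond the monthly free tier cost $1.00 per 1,000.
    """
    paid = max(0.0, exec_minutes - free_credits)
    return paid / 1000  # $1.00 per 1,000 paid credits

print(estimated_monthly_cost(800))   # 0.0 -- fits in the free tier
print(estimated_monthly_cost(5000))  # 4.0 -- 4,000 paid credits
```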
**Do I need the Python SDK?** No. The API is a single HTTP endpoint. Any language with an HTTP client works. The SDK adds convenience: automatic retries, typed responses, and ready-made LLM framework integrations.
**Is my code executed securely?** Every execution runs in a fresh sandbox with multi-layer isolation. Sandboxes have no access to other sandboxes, the host system, or other customers’ data. Baponi maintains a $10,000 bug bounty program.
**Can I self-host Baponi?** Yes. Baponi deploys to any Kubernetes cluster with a single Helm chart. Same platform, same API, same admin console, running on your infrastructure, in your VPC, with your identity provider.