---
title: "Quickstart: First Execution in 60 Seconds"
description: "Get from zero to running code in a Baponi sandbox in under 60 seconds. Sign up, create an API key, and execute your first Python, Node.js, or Bash command via curl, Python SDK, or MCP."
url: https://baponi.ai/docs/getting-started/quickstart
lastUpdated: 2026-03-16
---
# Quickstart: First Execution in 60 Seconds
Baponi is a sandboxed code execution platform for AI agents. Send code, get results. No containers to manage, no infrastructure to provision. This guide gets you from zero to your first execution in under 60 seconds.

**What you'll need:** A web browser and a terminal (or Python installed).

## 3 steps to your first execution

1. **Sign up at [console.baponi.ai](https://console.baponi.ai)**

   Create a free account. No credit card required. You get 1,000 free credits per month, unlimited team seats, and audit logs on every tier.

2. **Create an API key**

   In the admin console, go to **API Keys** → **Create API Key**. Give it a name and copy the key. It starts with `sk-` and looks like `sk-us-...`.

   
   **Copy the key now.** You won't be able to see it again. Store it somewhere safe.
   

3. **Run your first code**

   Call the API directly with `curl`:

   ```bash
   curl -X POST https://api.baponi.ai/v1/sandbox/execute \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "code": "print(\"Hello from Baponi!\")",
       "language": "python"
     }'
   ```

   Or install and use the Python SDK:

   ```bash
   pip install baponi
   ```

   ```python
   from baponi import Baponi

   client = Baponi(api_key="YOUR_API_KEY")
   result = client.execute("print('Hello from Baponi!')")
   print(result.stdout)  # Hello from Baponi!
   ```

   Or set the `BAPONI_API_KEY` environment variable and skip the `api_key` parameter:

   ```bash
   export BAPONI_API_KEY="YOUR_API_KEY"
   ```

   ```python
   from baponi import Baponi

   client = Baponi()  # reads BAPONI_API_KEY from env
   result = client.execute("print('Hello from Baponi!')")
   ```

   Prefer no SDK? Call the endpoint with any HTTP client, such as `requests`:

   ```python
   import requests

   result = requests.post(
       "https://api.baponi.ai/v1/sandbox/execute",
       headers={"Authorization": "Bearer YOUR_API_KEY"},
       json={"code": "print('Hello from Baponi!')", "language": "python"},
   ).json()

   print(result["stdout"])  # Hello from Baponi!
   ```
   
   

That's it. Your code ran in a fully isolated sandbox with multi-layer security. The sandbox booted in under 20ms, ran your code, and returned the result.

## The response

Every execution returns the same structure:

```json
{
  "success": true,
  "stdout": "Hello from Baponi!\n",
  "stderr": "",
  "exit_code": 0
}
```

| Field | Description |
|-------|-------------|
| `success` | `true` if exit code is 0 |
| `stdout` | Standard output from your code |
| `stderr` | Standard error from your code |
| `exit_code` | Process exit code (0 = success) |
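In Python, handling this response shape might look like the following sketch. The field names come from the table above; the `ExecutionError` class is illustrative, not part of the Baponi SDK:

```python
class ExecutionError(RuntimeError):
    """Illustrative error type; not part of the Baponi SDK."""

def handle_response(resp: dict) -> str:
    """Return stdout on success; raise with stderr and exit code otherwise."""
    if not resp.get("success"):
        raise ExecutionError(
            f"execution failed (exit_code={resp.get('exit_code')}): {resp.get('stderr', '')}"
        )
    return resp["stdout"]

# The documented success response:
ok = {"success": True, "stdout": "Hello from Baponi!\n", "stderr": "", "exit_code": 0}
print(handle_response(ok), end="")  # Hello from Baponi!
```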

## Try all 3 languages

Baponi supports Python 3.14, Node.js 25, and Bash out of the box.

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "import sys; print(f\"Python {sys.version}\")", "language": "python"}'
```

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "console.log(`Node.js ${process.version}`)", "language": "node"}'
```

```bash
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"code": "echo \"Bash $BASH_VERSION\" && uname -a", "language": "bash"}'
```

Need a custom runtime? Import any OCI-compatible image through the admin console. Baponi auto-discovers available interpreters and system libraries.

## Persist state between calls with thread_id

By default, every execution is fully ephemeral. Add a `thread_id` to keep the home directory (`/home/baponi`) across calls, so installed packages and working files persist:

```bash
# Call 1: install cowsay
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "pip install --user cowsay && echo done",
    "language": "bash",
    "thread_id": "quickstart-demo-1"
  }'

# Call 2: cowsay is already installed
curl -X POST https://api.baponi.ai/v1/sandbox/execute \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "import cowsay; cowsay.cow(\"Hello from Baponi!\")",
    "language": "python",
    "thread_id": "quickstart-demo-1"
  }'
```

Between calls, nothing is running. Baponi saves only the diff to cloud storage and restores it on the next call. No idle billing. Pick it up minutes or days later.

## Connect to Claude, Cursor, or any MCP client

Baponi is a native [MCP (Model Context Protocol)](https://modelcontextprotocol.io/) server. If your tool supports MCP, add Baponi with a URL and API key. No SDK, no tool definitions, no glue code.

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Or register it from the Claude Code CLI:

```bash
claude mcp add baponi \
  --transport http \
  --url https://api.baponi.ai/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
```

Add to your Cursor MCP settings (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "baponi": {
      "url": "https://api.baponi.ai/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Once configured, the MCP client auto-discovers Baponi's tools (`sandbox_execute`, `files_download`). Ask Claude or Cursor to write and run code, and it calls Baponi automatically.
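Under the hood this is standard MCP: the client lists the server's tools, then issues JSON-RPC `tools/call` requests. A sketch of the body a client would send to invoke `sandbox_execute` (the argument names are an assumption, mirroring the REST parameters):

```python
import json

def mcp_tool_call(request_id: int, code: str, language: str = "python") -> str:
    """JSON-RPC 2.0 body for an MCP tools/call invoking sandbox_execute."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "sandbox_execute",
            # Argument names assumed to match the REST API's parameters.
            "arguments": {"code": code, "language": language},
        },
    })

print(mcp_tool_call(1, "print('Hello from Baponi!')"))
```

Your MCP client builds and sends these messages for you; you only ever see the tool result.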

## Give Claude code execution with 4 lines of Python

Install the Anthropic extra, then hand Claude the sandbox tool via Anthropic's `tool_runner`:

```bash
pip install "baponi[anthropic]"
```

```python
from anthropic import Anthropic
from baponi.anthropic import code_sandbox

client = Anthropic()
response = client.beta.messages.tool_runner(
    model="claude-sonnet-4-6",
    tools=[code_sandbox],
    messages=[{"role": "user", "content": "Calculate the first 20 Fibonacci numbers"}],
).get_final_response()

print(response.content[0].text)
```

Claude writes Python, Baponi runs it in a sandbox, Claude reads the result. The same pattern works with OpenAI, Gemini, LangChain, and CrewAI:

```python
from baponi.openai import code_sandbox     # OpenAI Agents SDK
from baponi.google import code_sandbox     # Google Gemini
from baponi.langchain import code_sandbox  # LangChain
from baponi.crewai import code_sandbox     # CrewAI
```

## API parameters at a glance

| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `code` | Yes | - | Code to execute (1 byte to 1 MB) |
| `language` | No | `python` | `python`, `node`, or `bash` |
| `timeout` | No | `60` | Max execution time in seconds (1-3600, plan-dependent) |
| `thread_id` | No | - | Persist `/home/baponi` across calls |
| `metadata` | No | - | Key-value pairs for audit logging (not sent to sandbox) |
| `env_vars` | No | - | Environment variables injected into the sandbox |

One required parameter. Everything else (the runtime image, CPU, RAM, network policy, storage mounts, injected credentials) is configured once in the admin console and attached to your API key. See the [full Execute API reference](/docs/api/execute.md) for all parameters, response fields, error codes, and worked examples.
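Putting the optional parameters together, a fully specified request body might look like this (all values illustrative):

```python
import json

# Illustrative request body using every documented parameter.
request_body = {
    "code": "import os; print(os.environ['GREETING'])",
    "language": "python",              # python | node | bash
    "timeout": 120,                    # seconds; 1-3600 depending on plan
    "thread_id": "report-job-42",      # persist /home/baponi across calls
    "metadata": {"team": "data-eng"},  # audit-log only, never sent to the sandbox
    "env_vars": {"GREETING": "hi"},    # injected into the sandbox environment
}

print(json.dumps(request_body, indent=2))
```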

## What to explore next

- **[How Baponi Works](/docs/getting-started/how-it-works.md)**: deep dive into the execution model, BYOB storage, connectors, and the full API surface
- **[API Reference](/docs/api/overview.md)**: full request/response reference for [Execute](/docs/api/execute.md), [Files](/docs/api/files.md), and [MCP](/docs/api/mcp.md)
- **[Admin Console](https://console.baponi.ai)**: configure sandboxes, storage connections, connectors, and API keys
- **[Pricing](/pricing)**: 1,000 free credits/month, unlimited seats, no credit card required

## FAQ

**How many free credits do I get?**
1,000 credits per month on the free tier. One credit = 60 seconds of execution at 1 CPU + 1 GiB RAM. That's roughly 1,000 minutes of compute per month at the base allocation, enough for evaluation and prototyping workloads.

**What does it cost after the free tier?**
$1.00 per 1,000 additional credits. Or upgrade to Pro ($97/month) for higher resource limits (4 CPU, 4 GiB RAM), 1-hour max timeout, 100 concurrent executions, and 30-day audit retention.

**Do I need the Python SDK?**
No. The API is a single HTTP endpoint. Any language with an HTTP client works. The SDK adds convenience: automatic retries, typed responses, and ready-made LLM framework integrations.

**Is my code executed securely?**
Every execution runs in a fresh sandbox with multi-layer isolation. Sandboxes have no access to other sandboxes, the host system, or other customers' data. Baponi maintains a $10,000 bug bounty program.

**Can I self-host Baponi?**
Yes. Baponi deploys to any Kubernetes cluster with a single Helm chart. Same platform, same API, same admin console, running on your infrastructure, in your VPC, with your identity provider.