
Files API

Baponi is a sandboxed code execution platform for AI agents. The Files API lets you list, upload, download, and delete files stored in threads and volumes, the two persistent storage abstractions in Baponi. All file transfers use signed URLs that point directly to cloud storage (GCS, S3, or Azure depending on deployment), so no file data passes through Baponi’s servers. Maximum file size: 10 GiB.

Base URL: https://api.baponi.ai

All requests require an API key in the Authorization header:

Authorization: Bearer sk-us-YOUR_API_KEY

Before using the Files API, understand the two storage sources:

|                  | Thread                                               | Volume                                     |
| ---------------- | ---------------------------------------------------- | ------------------------------------------ |
| Created by       | Executing code with a thread_id                      | Admin console                              |
| Persistence      | Files in /home/baponi are saved between executions   | Always persistent                          |
| Scope            | Tied to a single thread ID                           | Shared across API keys in the organization |
| Access mode      | Read-write                                           | Read-write or read-only (configurable)     |
| Source parameter | "thread"                                             | "volume"                                   |
| ID parameter     | The thread_id value used during execution            | The volume slug                            |
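Every Files API request body starts from the same source/id pair. A minimal client-side helper for building it (illustrative only; storage_ref is our name, not part of any Baponi SDK):

```python
VALID_SOURCES = {"thread", "volume"}

def storage_ref(source: str, id: str, **extra) -> dict:
    """Build the common request body shared by all Files API calls.

    source must be "thread" or "volume"; id is the thread ID or volume
    slug. Extra keys (path, content_type, ...) pass through unchanged.
    """
    if source not in VALID_SOURCES:
        raise ValueError(f'source must be "thread" or "volume", got {source!r}')
    return {"source": source, "id": id, **extra}
```

Validating the source value locally surfaces typos before they become a 400 validation_error from the API.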

List files

POST /v1/files/list

Returns a list of files and directories at the specified path. Use the path parameter to filter by prefix; for example, path: "data/" returns only files under the data/ directory.

Request parameters

| Parameter | Type   | Required | Description                                             |
| --------- | ------ | -------- | ------------------------------------------------------- |
| source    | string | Yes      | "thread" or "volume"                                    |
| id        | string | Yes      | Thread ID or volume slug                                |
| path      | string | No       | Path prefix to filter results. Omit to list all files.  |

Response

| Field                | Type    | Description                              |
| -------------------- | ------- | ---------------------------------------- |
| files                | array   | List of file objects                     |
| files[].name         | string  | File or directory name                   |
| files[].path         | string  | Full path relative to the storage root   |
| files[].size         | integer | File size in bytes (0 for directories)   |
| files[].modified     | string  | ISO 8601 timestamp of last modification  |
| files[].is_directory | boolean | true if the entry is a directory         |
# List all files in a thread
curl -X POST https://api.baponi.ai/v1/files/list \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123"
  }'

# List files under a specific directory in a volume
curl -X POST https://api.baponi.ai/v1/files/list \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "volume",
    "id": "training-data",
    "path": "datasets/2026/"
  }'
{
  "files": [
    {
      "name": "data",
      "path": "data",
      "size": 0,
      "modified": "2026-03-04T11:58:00Z",
      "is_directory": true
    },
    {
      "name": "output.csv",
      "path": "data/output.csv",
      "size": 2048,
      "modified": "2026-03-04T12:00:00Z",
      "is_directory": false
    },
    {
      "name": "model.pkl",
      "path": "model.pkl",
      "size": 15728640,
      "modified": "2026-03-04T12:01:30Z",
      "is_directory": false
    }
  ]
}
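The same prefix filter can also be applied client-side after fetching a full listing. A sketch (files_under is a hypothetical helper, shown here against the sample response above):

```python
def files_under(listing: dict, prefix: str) -> list:
    """Return non-directory entries whose path starts with prefix."""
    return [
        f for f in listing["files"]
        if not f["is_directory"] and f["path"].startswith(prefix)
    ]

# Sample /v1/files/list response
listing = {
    "files": [
        {"name": "data", "path": "data", "size": 0,
         "modified": "2026-03-04T11:58:00Z", "is_directory": True},
        {"name": "output.csv", "path": "data/output.csv", "size": 2048,
         "modified": "2026-03-04T12:00:00Z", "is_directory": False},
        {"name": "model.pkl", "path": "model.pkl", "size": 15728640,
         "modified": "2026-03-04T12:01:30Z", "is_directory": False},
    ]
}

print([f["path"] for f in files_under(listing, "data/")])  # ['data/output.csv']
```

Server-side filtering via the path parameter is preferable for large storage roots; client-side filtering is convenient when you already hold a full listing.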

Get an upload URL

POST /v1/files/upload_url

Generates a pre-signed URL for uploading a file directly to cloud storage. The upload bypasses Baponi’s servers entirely: your client sends the file straight to the storage backend (GCS, S3, or Azure). Signed URLs expire after 15 minutes.

Request parameters

| Parameter      | Type    | Required | Description                                                      |
| -------------- | ------- | -------- | ---------------------------------------------------------------- |
| source         | string  | Yes      | "thread" or "volume"                                             |
| id             | string  | Yes      | Thread ID or volume slug                                         |
| path           | string  | Yes      | Destination file path within the storage root                    |
| content_type   | string  | No       | MIME type of the file. Default: application/octet-stream         |
| content_length | integer | Yes      | Exact file size in bytes. Maximum: 10 GiB (10,737,418,240 bytes) |

Response

| Field      | Type   | Description                                                      |
| ---------- | ------ | ---------------------------------------------------------------- |
| url        | string | Pre-signed upload URL                                            |
| method     | string | HTTP method to use (always "PUT")                                |
| headers    | object | Headers to include in the upload request                         |
| expires_at | string | ISO 8601 timestamp when the URL expires (15 minutes from creation) |
# Step 1: Get a signed upload URL
UPLOAD_RESPONSE=$(curl -s -X POST https://api.baponi.ai/v1/files/upload_url \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "input.json",
    "content_type": "application/json",
    "content_length": 2048
  }')

# Step 2: Extract the signed URL
UPLOAD_URL=$(echo "$UPLOAD_RESPONSE" | jq -r '.url')

# Step 3: Upload the file directly to cloud storage
curl -X PUT "$UPLOAD_URL" \
  -H "Content-Type: application/json" \
  --data-binary @input.json
{
  "url": "https://storage.googleapis.com/baponi-prod-storage/orgs/org_abc/threads/my-thread-123/input.json?X-Goog-Algorithm=...",
  "method": "PUT",
  "headers": {
    "Content-Type": "application/json"
  },
  "expires_at": "2026-03-04T12:15:00Z"
}
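Because content_length must match the uploaded payload exactly and is capped at 10 GiB, it is worth computing and checking it before requesting a URL. A minimal sketch for in-memory payloads (upload_url_body is our name, not part of the API):

```python
MAX_FILE_SIZE = 10 * 1024**3  # 10 GiB, per the API limit

def upload_url_body(source: str, id: str, path: str, data: bytes,
                    content_type: str = "application/octet-stream") -> dict:
    """Build the /v1/files/upload_url request body for an in-memory payload."""
    if len(data) > MAX_FILE_SIZE:
        raise ValueError(f"{path}: {len(data)} bytes exceeds the 10 GiB limit")
    return {
        "source": source,
        "id": id,
        "path": path,
        "content_type": content_type,
        "content_length": len(data),  # must match the PUT body exactly
    }
```

POST this body to /v1/files/upload_url, then PUT the same bytes, with the returned headers, to the signed URL.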

Get a download URL

POST /v1/files/download_url

Generates a pre-signed URL for downloading a file directly from cloud storage. As with uploads, the download bypasses Baponi’s servers. Signed URLs expire after 15 minutes.

Request parameters

| Parameter | Type   | Required | Description              |
| --------- | ------ | -------- | ------------------------ |
| source    | string | Yes      | "thread" or "volume"     |
| id        | string | Yes      | Thread ID or volume slug |
| path      | string | Yes      | File path to download    |

Response

| Field      | Type   | Description                                                        |
| ---------- | ------ | ------------------------------------------------------------------ |
| url        | string | Pre-signed download URL                                            |
| method     | string | HTTP method to use (always "GET")                                  |
| expires_at | string | ISO 8601 timestamp when the URL expires (15 minutes from creation) |
# Step 1: Get a signed download URL
DOWNLOAD_RESPONSE=$(curl -s -X POST https://api.baponi.ai/v1/files/download_url \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "output.csv"
  }')

# Step 2: Download the file directly from cloud storage
DOWNLOAD_URL=$(echo "$DOWNLOAD_RESPONSE" | jq -r '.url')
curl -o output.csv "$DOWNLOAD_URL"
{
  "url": "https://storage.googleapis.com/baponi-prod-storage/orgs/org_abc/threads/my-thread-123/output.csv?X-Goog-Algorithm=...",
  "method": "GET",
  "expires_at": "2026-03-04T12:15:00Z"
}
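Clients that cache a signed URL should check expires_at before reusing it rather than discovering the expiry via a storage-backend error. A small sketch (url_expired is a hypothetical helper):

```python
from datetime import datetime, timezone
from typing import Optional

def url_expired(expires_at: str, now: Optional[datetime] = None) -> bool:
    """True if a signed URL's ISO 8601 expires_at is already in the past.

    Every signed URL expires 15 minutes after creation, so a cached URL
    should be checked before reuse and re-requested if stale.
    """
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return (now or datetime.now(timezone.utc)) >= expiry
```

The replace("Z", "+00:00") step keeps the parse working on Python versions before 3.11, where fromisoformat does not accept a trailing "Z".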

Delete a file

DELETE /v1/files

Permanently deletes a file from a thread or volume. This action cannot be undone. Attempting to delete from a read-only volume returns 403 Forbidden.

Request parameters

| Parameter | Type   | Required | Description              |
| --------- | ------ | -------- | ------------------------ |
| source    | string | Yes      | "thread" or "volume"     |
| id        | string | Yes      | Thread ID or volume slug |
| path      | string | Yes      | File path to delete      |

Response

| Field   | Type    | Description                  |
| ------- | ------- | ---------------------------- |
| deleted | boolean | true if the file was deleted |
curl -X DELETE https://api.baponi.ai/v1/files \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "output.csv"
  }'
{
  "deleted": true
}

Errors

All Files API endpoints return errors in the standard Baponi format. See the API Overview for the full error response structure.

| Status | Error Code       | When                                                                              |
| ------ | ---------------- | --------------------------------------------------------------------------------- |
| 400    | validation_error | Missing required parameters, invalid source value, or content_length exceeds 10 GiB |
| 401    | unauthorized     | Missing or invalid API key                                                        |
| 403    | forbidden        | Attempting to write to or delete from a read-only volume                          |
| 404    | not_found        | Thread, volume, or file path does not exist                                       |
| 429    | rate_limited     | Too many requests. See rate limiting.                                             |
{
  "error": "validation_error",
  "message": "content_length exceeds maximum of 10737418240 bytes (10 GiB)"
}
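Client code can surface this error body as a typed exception rather than inspecting dicts at every call site. A minimal sketch (BaponiError is ours, not provided by any Baponi SDK; it assumes only the error format shown above):

```python
class BaponiError(Exception):
    """Raised for any non-2xx response from the Files API."""

    def __init__(self, status: int, error: str, message: str):
        super().__init__(f"{status} {error}: {message}")
        self.status = status
        self.error = error
        self.message = message

def raise_for_error(status: int, body: dict) -> None:
    """Map the standard error body onto an exception; no-op on success."""
    if 200 <= status < 300:
        return
    raise BaponiError(status, body.get("error", "unknown"), body.get("message", ""))
```

Callers can then catch BaponiError and, for example, back off when e.status == 429.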

Complete workflow: upload, execute, download


This Python example demonstrates a full lifecycle: upload an input file, execute code that reads it, then download the output.

import requests

API_KEY = "sk-us-YOUR_API_KEY"
BASE_URL = "https://api.baponi.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
THREAD_ID = "analysis-run-42"

# --- Step 1: Upload input data to the thread ---
input_data = b'{"values": [1, 2, 3, 4, 5]}'
upload_resp = requests.post(
    f"{BASE_URL}/v1/files/upload_url",
    headers=HEADERS,
    json={
        "source": "thread",
        "id": THREAD_ID,
        "path": "input.json",
        "content_type": "application/json",
        "content_length": len(input_data),
    },
).json()
requests.put(
    upload_resp["url"],
    headers=upload_resp["headers"],
    data=input_data,
).raise_for_status()
print("Uploaded input.json")

# --- Step 2: Execute code that reads input and writes output ---
exec_resp = requests.post(
    f"{BASE_URL}/v1/sandbox/execute",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={
        "code": """
import json

with open('/home/baponi/input.json') as f:
    data = json.load(f)

result = {"sum": sum(data["values"]), "count": len(data["values"])}
with open('/home/baponi/output.json', 'w') as f:
    json.dump(result, f)
print(f"Processed {result['count']} values, sum = {result['sum']}")
""",
        "language": "python",
        "thread_id": THREAD_ID,
    },
).json()
print(f"Execution stdout: {exec_resp['stdout']}")

# --- Step 3: Download the output ---
download_resp = requests.post(
    f"{BASE_URL}/v1/files/download_url",
    headers=HEADERS,
    json={
        "source": "thread",
        "id": THREAD_ID,
        "path": "output.json",
    },
).json()
output = requests.get(download_resp["url"]).json()
print(f"Result: {output}")
# Result: {'sum': 15, 'count': 5}

# --- Step 4: Verify files are persisted ---
list_resp = requests.post(
    f"{BASE_URL}/v1/files/list",
    headers=HEADERS,
    json={"source": "thread", "id": THREAD_ID},
).json()
for f in list_resp["files"]:
    print(f"  {f['path']} ({f['size']} bytes)")
#   input.json (27 bytes)
#   output.json (23 bytes)

Baponi uses pre-signed URLs for all file transfers. This design has three benefits:

  1. No data through Baponi servers. File bytes travel directly between your client and the storage backend (GCS, S3, or Azure). Baponi only generates the signed URL, which is a lightweight API call.
  2. 10 GiB file support. Because files stream directly to cloud storage, there is no proxy payload limit. Upload files up to 10 GiB per request.
  3. Enterprise data residency. Self-hosted Baponi deployments generate signed URLs for the customer’s own storage buckets. Data never leaves the customer’s cloud account.

Signed URLs expire after 15 minutes. If an upload or download is interrupted, request a new URL and retry. Partial uploads do not consume storage. Cloud storage providers discard incomplete uploads automatically.
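Since an interrupted transfer only needs a fresh URL, the retry logic can be written generically. A sketch (all names are ours): get_url would POST to /v1/files/download_url or /v1/files/upload_url and return the signed URL, while transfer performs the actual GET or PUT against cloud storage:

```python
def transfer_with_retry(get_url, transfer, attempts: int = 3):
    """Fetch a fresh signed URL before each attempt; retry on failure.

    get_url() -> str     requests a new signed URL from the API
    transfer(url) -> T   performs the GET/PUT against cloud storage
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return transfer(get_url())
        except Exception as exc:  # e.g. an expired URL rejected by storage
            last_exc = exc
    raise last_exc
```

Requesting a new URL on every attempt is cheap (it is a lightweight API call) and guarantees the retry never reuses a URL that expired mid-transfer.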