Files API
Baponi is a sandboxed code execution platform for AI agents. The Files API lets you list, upload, download, and delete files stored in threads and volumes, the two persistent storage abstractions in Baponi. All file transfers use signed URLs that point directly to cloud storage (GCS, S3, or Azure depending on deployment), so no file data passes through Baponi’s servers. Maximum file size: 10 GiB.
Base URL: https://api.baponi.ai
All requests require an API key in the Authorization header:
Authorization: Bearer sk-us-YOUR_API_KEY

Threads vs volumes
Before using the Files API, understand the two storage sources:
| | Thread | Volume |
|---|---|---|
| Created by | Executing code with a thread_id | Admin console |
| Persistence | Files in /home/baponi are saved between executions | Always persistent |
| Scope | Tied to a single thread ID | Shared across API keys in the organization |
| Access mode | Read-write | Read-write or read-only (configurable) |
| Source parameter | "thread" | "volume" |
| ID parameter | The thread_id value used during execution | The volume slug |
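Because every endpoint accepts the same `source`/`id` pair, client code can treat threads and volumes uniformly. A minimal sketch of building a list request body for either source (`build_list_request` is a hypothetical helper for illustration, not part of any official Baponi SDK):

```python
def build_list_request(source, storage_id, path=None):
    """Build the JSON body for POST /v1/files/list.

    source: "thread" or "volume"; storage_id: the thread ID or volume slug.
    """
    if source not in ("thread", "volume"):
        raise ValueError(f"invalid source: {source!r}")
    body = {"source": source, "id": storage_id}
    if path is not None:
        body["path"] = path  # optional prefix filter
    return body
```

The same helper works for both sources: `build_list_request("thread", "my-thread-123")` or `build_list_request("volume", "training-data", path="datasets/")`.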
List files
Returns a list of files and directories at the specified path. Use the path parameter to filter by prefix, for example, path: "data/" returns only files under the data/ directory.
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | "thread" or "volume" |
| id | string | Yes | Thread ID or volume slug |
| path | string | No | Path prefix to filter results. Omit to list all files. |
Response fields
| Field | Type | Description |
|---|---|---|
| files | array | List of file objects |
| files[].name | string | File or directory name |
| files[].path | string | Full path relative to the storage root |
| files[].size | integer | File size in bytes (0 for directories) |
| files[].modified | string | ISO 8601 timestamp of last modification |
| files[].is_directory | boolean | true if the entry is a directory |
Examples
```shell
# List all files in a thread
curl -X POST https://api.baponi.ai/v1/files/list \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123"
  }'

# List files under a specific directory in a volume
curl -X POST https://api.baponi.ai/v1/files/list \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "volume",
    "id": "training-data",
    "path": "datasets/2026/"
  }'
```

```python
import requests

response = requests.post(
    "https://api.baponi.ai/v1/files/list",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "source": "thread",
        "id": "my-thread-123",
    },
)
files = response.json()["files"]

for f in files:
    if not f["is_directory"]:
        print(f"{f['path']} ({f['size']} bytes)")
```

Response

```json
{
  "files": [
    {
      "name": "data",
      "path": "data",
      "size": 0,
      "modified": "2026-03-04T11:58:00Z",
      "is_directory": true
    },
    {
      "name": "output.csv",
      "path": "data/output.csv",
      "size": 2048,
      "modified": "2026-03-04T12:00:00Z",
      "is_directory": false
    },
    {
      "name": "model.pkl",
      "path": "model.pkl",
      "size": 15728640,
      "modified": "2026-03-04T12:01:30Z",
      "is_directory": false
    }
  ]
}
```

Generate upload URL
Generates a pre-signed URL for uploading a file directly to cloud storage. The upload bypasses Baponi’s servers entirely. Your client sends the file straight to the storage backend (GCS, S3, or Azure). Signed URLs expire after 15 minutes.
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | "thread" or "volume" |
| id | string | Yes | Thread ID or volume slug |
| path | string | Yes | Destination file path within the storage root |
| content_type | string | No | MIME type of the file. Default: application/octet-stream |
| content_length | integer | Yes | Exact file size in bytes. Maximum: 10 GiB (10,737,418,240 bytes) |
Response fields
| Field | Type | Description |
|---|---|---|
| url | string | Pre-signed upload URL |
| method | string | HTTP method to use (always "PUT") |
| headers | object | Headers to include in the upload request |
| expires_at | string | ISO 8601 timestamp when the URL expires (15 minutes from creation) |
Examples
```shell
# Step 1: Get a signed upload URL
UPLOAD_RESPONSE=$(curl -s -X POST https://api.baponi.ai/v1/files/upload_url \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "input.json",
    "content_type": "application/json",
    "content_length": 2048
  }')

# Step 2: Extract the signed URL
UPLOAD_URL=$(echo "$UPLOAD_RESPONSE" | jq -r '.url')

# Step 3: Upload the file directly to cloud storage
curl -X PUT "$UPLOAD_URL" \
  -H "Content-Type: application/json" \
  --data-binary @input.json
```

```python
import requests
from pathlib import Path

file_path = Path("input.json")
file_size = file_path.stat().st_size
content_type = "application/json"

# Step 1: Get a signed upload URL from Baponi
response = requests.post(
    "https://api.baponi.ai/v1/files/upload_url",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "source": "thread",
        "id": "my-thread-123",
        "path": "input.json",
        "content_type": content_type,
        "content_length": file_size,
    },
)
upload_info = response.json()

# Step 2: Upload the file directly to cloud storage
with open(file_path, "rb") as f:
    upload_response = requests.put(
        upload_info["url"],
        headers=upload_info["headers"],
        data=f,
    )
    upload_response.raise_for_status()

print(f"Uploaded {file_size} bytes, URL expires at {upload_info['expires_at']}")
```

Response

```json
{
  "url": "https://storage.googleapis.com/baponi-prod-storage/orgs/org_abc/threads/my-thread-123/input.json?X-Goog-Algorithm=...",
  "method": "PUT",
  "headers": {
    "Content-Type": "application/json"
  },
  "expires_at": "2026-03-04T12:15:00Z"
}
```

Generate download URL
Generates a pre-signed URL for downloading a file directly from cloud storage. Like uploads, the download bypasses Baponi’s servers. Signed URLs expire after 15 minutes.
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | "thread" or "volume" |
| id | string | Yes | Thread ID or volume slug |
| path | string | Yes | File path to download |
Response fields
| Field | Type | Description |
|---|---|---|
| url | string | Pre-signed download URL |
| method | string | HTTP method to use (always "GET") |
| expires_at | string | ISO 8601 timestamp when the URL expires (15 minutes from creation) |
Examples
```shell
# Step 1: Get a signed download URL
DOWNLOAD_RESPONSE=$(curl -s -X POST https://api.baponi.ai/v1/files/download_url \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "output.csv"
  }')

# Step 2: Download the file directly from cloud storage
DOWNLOAD_URL=$(echo "$DOWNLOAD_RESPONSE" | jq -r '.url')
curl -o output.csv "$DOWNLOAD_URL"
```

```python
import requests

# Step 1: Get a signed download URL from Baponi
response = requests.post(
    "https://api.baponi.ai/v1/files/download_url",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "source": "thread",
        "id": "my-thread-123",
        "path": "output.csv",
    },
)
download_info = response.json()

# Step 2: Download the file directly from cloud storage
file_response = requests.get(download_info["url"])
file_response.raise_for_status()

with open("output.csv", "wb") as f:
    f.write(file_response.content)

print(f"Downloaded {len(file_response.content)} bytes")
```

Response

```json
{
  "url": "https://storage.googleapis.com/baponi-prod-storage/orgs/org_abc/threads/my-thread-123/output.csv?X-Goog-Algorithm=...",
  "method": "GET",
  "expires_at": "2026-03-04T12:15:00Z"
}
```

Delete a file
Permanently deletes a file from a thread or volume. This action cannot be undone. Attempting to delete from a read-only volume returns 403 Forbidden.
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | "thread" or "volume" |
| id | string | Yes | Thread ID or volume slug |
| path | string | Yes | File path to delete |
Response fields
| Field | Type | Description |
|---|---|---|
| deleted | boolean | true if the file was deleted |
Examples
```shell
curl -X DELETE https://api.baponi.ai/v1/files \
  -H "Authorization: Bearer $BAPONI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "thread",
    "id": "my-thread-123",
    "path": "output.csv"
  }'
```

```python
import requests

response = requests.delete(
    "https://api.baponi.ai/v1/files",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "source": "thread",
        "id": "my-thread-123",
        "path": "output.csv",
    },
)
result = response.json()
print(f"Deleted: {result['deleted']}")
```

Response

```json
{
  "deleted": true
}
```

Error responses
All Files API endpoints return errors in the standard Baponi format. See the API Overview for the full error response structure.
| Status | Error Code | When |
|---|---|---|
| 400 | validation_error | Missing required parameters, invalid source value, or content_length exceeds 10 GiB |
| 401 | unauthorized | Missing or invalid API key |
| 403 | forbidden | Attempting to write to or delete from a read-only volume |
| 404 | not_found | Thread, volume, or file path does not exist |
| 429 | rate_limited | Too many requests. See rate limiting. |
Example error
```json
{
  "error": "validation_error",
  "message": "content_length exceeds maximum of 10737418240 bytes (10 GiB)"
}
```

Complete workflow: upload, execute, download
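Client code can map these status codes to recovery actions before retrying or surfacing the error. A sketch based on the table above (`classify_error` is a hypothetical helper for illustration, not an official SDK function):

```python
def classify_error(status_code):
    """Map a Files API error status to a suggested client action."""
    if status_code == 429:
        return "retry_with_backoff"            # rate_limited
    if status_code in (401, 403):
        return "fix_credentials_or_permissions"  # unauthorized / forbidden
    if status_code == 404:
        return "check_source_id_and_path"      # not_found
    if status_code == 400:
        return "fix_request_parameters"        # validation_error
    return "unexpected"
```

Only 429 is safely retryable as-is; the 4xx codes indicate the request itself must change.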
This Python example demonstrates a full lifecycle: upload an input file, execute code that reads it, then download the output.
```python
import requests

API_KEY = "sk-us-YOUR_API_KEY"
BASE_URL = "https://api.baponi.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
THREAD_ID = "analysis-run-42"

# --- Step 1: Upload input data to the thread ---
input_data = b'{"values": [1, 2, 3, 4, 5]}'

upload_resp = requests.post(
    f"{BASE_URL}/v1/files/upload_url",
    headers=HEADERS,
    json={
        "source": "thread",
        "id": THREAD_ID,
        "path": "input.json",
        "content_type": "application/json",
        "content_length": len(input_data),
    },
).json()

requests.put(
    upload_resp["url"],
    headers=upload_resp["headers"],
    data=input_data,
).raise_for_status()
print("Uploaded input.json")

# --- Step 2: Execute code that reads input and writes output ---
exec_resp = requests.post(
    f"{BASE_URL}/v1/sandbox/execute",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={
        "code": """import json

with open('/home/baponi/input.json') as f:
    data = json.load(f)

result = {"sum": sum(data["values"]), "count": len(data["values"])}

with open('/home/baponi/output.json', 'w') as f:
    json.dump(result, f)

print(f"Processed {result['count']} values, sum = {result['sum']}")""",
        "language": "python",
        "thread_id": THREAD_ID,
    },
).json()
print(f"Execution stdout: {exec_resp['stdout']}")

# --- Step 3: Download the output ---
download_resp = requests.post(
    f"{BASE_URL}/v1/files/download_url",
    headers=HEADERS,
    json={
        "source": "thread",
        "id": THREAD_ID,
        "path": "output.json",
    },
).json()

output = requests.get(download_resp["url"]).json()
print(f"Result: {output}")
# Result: {"sum": 15, "count": 5}

# --- Step 4: Verify files are persisted ---
list_resp = requests.post(
    f"{BASE_URL}/v1/files/list",
    headers=HEADERS,
    json={"source": "thread", "id": THREAD_ID},
).json()

for f in list_resp["files"]:
    print(f"  {f['path']} ({f['size']} bytes)")
# input.json (27 bytes)
# output.json (25 bytes)
```

Signed URL architecture
Baponi uses pre-signed URLs for all file transfers. This design has three benefits:
- No data through Baponi servers. File bytes travel directly between your client and the storage backend (GCS, S3, or Azure). Baponi only generates the signed URL, which is a lightweight API call.
- 10 GiB file support. Because files stream directly to cloud storage, there is no proxy payload limit. Upload files up to 10 GiB per request.
- Enterprise data residency. Self-hosted Baponi deployments generate signed URLs for the customer’s own storage buckets. Data never leaves the customer’s cloud account.
Signed URLs expire after 15 minutes. If an upload or download is interrupted, request a new URL and retry. Partial uploads do not consume storage. Cloud storage providers discard incomplete uploads automatically.
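The retry loop can be sketched as a small wrapper that fetches a fresh signed URL on every attempt, so an expired URL never blocks a transfer. `with_fresh_url` is a hypothetical helper, assuming nothing beyond the endpoints documented above; `get_url` would call /v1/files/upload_url or /v1/files/download_url, and `transfer` performs the actual PUT or GET and raises on failure:

```python
import time

def with_fresh_url(get_url, transfer, attempts=3, sleep=time.sleep):
    """Retry a signed-URL transfer, requesting a new URL each attempt.

    get_url: callable returning a fresh pre-signed URL from Baponi.
    transfer: callable that uploads/downloads with the URL, raising on failure.
    """
    for attempt in range(attempts):
        url = get_url()  # fresh URL per attempt, since URLs expire in 15 min
        try:
            return transfer(url)
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(2 ** attempt)  # simple exponential backoff
```

Injecting `sleep` keeps the helper testable; in production code you would likely also restrict the caught exception type to transport errors.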