## Documentation Index

Fetch the complete documentation index at: https://avala.ai/docs/llms.txt. Use this file to discover all available pages before exploring further.
The Python SDK requires Python 3.9+.
## Installation
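The section does not name an install command; assuming the SDK is published on PyPI under the package name `avala` (an assumption, not confirmed by this page), installation would typically be:

```shell
# Assumed PyPI package name; verify against the official install docs
pip install avala
```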
## Quick Start

```python
from avala import Client

client = Client(api_key="your-api-key")

# List all datasets
datasets = client.datasets.list()
for dataset in datasets:
    print(dataset.name, dataset.uid)
```
## Create an Account

The `signup` function creates a new Avala account and returns an API key. It does not require authentication.

```python
from avala import signup

result = signup(
    email="dev@acme.com",
    password="SecurePass123!",
    first_name="Jane",  # optional
    last_name="Doe",    # optional
)

print(f"User: {result.user.email}")
print(f"API Key: {result.api_key}")
```
An async variant is also available:

```python
from avala import async_signup

result = await async_signup(email="dev@acme.com", password="SecurePass123!")
```
## Authentication

The SDK authenticates using your Avala API key, which is sent via the `X-Avala-Api-Key` header on every request. You can provide the key directly or let the SDK read it from the environment.
### Option 1: Pass the key directly

```python
from avala import Client

client = Client(api_key="your-api-key")
```
### Option 2: Use an environment variable

```bash
export AVALA_API_KEY="your-api-key"
```

```python
from avala import Client

# Automatically reads AVALA_API_KEY from the environment
client = Client()
```
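Either way, the client ends up attaching the same header. For tooling outside the SDK, you can send it manually; a minimal sketch (the `/datasets` path in the comment is an assumption inferred from `client.datasets.list()`, not a documented endpoint):

```python
# The SDK attaches this header to every request it makes
API_KEY = "your-api-key"
headers = {"X-Avala-Api-Key": API_KEY}

# Equivalent raw call, e.g. with curl (endpoint path assumed, not confirmed):
#   curl -H "X-Avala-Api-Key: $AVALA_API_KEY" https://api.avala.ai/api/v1/datasets
print(headers)
```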
## Async Support

The SDK ships with a fully async client built on top of httpx. Use `AsyncClient` for non-blocking I/O in async applications.

```python
import asyncio

from avala import AsyncClient

async def main():
    client = AsyncClient(api_key="your-api-key")
    datasets = await client.datasets.list()
    for dataset in datasets:
        print(dataset.name)
    # Always close the client when done, or use it as a context manager
    await client.close()

asyncio.run(main())
```
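The async client pays off when independent requests run concurrently instead of back-to-back. A sketch with `asyncio.gather`, using stand-in coroutines in place of live calls such as `client.datasets.list()` and `client.projects.list()`:

```python
import asyncio

# Stand-ins for client.datasets.list() / client.projects.list()
async def fetch_datasets():
    await asyncio.sleep(0.01)  # simulated network latency
    return ["dataset-a", "dataset-b"]

async def fetch_projects():
    await asyncio.sleep(0.01)
    return ["project-x"]

async def main():
    # Both requests are in flight at the same time
    datasets, projects = await asyncio.gather(fetch_datasets(), fetch_projects())
    return datasets, projects

datasets, projects = asyncio.run(main())
print(len(datasets), len(projects))  # 2 1
```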
Using the async context manager:

```python
import asyncio

from avala import AsyncClient

async def main():
    async with AsyncClient() as client:
        datasets = await client.datasets.list()
        for dataset in datasets:
            print(dataset.name)

asyncio.run(main())
```
## Working with Datasets

The Python SDK ships an `avala datasets upload` CLI command for uploading local files. See Upload local files for the end-to-end flow, including the per-user 10 GB storage cap.
### List Datasets

```python
datasets = client.datasets.list()
for dataset in datasets:
    print(f"{dataset.name} ({dataset.uid})")
    print(f"  Items: {dataset.item_count}")
    print(f"  Created: {dataset.created_at}")
```
### Get a Dataset

```python
dataset = client.datasets.get("550e8400-e29b-41d4-a716-446655440000")
print(dataset.name)
print(dataset.slug)
print(dataset.item_count)
```
## Working with Projects

### List Projects

```python
projects = client.projects.list()
for project in projects:
    print(f"{project.name} ({project.uid})")
    print(f"  Status: {project.status}")
    print(f"  Created: {project.created_at}")
```
### Get a Project

```python
project = client.projects.get("770a9600-a40d-63f6-c938-668877660000")
print(project.name)
print(project.status)
```
## Working with Tasks

### List Tasks

```python
tasks = client.tasks.list(project="770a9600-a40d-63f6-c938-668877660000", status="pending")
for task in tasks:
    print(f"{task.uid} — {task.name} ({task.status})")
```
### Get a Task

```python
task = client.tasks.get("990c1800-b62f-85a8-e150-880099880000")
print(task.name)
print(task.status)
```
## Working with Exports

Note: the Python SDK supports full CRUD operations for agents, webhooks, storage configurations, inference providers, quality targets, consensus config, and organizations. Datasets support upload via the `avala datasets upload` CLI command. Projects remain read-only; use the REST API for project mutations.
### Create an Export

```python
export = client.exports.create(project="770a9600-a40d-63f6-c938-668877660000")
print(f"Export started: {export.uid}")
print(f"Status: {export.status}")
```
### Poll for Completion

```python
import time

export = client.exports.create(project="770a9600-a40d-63f6-c938-668877660000")

# Note: a production loop should also handle a failure status and bound the wait
while export.status != "completed":
    time.sleep(2)
    export = client.exports.get(export.uid)

print(f"Status: {export.status}")
print(f"Download: {export.download_url}")
```
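A bare `while` loop can spin forever if the export never completes. A more defensive sketch with a timeout and terminal-state check (treating `"failed"` as a terminal status is an assumption, not confirmed by this page):

```python
import time

def poll_until_done(fetch, interval=2.0, timeout=300.0):
    """Poll `fetch()` until its result reaches a terminal status.

    `fetch` is any zero-argument callable returning an object with a
    `.status` attribute, e.g. lambda: client.exports.get(export_uid).
    """
    deadline = time.monotonic() + timeout
    while True:
        obj = fetch()
        if obj.status in ("completed", "failed"):  # assumed terminal states
            return obj
        if time.monotonic() >= deadline:
            raise TimeoutError(f"still {obj.status!r} after {timeout}s")
        time.sleep(interval)
```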
## Working with Organizations

### List Organizations

```python
orgs = client.organizations.list()
for org in orgs:
    print(f"{org.name} ({org.slug})")
```
### Create an Organization

```python
org = client.organizations.create(name="My Team", visibility="private", industry="technology")
print(f"Created: {org.name} ({org.uid})")
```
### Manage Members

```python
members = client.organizations.list_members("my-team")
for member in members:
    print(f"{member.full_name} - {member.role}")
```
## Working with Slices

### List Slices

```python
slices = client.slices.list("my-org")
for s in slices:
    print(f"{s.name}: {s.item_count} items")
```
## Browsing Dataset Items

### List Items in a Dataset

```python
items = client.datasets.list_items("my-org", "my-dataset")
for item in items:
    print(f"{item.uid}: {item.key}")
```
### List Sequences

```python
sequences = client.datasets.list_sequences("my-org", "my-dataset")
for seq in sequences:
    print(f"{seq.uid}: {seq.key} ({seq.number_of_frames} frames)")
```
### Get a Sequence

```python
seq = client.datasets.get_sequence("my-org", "my-dataset", sequence_uid)
print(f"{seq.key}: {len(seq.frames or [])} frames, status={seq.status}")
```
## Validating a Dataset After Ingest

The SDK ships typed helpers for post-ingest validation, useful for scripts or AI agents that need to verify an upload without opening Mission Control.
### Dataset Health

```python
health = client.datasets.get_health("my-org", "my-dataset")
assert health.ingest_ok, f"Ingest issues: {health.issues}"
print(f"Frames: {health.total_frames}, sequences: {health.sequence_count}")
for seq in health.sequences:
    print(f"  {seq.key}: {seq.frame_count} frames, lidar_calib={seq.has_lidar_calibration}")
```
### Inspect a Single Frame

```python
frame = client.datasets.get_frame("my-org", "my-dataset", sequence_uid, frame_idx=0)
print(f"Model: {frame.model}, xi: {frame.xi}, alpha: {frame.alpha}")
print(f"Device position: {frame.device_position}")
for image in frame.images or []:
    print(f"  camera fx={image.fx} fy={image.fy} cx={image.cx} cy={image.cy}")
```
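The `fx`/`fy`/`cx`/`cy` fields are standard camera intrinsics (focal lengths and principal point in pixels). Assuming a plain pinhole model — the `model`/`xi`/`alpha` fields suggest fisheye-style models may also appear, which project differently — mapping a camera-frame 3D point to pixel coordinates looks like:

```python
def project_pinhole(x, y, z, fx, fy, cx, cy):
    # Standard pinhole projection: pixel = focal * (coord / depth) + principal point
    u = fx * (x / z) + cx
    v = fy * (y / z) + cy
    return u, v

# A point 2 m ahead and 0.5 m to the right, with illustrative intrinsics
u, v = project_pinhole(0.5, 0.0, 2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(u, v)  # 470.0 240.0
```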
### Read Calibration

```python
calib = client.datasets.get_calibration("my-org", "my-dataset", sequence_uid)
for cam in calib.cameras:
    print(f"{cam.camera_id}: {cam.model} fx={cam.fx} fy={cam.fy}")
```
## Type Hints

The SDK is fully typed. All response objects are Pydantic models with complete type annotations, giving you autocomplete and type checking out of the box.
```python
from avala.types import Dataset, Project, Export, Task

def process_dataset(dataset: Dataset) -> None:
    print(dataset.name)        # str
    print(dataset.uid)         # str
    print(dataset.item_count)  # int
    print(dataset.created_at)  # Optional[datetime]

def process_task(task: Task) -> None:
    print(task.uid)      # str
    print(task.name)     # Optional[str]
    print(task.status)   # Optional[str]
    print(task.project)  # Optional[str]
```
## Error Handling

The SDK raises typed exceptions so you can handle different failure modes precisely.
```python
from avala import Client
from avala.errors import (
    AvalaError,
    ForbiddenError,
    NotFoundError,
    RateLimitError,
    ValidationError,
)

client = Client()

try:
    dataset = client.datasets.get("nonexistent")
except ForbiddenError as e:
    print(f"Permission denied: {e.message}")
    # e.body contains structured error code if available
    # e.g. {"code": "plan_insufficient", "doc_url": "..."}
except NotFoundError as e:
    print(f"Dataset not found: {e.message}")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds.")
except ValidationError as e:
    print(f"Invalid request: {e.message}")
    for detail in e.details:
        print(f"  - {detail}")
except AvalaError as e:
    # Catch-all for any other Avala API error
    print(f"API error ({e.status_code}): {e.message}")
```
| Exception | Description |
|---|---|
| `AvalaError` | Base exception for all Avala API errors. |
| `AuthenticationError` | Invalid or missing API key (HTTP 401). |
| `ForbiddenError` | Insufficient permissions, wrong scope, or plan level (HTTP 403). Check `e.body` for structured error codes. |
| `NotFoundError` | The requested resource does not exist (HTTP 404). |
| `RateLimitError` | You have exceeded the API rate limit (HTTP 429). Includes a `retry_after` attribute. |
| `ValidationError` | The request payload failed validation (HTTP 400/422). Includes a `details` attribute with field-level errors. |
| `ServerError` | The server returned an internal error (HTTP 5xx). |
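Because `RateLimitError` carries a `retry_after` attribute, a small retry wrapper is easy to build. A sketch, shown with a stand-in exception class rather than the real `avala.errors.RateLimitError`:

```python
import time

class RateLimitError(Exception):  # stand-in for avala.errors.RateLimitError
    def __init__(self, retry_after):
        self.retry_after = retry_after

def with_rate_limit_retry(call, max_attempts=3):
    # Retry the call, sleeping for the server-suggested interval each time
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise
            time.sleep(e.retry_after)

# Usage with the real SDK would look like:
#   dataset = with_rate_limit_retry(lambda: client.datasets.get(uid))
```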
## Polling with `datasets.wait()`

After uploading a dataset, use `wait()` to block until it finishes processing:

```python
# Upload a dataset, then wait for it to be ready
dataset = client.datasets.create(name="my-dataset", data_type="images")
# ... upload files ...

# Block until status is "created" (processing complete)
ready = client.datasets.wait(
    dataset.uid,
    status="created",  # Target status (default)
    interval=10.0,     # Seconds between polls (default 10, minimum 1)
    timeout=3600.0,    # Maximum wait in seconds (default 1 hour)
)
print(f"Dataset ready: {ready.item_count} items")
```

`wait()` polls the dataset status at the specified interval. If the dataset does not reach the target status before the timeout, it raises `TimeoutError`.

An async variant is also available:

```python
ready = await async_client.datasets.wait(dataset.uid, timeout=600)
```
## Pagination

List methods return a `CursorPage` object. You can iterate through items directly or control pagination manually.

```python
# Iterate through items on the current page
for dataset in client.datasets.list():
    print(dataset.name)

# Manual pagination: control page size and access the cursor
page = client.datasets.list(limit=10)
for dataset in page.items:
    print(dataset.name)

# Fetch the next page
if page.has_more:
    next_page = client.datasets.list(limit=10, cursor=page.next_cursor)
```
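To walk every page rather than just the first, the cursor fields compose into a small generator. A sketch; the `limit`/`cursor` keywords and the `items`/`has_more`/`next_cursor` attributes follow the example above:

```python
def iter_all(list_fn, limit=100):
    """Yield every item across pages of a cursor-paginated list method.

    `list_fn` is any callable accepting limit/cursor keywords and returning
    a page with .items, .has_more, and .next_cursor, e.g. client.datasets.list.
    """
    cursor = None
    while True:
        page = list_fn(limit=limit, cursor=cursor)
        yield from page.items
        if not page.has_more:
            return
        cursor = page.next_cursor

# Usage with the real SDK would look like:
#   for dataset in iter_all(client.datasets.list):
#       print(dataset.name)
```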
## Fleet Management

Fleet Management is in preview. APIs described here may change.

The `fleet` namespace provides access to device registry, recordings, events, rules, and alerts.

```python
# List online devices
devices = client.fleet.devices.list(status="online")

# Create a timeline event on a recording
event = client.fleet.events.create(
    recording_id="rec_abc123",
    timestamp="2026-01-15T10:30:00Z",
    type="anomaly",
    label="Gripper force spike",
    metadata={"force_n": 45.2},
)

# Create a recording rule
rule = client.fleet.rules.create(
    name="High Latency Alert",
    condition={
        "type": "threshold",
        "topic": "/diagnostics/latency",
        "field": "data.value",
        "operator": "gt",
        "value": 100,
    },
    actions=[
        {"type": "tag", "value": "high-latency"},
        {"type": "notify", "channel_id": "ch_your_channel_id"},
    ],
)
```

See the Fleet Dashboard guide for complete examples.
## Configuration

You can customize the client behavior at initialization time.

```python
from avala import Client

client = Client(
    api_key="your-api-key",
    base_url="https://api.avala.ai/api/v1",  # Default
    timeout=60,     # Request timeout in seconds (default: 30)
    max_retries=3,  # Number of retries on transient errors (default: 2)
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `AVALA_API_KEY` env var | Your Avala API key. |
| `base_url` | `str` | `https://api.avala.ai/api/v1` | The API base URL. |
| `timeout` | `float` | `30` | Request timeout in seconds. |
| `max_retries` | `int` | `2` | Number of automatic retries on transient failures (5xx, timeouts). |