This page covers the building blocks of the Avala platform: visualization, annotation, datasets, projects, tasks, organizations, labels, quality control, and sequences. Understanding these concepts will help you design effective data workflows for physical AI.

Visualization

Avala provides GPU-accelerated visualization for sensor data directly in the browser. These concepts apply across all visualization features.

Viewers

Avala includes specialized viewers for different data types. The multi-sensor viewer handles MCAP and ROS recordings with synchronized playback across all sensor streams. The 3D point cloud viewer renders LiDAR data with six visualization modes. The Gaussian Splat viewer renders photorealistic 3D scene reconstructions using WebGPU.

Panels

The multi-sensor viewer organizes data into panels — independent visualization windows for different data streams. Avala supports eight panel types:
Panel Type          Description
Image               Camera frames and image streams
3D / Point Cloud    LiDAR scans and 3D geometry
Plot                Time-series data and numeric signals
Raw Messages        Decoded message payloads
Log                 Textual log streams
Map                 Geographic position and trajectories
Gauge               Real-time numeric readouts
State Transitions   Discrete state changes over time
Topics are automatically assigned to panels based on their schema.
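The schema-based assignment can be pictured as a lookup from message schema to panel type. The sketch below is illustrative only: the schema names are common ROS message types, and the mapping and fallback behavior are assumptions, not Avala's internal logic.

```python
# Hypothetical sketch: route a topic to a panel type based on its message schema.
# The actual assignment logic is internal to Avala; this only illustrates the idea.
SCHEMA_TO_PANEL = {
    "sensor_msgs/msg/Image": "Image",
    "sensor_msgs/msg/CompressedImage": "Image",
    "sensor_msgs/msg/PointCloud2": "3D / Point Cloud",
    "sensor_msgs/msg/NavSatFix": "Map",
    "rosgraph_msgs/msg/Log": "Log",
}

def assign_panel(schema_name: str) -> str:
    # Assumed fallback: schemas without a dedicated viewer go to Raw Messages.
    return SCHEMA_TO_PANEL.get(schema_name, "Raw Messages")

print(assign_panel("sensor_msgs/msg/PointCloud2"))  # 3D / Point Cloud
```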

Layouts

Multi-window layouts arrange panels in a configurable grid. The layout composer automatically builds optimized arrangements based on the topics in your data, or you can customize the layout manually by dragging, resizing, and rearranging panels.

Timelines

All panels in a viewer share a synchronized timeline. Navigate frame-by-frame, scrub to specific timestamps, or play back recordings at configurable speeds. The timeline keeps all sensor streams aligned regardless of their individual capture frequencies.

Sensor Streams

MCAP recordings contain multiple sensor streams (topics). Each topic carries a specific data type — images, point clouds, IMU readings, GPS coordinates — at its own frequency. Avala synchronizes all streams by timestamp so you can see the full sensor picture at any moment in time.

Visualization Modes

Point cloud data can be colored using six modes:
Mode               Description
Neutral            Single uniform color
Intensity          Colored by return strength
Rainbow            Temporal or sequential coloring
Label              Colored by semantic class
Panoptic           Colored by instance identity
Image Projection   Textured with projected camera imagery
Visualization modes apply to the 3D point cloud viewer and work with both standalone LiDAR datasets and point cloud streams within MCAP recordings.

Datasets

A dataset is a collection of data items (images, video frames, point clouds, or multi-sensor recordings) that serve as the raw material for visualization and annotation.

Dataset Properties

Property     Description
name         Human-readable name
slug         URL-friendly identifier (unique within the owner’s namespace)
data_type    Type of data: image, video, lidar, mcap, image_3d, splat
visibility   public or private
owner        User or organization that owns the dataset
item_count   Total number of data items in the dataset
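As a rough sketch, a dataset record with these properties could be modeled like this. The field names follow the table above; the class itself is illustrative, not an official Avala type, and the example values are made up.

```python
from dataclasses import dataclass

# Illustrative model of the dataset properties listed above; not Avala's API schema.
@dataclass
class Dataset:
    name: str          # human-readable name
    slug: str          # unique within the owner's namespace
    data_type: str     # one of: image, video, lidar, mcap, image_3d, splat
    visibility: str    # "public" or "private"
    owner: str         # user or organization that owns the dataset
    item_count: int    # total number of data items

ds = Dataset("Highway Drives", "highway-drives", "mcap", "private", "acme-robotics", 42)
```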

Data Items

Each dataset contains items — individual data samples:
  • Image datasets — Each item is a single image file.
  • Video datasets — Items are video frames, grouped into sequences.
  • LiDAR datasets — Items are individual point cloud scans.
  • MCAP datasets — Items contain synchronized multi-sensor frames (camera + LiDAR + IMU).

Sequences

Sequences group related items for temporal or multi-frame data:
  • Video frames from the same recording
  • LiDAR scans from a continuous driving session
  • Synchronized multi-camera captures at consecutive timestamps
Sequences enable frame-by-frame navigation, object tracking across frames, and temporal consistency in annotations. Sequence status workflow:
uploading → processing → ready
                       → failed

Projects

A project defines an annotation workflow by connecting one or more datasets to a specific task type, label taxonomy, and quality control configuration.

Project Components

Project
├── Datasets (data sources)
├── Task Type (annotation method)
├── Label Config (object classes, attributes)
├── Quality Control (review stages, consensus)
└── Tasks (individual work units)

Task Types

Projects are configured with one of the following task types:
Task Type                API Value                Description
Image Annotation         image-annotation         2D annotation on single images (boxes, polygons, segmentation, keypoints)
Video Annotation         video-annotation         Frame-by-frame annotation with object tracking across frames
Point Cloud Annotation   point-cloud-annotation   3D annotation on LiDAR scans (cuboids, segmentation)
Point Cloud Objects      point-cloud-objects      Object-level annotation in 3D point cloud sequences
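When configuring a project programmatically, the API value column is the string you would pass. Below is a hedged sketch of such a payload; the field names and shape are assumptions for illustration, not Avala's documented request schema.

```python
# Hypothetical project-configuration payload. Only the task_type strings come
# from the table above; the surrounding field names are illustrative.
project = {
    "name": "Urban LiDAR Labeling",
    "task_type": "point-cloud-annotation",  # API value from the table above
    "datasets": ["city-drives"],            # dataset slugs to annotate
}
print(project["task_type"])  # point-cloud-annotation
```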

Project Status

Status             Description
pending-approval   Awaiting approval to start
active             Accepting annotation work
paused             Temporarily halted
canceled           Permanently stopped
archived           Completed and archived
completed          All annotation tasks have been completed

Tasks

A task is an individual work unit within a project. Each task represents annotation work to be done on one or more data items by a single annotator.

Task Lifecycle

Tasks progress through the following states:
pending → assigned → in_progress → submitted → under_review → approved
                                                            → rejected → rework
Status         Description
pending        Created but not yet assigned to an annotator
assigned       Assigned to an annotator, waiting for them to start
in_progress    Annotator is actively working on the task
submitted      Annotator has submitted their work for review
under_review   A reviewer is examining the submitted annotations
approved       Annotations accepted — task is complete
rejected       Annotations did not pass review
rework         Returned to the annotator for corrections
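The lifecycle above can be expressed as an allowed-transition map. The states come from this page; the helper itself is a sketch, and the rework → submitted edge is an assumption (the page says rework returns the task to the annotator but does not name the next state).

```python
# Sketch of the task lifecycle as an allowed-transition map.
TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"in_progress"},
    "in_progress": {"submitted"},
    "submitted": {"under_review"},
    "under_review": {"approved", "rejected"},
    "rejected": {"rework"},
    "rework": {"submitted"},  # assumption: reworked tasks are resubmitted for review
}

def can_transition(current: str, new: str) -> bool:
    # approved is terminal: it has no entry, so every transition from it is rejected.
    return new in TRANSITIONS.get(current, set())
```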

Results

When an annotator completes a task, they submit a result containing:
  • The annotation data (bounding boxes, polygons, cuboids, segmentation masks, etc.)
  • Metadata (time spent, tool versions)
Results go through quality control review before final acceptance.
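An illustrative shape for a submitted result is sketched below. The top-level keys mirror the bullets above, but this is not the documented wire format, and the annotation fields are made-up examples.

```python
# Hypothetical result payload; keys mirror the bullets above, values are examples.
result = {
    "annotations": [
        {"type": "bounding_box", "label": "car", "coords": [100, 120, 260, 300]},
        {"type": "polygon", "label": "pedestrian", "points": [[10, 10], [30, 10], [20, 40]]},
    ],
    "metadata": {
        "time_spent_seconds": 95,   # time spent on the task
        "tool_version": "1.4.2",    # annotation tool version
    },
}
print(sorted(result))  # ['annotations', 'metadata']
```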

Organizations

An organization groups users and resources for team-based collaboration.

Organization Structure

Organization
├── Members (users with roles)
├── Datasets (shared data)
├── Projects (shared workflows)
└── Settings (billing, API keys, permissions)

Member Roles

Role     Capabilities
owner    Full control — billing, settings, can delete the organization
admin    Manage members, create and configure resources
member   Access shared resources, perform annotation work

Labels and Taxonomy

Label Config

Projects define a label config — a set of predefined object classes that annotators assign to annotations:
{
  "labels": [
    { "name": "car", "color": "#FF0000" },
    { "name": "pedestrian", "color": "#00FF00" },
    { "name": "cyclist", "color": "#0000FF" }
  ]
}

Classification

For more complex taxonomies, projects can include classification configs that define:
  • Attributes — Properties like color, occlusion level, or truncation that annotators assign to each object.
  • Hierarchical categories — Nested class structures (e.g., Vehicle > Car > Sedan).
  • Conditional attributes — Attributes that only appear for specific object classes.
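A hedged sketch of a classification config combining all three features is shown below as a Python literal. The structure is illustrative only; the JSON label config above is the only config format this page documents, and Avala's actual classification schema may differ.

```python
# Hypothetical classification config illustrating hierarchical categories,
# attributes, and conditional attributes. Not Avala's documented schema.
classification_config = {
    "categories": [
        # Hierarchical categories: Vehicle > Car > Sedan / SUV
        {"name": "Vehicle", "children": [
            {"name": "Car", "children": [{"name": "Sedan"}, {"name": "SUV"}]},
        ]},
    ],
    "attributes": [
        # Plain attribute: shown for every object.
        {"name": "occlusion", "values": ["none", "partial", "heavy"]},
        # Conditional attribute: only shown when the object class is "Car".
        {"name": "door_count", "values": [2, 4], "applies_to": ["Car"]},
    ],
}
```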

Annotation Types

Avala supports the following annotation types, each designed for specific labeling tasks:
Type             Description                                               Data Types
Bounding Box     2D rectangular region around an object                    Images, Video
Polygon          Arbitrary closed shape tracing object boundaries          Images, Video
3D Cuboid        3D bounding box with position, dimensions, and rotation   Point Clouds, MCAP
Segmentation     Pixel-level classification mask                           Images, Video
Polyline         Open path for lanes, edges, and boundaries                Images, Video
Keypoints        Landmark points for pose estimation and structure         Images, Video
Classification   Scene-level or object-level categorical labels            All data types

Quality Control

Avala provides built-in quality assurance tools to ensure annotation accuracy and consistency.

Reviews

Annotations go through a review stage before acceptance:
  1. Annotator submits their result.
  2. A reviewer examines the annotations.
  3. The reviewer approves correct work or rejects work that needs correction.
  4. Rejected tasks return to the annotator for rework.

Issues

Annotation issues let reviewers flag specific problems on individual annotations:
  • Pin an issue to a specific object or region in the scene.
  • Assign issues to team members for resolution.
  • Track issue status (open, resolved).

Metrics

Monitor annotation quality with built-in metrics:
  • Acceptance rate — Percentage of tasks approved on first submission.
  • Annotation time — Average time spent per task.
  • Inter-annotator agreement — Consistency across annotators on the same data.
  • Issue frequency — Rate of flagged problems per task.
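The acceptance-rate metric is the share of tasks approved on their first submission. As a sketch (the task records here are made-up examples, not Avala data):

```python
# Sketch of the acceptance-rate metric: fraction of tasks approved on the
# first submission. Task records are illustrative.
def acceptance_rate(tasks: list) -> float:
    first_pass = sum(1 for t in tasks if t["approved_on_first_submission"])
    return first_pass / len(tasks)

tasks = [
    {"approved_on_first_submission": True},
    {"approved_on_first_submission": True},
    {"approved_on_first_submission": False},
    {"approved_on_first_submission": True},
]
print(acceptance_rate(tasks))  # 0.75
```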

Consensus

Consensus workflows assign the same data to multiple annotators independently, then compare results to measure agreement and identify ambiguous cases.
Quality metrics help identify training needs and maintain consistent annotation standards across your team.
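One common way to compare two annotators' results in a consensus workflow is intersection-over-union (IoU) between their bounding boxes for the same object. This page does not specify which agreement metrics Avala computes, so the following is a generic sketch, not Avala's implementation.

```python
# Generic IoU between two axis-aligned boxes (x1, y1, x2, y2), a standard
# agreement measure for consensus comparison. Not Avala's specific metric.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333: half-overlapping boxes
```

A low IoU between annotators flags the data item as ambiguous and worth adjudicating.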

Sequences

Sequences are ordered collections of data items that represent temporal or spatial progressions — video frames, LiDAR sweeps, or multi-sensor recordings.

Properties

Property      Description
name          Sequence identifier
frame_count   Number of frames in the sequence
status        Processing status of the sequence
data_type     Inherited from the parent dataset

Status Workflow

Sequences follow this status progression as data is uploaded and processed:
uploading → processing → ready
                       → failed
  • uploading — Frames are being uploaded to the platform.
  • processing — Frames are being validated and prepared for annotation.
  • ready — All frames are processed and available for annotation.
  • failed — Processing encountered an error (check individual frame statuses).
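A client script typically waits for a sequence to reach ready before creating annotation work. The polling loop below is a sketch: get_sequence_status is a stand-in callable, not a real Avala client method.

```python
import time

# Hypothetical polling loop for the status workflow above.
# get_sequence_status is a stand-in that returns the current status string.
def wait_until_ready(get_sequence_status, timeout_s=600, poll_s=5.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_sequence_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("Sequence processing failed; check frame statuses")
        time.sleep(poll_s)  # still uploading or processing; try again
    raise TimeoutError("Sequence not ready in time")
```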

Fleet Management

Fleet Management is in preview. Features described here may change.
Avala’s fleet management capabilities let you manage devices, recordings, and telemetry across robot fleets at scale.

Devices

A device represents a physical robot, sensor rig, or compute unit in your fleet. Each device has a unique dev_ prefixed identifier and tracks metadata like type, firmware version, and status (online, offline, maintenance).

Recordings

Devices produce recordings — MCAP files captured during operation. Recordings are automatically associated with their source device and can be filtered by device, date, status, and tags.

Events

Events are timestamped markers on recordings: errors, state changes, anomalies, and custom annotations. Events appear on the MCAP viewer timeline and can be queried across the fleet.

Recording Rules

Recording rules automatically evaluate recordings against conditions and take actions (tag, flag for review, notify) when matches occur. Rules can trigger on thresholds, patterns, frequencies, or data absence.
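Conceptually, a rule pairs a condition with actions to run on matching recordings. The sketch below illustrates that shape with a threshold condition; the field names and rule structure are assumptions, since Avala's rule schema is not documented on this page.

```python
# Hypothetical recording rule: a condition evaluated per recording, plus the
# actions to take on a match. Field names are illustrative, not Avala's schema.
rule = {
    "name": "high-error-rate",
    "condition": lambda rec: rec["error_count"] > 10,  # threshold trigger
    "actions": ["tag:needs-review", "notify"],
}

def evaluate(rule, recording):
    # Return the actions to perform, or nothing if the condition doesn't match.
    return rule["actions"] if rule["condition"](recording) else []

print(evaluate(rule, {"error_count": 25}))  # ['tag:needs-review', 'notify']
```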

Alerts

Alerts notify your team when fleet conditions change. Route alerts to Slack, email, webhooks, or in-app notifications. Alerts follow a lifecycle: open → acknowledged → resolved.

Next Steps

Data Types

Supported formats, visualization capabilities, and annotation tools for each data type.

Annotation

Learn the web interface for visualization, annotation, and project management.

Architecture

How the Avala platform components fit together, including the visualization engine.

API Authentication

Set up API keys and start making authenticated requests.

Fleet Dashboard

Manage devices, recordings, and telemetry across your robot fleet.