Avala is the platform where robotics, autonomous vehicle, and physical AI teams visualize, explore, and annotate their sensor data — all in one place. Upload MCAP recordings, LiDAR scans, camera feeds, or Gaussian Splat scenes. Play them back in a GPU-accelerated multi-sensor viewer with synchronized timelines and configurable panel layouts. When you are ready to label, switch to annotation mode on the same data — no exports, no tool-switching, no re-uploads. The platform handles the full lifecycle from raw sensor data through labeled training datasets.

Who Uses Avala

  • Autonomous Vehicle Teams — Label camera images, LiDAR point clouds, and synchronized multi-sensor recordings for perception model training. Visualize and debug MCAP/ROS data with multi-camera projection.
  • Robotics Companies — Annotate perception data for navigation, manipulation, and scene understanding. Explore 3D point clouds with GPU-accelerated rendering.
  • Physical AI / Spatial Computing Teams — Work with Gaussian Splat scenes, dense point clouds, and multi-modal sensor data for 3D world understanding and sim-to-real transfer.
  • AI/ML Teams — Create training datasets for object detection, segmentation, classification, and tracking across images, video, and 3D data.
  • Research Labs — Build labeled datasets for computer vision and 3D perception research with professional annotation tools and quality control workflows.

Platform Capabilities

Visualization

Avala’s visualization engine runs entirely in the browser, powered by WebGPU and WebGL.
  • Multi-sensor MCAP/ROS playback — Open MCAP files containing camera, LiDAR, radar, and IMU data. The viewer auto-detects topics and assigns each to one of 8 panel types: Image, 3D / Point Cloud, Plot, Raw Messages, Log, Map, Gauge, and State Transitions.
  • GPU-accelerated 3D point cloud rendering — Render point clouds with 6 visualization modes: Neutral, Intensity, Rainbow, Label, Panoptic, and Image Projection. WebGPU compute shaders handle frustum culling and level-of-detail selection on the GPU.
  • Gaussian Splat viewer — Inspect 3D scene reconstructions in a WebGPU-accelerated Gaussian Splat viewer with scene hierarchy, properties panel, and statistics overlay.
  • Multi-camera synchronized playback — View multiple camera streams in sync with LiDAR-to-camera projection overlays. Supports pinhole and double-sphere (fisheye) camera models.
  • Configurable multi-window layouts — Drag-and-drop panel arrangement with resizable split views. The default layout places a topics sidebar, content panels, and a file info panel in a horizontal root configuration.
  • Timeline-based navigation — Frame stepping, timestamp seeking, and playback speed control across all synchronized sensor streams.
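
The LiDAR-to-camera projection overlay mentioned above comes down to applying a camera model to 3D points in the camera frame. A minimal sketch of the pinhole case, using made-up intrinsics and not Avala's actual implementation:

```python
import numpy as np

def project_pinhole(points_cam, fx, fy, cx, cy):
    """Project 3D points (N, 3) in the camera frame to pixel coordinates.

    Points behind the camera (z <= 0) are dropped first, as any
    LiDAR-to-camera overlay must do before drawing.
    """
    points_cam = np.asarray(points_cam, dtype=float)
    in_front = points_cam[:, 2] > 0
    x, y, z = points_cam[in_front].T
    u = fx * x / z + cx  # horizontal pixel coordinate
    v = fy * y / z + cy  # vertical pixel coordinate
    return np.stack([u, v], axis=1)

# Intrinsics are illustrative only; the third point is behind the camera.
pts = [[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 0.0, -1.0]]
uv = project_pinhole(pts, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# uv contains two pixel positions; the behind-camera point is culled.
```

The double-sphere (fisheye) model follows the same pattern with a different projection function that accounts for wide-angle lens distortion.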

Annotation

Professional annotation tools for every data modality, backed by quality control and team workflows.
  • Bounding Boxes — 2D rectangular regions for object detection
  • Polygons — Arbitrary shapes for precise object boundaries
  • 3D Cuboids — 3D bounding boxes in point cloud and multi-sensor data with bird’s-eye, perspective, and side views
  • Segmentation — Pixel-level classification masks
  • Polylines — Path, lane, and edge annotations
  • Keypoints — Landmark and pose annotations
  • Classification — Scene-level and object-level attribute labels
Quality control includes multi-stage review workflows, annotation issue tracking, inter-annotator agreement metrics, and consensus workflows. Object tracking provides consistent IDs across video and sequence frames. Managed labeling services are available for teams that need professional annotators trained on their domain.
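
For box annotations, inter-annotator agreement is commonly scored with intersection-over-union (IoU): two annotators label the same object, and an IoU near 1.0 means high agreement. A minimal sketch of the metric (illustrative only, not Avala's exact agreement computation):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    # Overlap extents; clamped to zero when the boxes are disjoint.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1])
             - inter)
    return inter / union if union > 0 else 0.0

# Two annotators label the same object with slightly different boxes.
annotator_a = (10, 10, 50, 50)
annotator_b = (12, 10, 50, 52)
score = box_iou(annotator_a, annotator_b)  # close to 1.0: strong agreement
```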

Supported Data Types

Avala handles five data modalities, each with purpose-built visualization and annotation workflows:
Data Type | Formats | Description
--- | --- | ---
Images | JPEG, PNG, WebP | Single-frame visualization and annotation with all 2D tools
Video | MP4, MOV | Converted to frame sequences for playback, frame-by-frame annotation, and object tracking
Point Clouds | PCD, PLY | 3D LiDAR scans with GPU-accelerated rendering and cuboid annotation
MCAP / ROS | MCAP | Multi-sensor container with camera, LiDAR, radar, and IMU data; multi-panel playback and multi-camera projection
Splat | Gaussian Splat | 3D scene visualization and annotation in WebGPU-rendered Gaussian Splat environments
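
PLY, one of the point cloud formats above, is a simple self-describing format: a text header declares vertex count and per-vertex properties, followed by the data. A tiny parser for the ASCII variant, shown only to illustrate the structure (real loaders also handle the binary variants and many more property types):

```python
def parse_ascii_ply(text):
    """Parse a minimal ASCII PLY point cloud: header, then one vertex per line.

    Handles only 'element vertex N' with float properties — enough to show
    the shape of the format, not a general-purpose loader.
    """
    lines = text.strip().splitlines()
    assert lines[0] == "ply"
    n_vertices, props = 0, []
    i = 1
    while lines[i] != "end_header":
        parts = lines[i].split()
        if parts[0] == "element" and parts[1] == "vertex":
            n_vertices = int(parts[2])        # element vertex <count>
        elif parts[0] == "property":
            props.append(parts[2])            # property <type> <name>
        i += 1
    body = lines[i + 1 : i + 1 + n_vertices]
    return [dict(zip(props, map(float, ln.split()))) for ln in body]

sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 2.0 3.0
"""
cloud = parse_ascii_ply(sample)  # two points with x, y, z fields
```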

SDKs

Python SDK

Install with pip install avala — full type hints and async support.

TypeScript SDK

Install with npm install @avala-ai/sdk — works in Node.js and browsers.

Explore the Platform

Visualization

GPU-accelerated multi-sensor viewer with 8 panel types, 6 point cloud rendering modes, and Gaussian Splat support.

Annotation

Professional annotation tools for 2D, 3D, video, and multi-sensor data with quality control.

Integrations

Connect with S3, MCP, MCAP/ROS, webhooks, and inference pipelines.

Next Steps

Quickstart

Create your first annotation project in under 60 seconds.

Core Concepts

Understand datasets, projects, tasks, and the annotation lifecycle.

Visualization

Explore the multi-sensor viewer, 3D point cloud renderer, and Gaussian Splat viewer.

SDKs

Install the Python or TypeScript SDK and start building.