Understand the core concepts of the Avala platform
This page covers the building blocks of the Avala platform: visualization, annotation, datasets, projects, tasks, organizations, labels, quality control, sequences, and fleet operations (devices, recordings, events, recording rules, and alerts). Understanding these concepts will help you design effective data workflows for physical AI.
Avala includes specialized viewers for different data types. The multi-sensor viewer handles MCAP and ROS recordings with synchronized playback across all sensor streams. The 3D point cloud viewer renders LiDAR data with six visualization modes. The Gaussian Splat viewer renders photorealistic 3D scene reconstructions using WebGPU.
Multi-window layouts arrange panels in a configurable grid. The layout composer automatically builds optimized arrangements based on the topics in your data, or you can customize the layout manually by dragging, resizing, and rearranging panels.
All panels in a viewer share a synchronized timeline. Navigate frame-by-frame, scrub to specific timestamps, or play back recordings at configurable speeds. The timeline keeps all sensor streams aligned regardless of their individual capture frequencies.
MCAP recordings contain multiple sensor streams (topics). Each topic carries a specific data type — images, point clouds, IMU readings, GPS coordinates — at its own frequency. Avala synchronizes all streams by timestamp so you can see the full sensor picture at any moment in time.
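A minimal sketch of the synchronization idea, using hypothetical topic data rather than Avala's actual viewer internals: for a requested timestamp, each topic contributes its latest message at or before that time, so streams captured at different frequencies line up on one timeline.

```python
from bisect import bisect_right

# Hypothetical topic streams: topic name -> sorted (timestamp, message)
# pairs, each captured at its own frequency.
topics = {
    "/camera/front": [(0, "img0"), (100, "img1"), (200, "img2")],
    "/lidar/points": [(0, "pc0"), (50, "pc1"), (100, "pc2"), (150, "pc3")],
    "/imu": [(0, "imu0"), (10, "imu1"), (20, "imu2"), (30, "imu3")],
}

def frame_at(topics, t):
    """Return, per topic, the latest message at or before time t."""
    frame = {}
    for name, messages in topics.items():
        stamps = [ts for ts, _ in messages]
        i = bisect_right(stamps, t) - 1  # last message not after t
        if i >= 0:
            frame[name] = messages[i][1]
    return frame

print(frame_at(topics, 120))
# → {'/camera/front': 'img1', '/lidar/points': 'pc2', '/imu': 'imu3'}
```

At t=120 the camera contributes its 100 ms frame while the faster IMU contributes its 30 ms sample: every stream is represented by its most recent data, regardless of capture rate.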
A dataset is a collection of data items (images, video frames, point clouds, or multi-sensor recordings) that serve as the raw material for visualization and annotation.
A project defines an annotation workflow by connecting one or more datasets to a specific task type, label taxonomy, and quality control configuration.
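The pieces a project ties together can be sketched as a simple record; the field names below are illustrative, not the Avala API:

```python
from dataclasses import dataclass

@dataclass
class Project:
    """Illustrative project definition: datasets + task type + labels + QC."""
    name: str
    dataset_ids: list[str]     # datasets supplying the raw data items
    task_type: str             # e.g. "3d_bounding_box", "segmentation"
    labels: list[str]          # label taxonomy for this workflow
    review_sample_rate: float = 0.1  # fraction of tasks routed to QC review

project = Project(
    name="pedestrian-detection",
    dataset_ids=["ds_urban_day", "ds_urban_night"],
    task_type="3d_bounding_box",
    labels=["pedestrian", "cyclist", "vehicle"],
)
print(project.task_type)
```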
Sequences are ordered collections of data items that represent temporal or spatial progressions — video frames, LiDAR sweeps, or multi-sensor recordings.
A device represents a physical robot, sensor rig, or compute unit in your fleet. Each device has a unique identifier with the dev_ prefix and tracks metadata such as type, firmware version, and status (online, offline, or maintenance).
Devices produce recordings — MCAP files captured during operation. Recordings are automatically associated with their source device and can be filtered by device, date, status, and tags.
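A hedged sketch of the device-to-recording association and the filtering described above, with hypothetical records and field names (not Avala's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Recording:
    recording_id: str
    device_id: str   # dev_-prefixed source device
    date: str
    status: str      # e.g. "processed", "uploading"
    tags: list[str]

recordings = [
    Recording("rec_001", "dev_alpha", "2024-05-01", "processed", ["night"]),
    Recording("rec_002", "dev_beta", "2024-05-01", "processed", ["rain"]),
    Recording("rec_003", "dev_alpha", "2024-05-02", "uploading", ["rain", "night"]),
]

def filter_recordings(recs, device_id=None, tag=None):
    """Filter a fleet's recordings by source device and/or tag."""
    return [
        r for r in recs
        if (device_id is None or r.device_id == device_id)
        and (tag is None or tag in r.tags)
    ]

print([r.recording_id for r in filter_recordings(recordings, device_id="dev_alpha")])
# → ['rec_001', 'rec_003']
```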
Events are timestamped markers on recordings: errors, state changes, anomalies, and custom annotations. Events appear on the MCAP viewer timeline and can be queried across the fleet.
Recording rules automatically evaluate recordings against conditions and take actions (tag, flag for review, notify) when matches occur. Rules can trigger on thresholds, patterns, frequencies, or data absence.
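The condition-to-action shape of a rule can be sketched as follows; this is an illustrative evaluation loop over made-up metrics, not Avala's actual rule engine:

```python
# Each hypothetical rule pairs a condition over recording metrics
# with an action to take when the condition matches.
rules = [
    {"name": "high-latency",
     "condition": lambda m: m.get("max_latency_ms", 0) > 250,  # threshold rule
     "action": "flag_for_review"},
    {"name": "missing-gps",
     "condition": lambda m: m.get("gps_msg_count", 0) == 0,    # data-absence rule
     "action": "tag:no-gps"},
]

def evaluate(rules, metrics):
    """Return the actions triggered by a recording's metrics."""
    return [r["action"] for r in rules if r["condition"](metrics)]

print(evaluate(rules, {"max_latency_ms": 400, "gps_msg_count": 0}))
# → ['flag_for_review', 'tag:no-gps']
```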
Alerts notify your team when fleet conditions change. Route alerts to Slack, email, webhooks, or in-app notifications. Alerts follow a lifecycle: open → acknowledged → resolved.
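The lifecycle above is a one-way progression; a minimal sketch of enforcing it (the transition table and function are illustrative, not an Avala API):

```python
# Allowed forward transitions in the alert lifecycle.
TRANSITIONS = {
    "open": {"acknowledged"},
    "acknowledged": {"resolved"},
    "resolved": set(),  # terminal state
}

def advance(state, new_state):
    """Move an alert to new_state, rejecting invalid transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

state = "open"
state = advance(state, "acknowledged")
state = advance(state, "resolved")
print(state)
# → resolved
```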