This tutorial covers using AI-powered tools in Avala to speed up your annotation workflow, including SAM segmentation and model-based auto-labeling.
## What You’ll Learn
- How to use SAM (Segment Anything Model) for click-to-segment annotation
- How to set up auto-labeling with connected models
- How to review and refine AI predictions
## What AI-Assisted Annotation Offers

Avala integrates AI models directly into the annotation workflow to reduce manual effort:

- SAM Segmentation: Click on an object to instantly generate a precise segmentation mask
- Auto-Labeling: Run connected ML models to pre-annotate entire datasets
- Smart Suggestions: Get label recommendations based on object appearance
## SAM Segmentation

### What Is SAM?

SAM (Segment Anything Model) is an interactive segmentation model built into Avala. It generates pixel-precise object masks from simple click or box prompts.

### Using SAM in the Viewer
1. Open an image or video frame in the annotation viewer
2. Select the SAM tool from the toolbar (or press `S`)
3. Click on the object you want to segment
4. SAM generates a mask around the object
5. Refine the mask if needed:
   - Add region: Click on areas that should be included
   - Remove region: Hold `Alt` and click on areas to exclude
   - Box prompt: Draw a bounding box around the object for a more targeted prediction
6. When satisfied, press `Enter` or click Accept to convert the mask into an annotation
7. Assign a label from the dropdown
### SAM Tips
- Click near the center of the object for the best initial prediction
- Use box prompts for objects that are close together or have ambiguous boundaries
- Combine positive and negative clicks to refine edges around complex shapes
- Works best on distinct objects: SAM excels on objects that stand out from their background
- Frame-by-frame: In video mode, run SAM on individual frames and use tracking to propagate
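The add/remove clicks described above correspond to SAM's point-prompt convention: each click is an (x, y) coordinate paired with a label of 1 (include) or 0 (exclude). A minimal sketch of how such a prompt could be assembled; the helper name and data shapes are illustrative, not part of Avala:

```python
# Sketch: encode positive/negative clicks as a SAM-style point prompt.
# Labels follow SAM's convention: 1 = include this region, 0 = exclude it.

def build_point_prompt(positive_clicks, negative_clicks):
    """Combine include/exclude clicks into parallel coordinate and label lists."""
    coords = list(positive_clicks) + list(negative_clicks)
    labels = [1] * len(positive_clicks) + [0] * len(negative_clicks)
    return coords, labels

# One click near the object's center, one Alt-click on background
# that bled into the mask.
coords, labels = build_point_prompt(
    positive_clicks=[(320, 240)],
    negative_clicks=[(40, 50)],
)
print(coords, labels)  # [(320, 240), (40, 50)] [1, 0]
```

Each refinement click simply extends these two lists, and the model re-predicts the mask from the full prompt.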
## Auto-Labeling with Connected Models

### Prerequisites

Before using auto-labeling, you need:

- A trained model or a model endpoint connected to your Avala organization
- An inference integration configured in Settings → Integrations → Inference
- A project with labels that match the model’s output classes
### Triggering Auto-Label Predictions
1. Navigate to your project in Avala
2. Go to Sequences or Items
3. Select the items you want to auto-label:
   - Single item: Click the item, then click Auto-Label in the toolbar
   - Batch: Select multiple items with checkboxes, then click Auto-Label
   - Full dataset: Use Auto-Label All from the project actions menu
4. Choose the model from the dropdown
5. Set a confidence threshold (predictions below this threshold are discarded)
6. Click Run
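The confidence threshold acts as a simple filter over the model's scores. A minimal sketch of the filtering logic; the prediction format here is illustrative, not Avala's actual payload:

```python
# Sketch: discard predictions whose confidence falls below the chosen threshold.
# The prediction dicts are illustrative, not Avala's actual data format.

def apply_confidence_threshold(predictions, threshold):
    """Keep only predictions at or above the confidence threshold."""
    return [p for p in predictions if p["confidence"] >= threshold]

predictions = [
    {"label": "car", "confidence": 0.92},
    {"label": "pedestrian", "confidence": 0.41},
    {"label": "bicycle", "confidence": 0.78},
]

kept = apply_confidence_threshold(predictions, threshold=0.5)
print([p["label"] for p in kept])  # ['car', 'bicycle']
```

Raising the threshold trades missed objects for fewer spurious boxes, which is why the tips table below suggests tuning it per model.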
### Monitoring Auto-Label Progress
- A progress bar shows the status of the auto-labeling job
- Results appear on each item as they complete
- Check the Activity panel for job status and any errors
## Reviewing AI Predictions

Regardless of whether predictions come from SAM or auto-labeling, they must be reviewed.

### Review Workflow
1. Open an item that has AI-generated predictions
2. Predictions are displayed with a visual indicator (dashed outline or distinct color) to distinguish them from human annotations
3. For each prediction:
   - Accept: Click the prediction and press `Enter` or click Accept to confirm it
   - Modify: Adjust the position, size, or label before accepting
   - Reject: Press `Delete` or click Reject to remove the prediction
4. Save your reviewed annotations
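The accept/modify/reject choices amount to a small per-prediction decision step. A hedged sketch of that logic; the decision and prediction shapes are illustrative, not Avala's data model:

```python
# Sketch: apply reviewer decisions to a batch of AI-generated predictions.
# "accept" keeps a prediction as-is, "modify" applies edits first,
# "reject" drops it. Data shapes are illustrative.

def review_predictions(predictions, decisions):
    """Return the reviewed annotation list after applying each decision."""
    reviewed = []
    for pred, decision in zip(predictions, decisions):
        action = decision["action"]
        if action == "accept":
            reviewed.append(pred)
        elif action == "modify":
            reviewed.append({**pred, **decision.get("edits", {})})
        # action == "reject": the prediction is dropped
    return reviewed

preds = [
    {"label": "car", "box": (10, 10, 50, 40)},
    {"label": "car", "box": (200, 80, 240, 120)},
    {"label": "tree", "box": (5, 5, 8, 8)},
]
decisions = [
    {"action": "accept"},
    {"action": "modify", "edits": {"label": "truck"}},
    {"action": "reject"},
]
print(review_predictions(preds, decisions))
```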
### Bulk Review

For large batches of auto-labeled data:

- Go to the project Review tab
- Filter by Source: Auto-Label to see only AI-generated annotations
- Use the review controls to accept or reject predictions per item
- Track review progress in the project dashboard
### Quality Checks

After reviewing AI predictions:

- Verify label accuracy, especially for ambiguous objects
- Check boundary precision on segmentation masks
- Ensure no objects were missed (false negatives)
- Confirm that no background was incorrectly labeled (false positives)
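On a spot-check sample with known ground truth, the false-negative and false-positive checks above can be quantified by matching predicted boxes to ground-truth boxes at an IoU threshold. A minimal sketch; the box format and 0.5 threshold are common conventions, not Avala specifics:

```python
# Sketch: count missed objects (false negatives) and spurious detections
# (false positives) via greedy IoU matching. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_errors(predicted, ground_truth, iou_thresh=0.5):
    """Return (false_positives, false_negatives) after greedy matching."""
    unmatched_gt = list(ground_truth)
    false_positives = 0
    for p in predicted:
        best = max(unmatched_gt, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= iou_thresh:
            unmatched_gt.remove(best)  # matched: true positive
        else:
            false_positives += 1       # no overlapping ground truth box
    return false_positives, len(unmatched_gt)

preds = [(0, 0, 10, 10), (100, 100, 110, 110)]
truth = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(count_errors(preds, truth))  # (1, 1): one spurious box, one missed object
```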
## Tips for Best Results
| Tip | Why It Helps |
|---|---|
| Use high-quality training data for auto-label models | Better model input produces better predictions |
| Set confidence thresholds appropriately | Higher thresholds reduce false positives; lower thresholds reduce missed objects |
| Review a sample first | Check a small batch before auto-labeling the full dataset to gauge model quality |
| Combine SAM with auto-labeling | Use auto-labeling for detection, then SAM to refine boundaries |
| Iterate on your model | Export reviewed annotations and retrain for improved auto-labeling over time |
## Next Steps
- Set up an Inference Integration to connect your models
- Learn about Annotation Tools for manual refinement
- Explore Quality Control to manage review workflows