📖

Documentation

Complete API reference, SDK guides, and platform documentation for developers integrating AnnotRift into their ML pipelines.

View docs →
📋

Guides & Tutorials

Step-by-step guides for setting up annotation projects, designing ontologies, and optimizing quality workflows.

Browse guides →

Case Studies

Real-world examples of how enterprise AI teams use AnnotRift to accelerate their development pipelines.

Read case studies →
Popular Guides

Get started quickly

Getting Started

Your First Annotation Project

A complete walkthrough of creating your first labeling project — from data upload to quality review and export.

10 min read • Updated Jan 2026
RLHF

Designing Effective Preference Tasks

Best practices for creating preference ranking tasks that produce high-quality RLHF data for alignment training.

15 min read • Updated Feb 2026
Quality

Building Custom Quality Pipelines

How to configure multi-stage quality assurance workflows with consensus scoring, spot checks, and automated validation.

12 min read • Updated Jan 2026
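As a taste of what that guide covers, a multi-stage pipeline might be configured through the Python SDK roughly as follows. This is a hypothetical sketch: the quality_pipeline parameter, the stage names, and their fields are invented for illustration and are not confirmed SDK API.

# Hypothetical multi-stage QA configuration (quality_pipeline and its
# stage/field names are illustrative assumptions, not confirmed SDK API)
import annotrift

client = annotrift.Client(api_key="ar_...")

project = client.projects.create(
    name="Sentiment Labels v1",
    type="classification",
    quality_pipeline=[
        # Stage 1: consensus scoring across several annotators per item
        {"stage": "consensus", "annotators_per_item": 3, "agreement_threshold": 0.8},
        # Stage 2: reviewers audit a random sample of accepted labels
        {"stage": "spot_check", "sample_rate": 0.05},
        # Stage 3: automated validation rejects malformed submissions
        {"stage": "automated_validation", "rules": ["schema_check"]},
    ],
)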
API

Python SDK Quick Start

Install the AnnotRift Python SDK, then programmatically create projects, upload data, and retrieve annotations.

8 min read • Updated Mar 2026
Evaluation

Custom Model Evaluation Frameworks

Design and deploy custom evaluation rubrics with expert human judges to measure real-world model performance.

14 min read • Updated Feb 2026
Integration

Connecting to Your ML Pipeline

Integrate AnnotRift with AWS SageMaker, Google Vertex AI, Azure ML, Databricks, and other ML platforms.

11 min read • Updated Jan 2026
Case Studies

How teams use AnnotRift

🏥

How MedTech AI Built a Clinical NLP Dataset in 3 Weeks

200K validated medical QA pairs generated through our synthetic data pipeline with 96% expert-verified accuracy.

Healthcare • 5 min read
🚗

AutoDrive Systems: 99.6% Accuracy on 3D Point Cloud Annotation

How a leading AV company achieved production-grade LiDAR annotation quality at 10x their previous throughput.

Autonomous Vehicles • 7 min read
🤖

Frontier Lab Improves Safety Metrics 34% with RLHF Data

A top AI research lab used AnnotRift's expert annotators to generate alignment data that measurably improved model safety.

AI Safety • 6 min read
Developer Resources

Build with our API

Programmatic access to all platform capabilities. Create projects, upload data, manage workflows, and retrieve annotations through our RESTful API and Python SDK.

  • RESTful API with comprehensive OpenAPI documentation
  • Python SDK with type hints and async support
  • Webhook notifications for real-time status updates
  • Batch operations for high-volume workflows
  • Sandbox environment for testing and development
View API docs →
# Create a labeling project
import annotrift

client = annotrift.Client(api_key="ar_...")

project = client.projects.create(
    name="Object Detection v2",
    type="bounding_box",
    quality="premium",
)

# Upload data and start annotation
project.upload(source="s3://...")
project.start()
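Once a project is running, results come back either by webhook push or by polling and exporting, as the feature list above suggests. The sketch below shows both paths; the webhooks.create, projects.get, and project.export calls and the "coco" format flag are assumed names for illustration, not confirmed SDK API.

# Retrieve results once labeling is complete (method names below are
# illustrative assumptions, not confirmed SDK API)
import annotrift

client = annotrift.Client(api_key="ar_...")

# Push: register a webhook so the platform notifies your service
client.webhooks.create(
    url="https://example.com/hooks/annotrift",  # hypothetical receiver endpoint
    events=["project.completed"],
)

# Pull: check status and export annotations in a standard format
project = client.projects.get("proj_...")  # hypothetical project ID
if project.status == "completed":
    with open("annotations.json", "wb") as f:
        f.write(project.export(format="coco"))  # assumed export option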
Webinars & Events

Learn from the experts

Upcoming

Scaling RLHF: Lessons from 100M Preference Pairs

Join our research team for a deep dive into scaling preference data collection while maintaining quality.

April 15, 2026 • 11:00 AM PT
On-demand

Building Custom Evaluation Frameworks

A practical workshop on designing evaluation rubrics that capture the capabilities that matter most.

Recorded • 45 min
On-demand

From Raw Data to Production: The Complete Pipeline

End-to-end walkthrough of building a production annotation pipeline with quality controls and automation.

Recorded • 60 min

Ready to get started?

Create your account and launch your first project in minutes.

Start your project