Case Study: Enhancing You.com’s App Relevance Engine with Human-in-the-Loop Annotation
Service: Relevance Annotation
Executive Summary
Our team was engaged to improve the accuracy of app ranking models through expert annotation. We delivered reliable training data that enabled You.com to iterate faster and develop their models with greater confidence.
Client Background
An AI-powered search company, You.com blends large language models with third-party applications to deliver context-aware results that go beyond what traditional search engines offer.
Challenge
The project required domain-aware annotation of app relevance across diverse queries. This work demanded nuanced human judgment to distinguish truly useful results from tangentially related ones, a distinction that automated systems could not reliably capture.
Solution
We designed a structured annotation framework for search relevance:
Clear multi-level relevance scale for consistent judgment (illustrated in the sketch after this list)
Trained annotators across diverse queries and apps
Quality control workflows for reliability
Results formatted for immediate use by ML teams
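For illustration only, the sketch below shows what a single annotation record on a graded 0-3 relevance scale might look like when exported for a ranking-model training pipeline. The field names, the four-point scale, and the example query are assumptions made for this sketch; the actual schema and scale used on the project are not detailed in this case study.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical 0-3 graded relevance scale; the real scale used on the
# You.com project is not specified in this case study.
RELEVANCE_LABELS = {
    0: "not relevant",          # app has no bearing on the query
    1: "tangentially related",  # topically adjacent but not useful
    2: "relevant",              # useful for the query
    3: "highly relevant",       # directly satisfies the query intent
}

@dataclass
class AppRelevanceAnnotation:
    """One annotator's judgment of an app's relevance to a query."""
    query: str
    app_id: str
    relevance: int       # key into RELEVANCE_LABELS
    annotator_id: str
    rationale: str = ""  # optional free-text justification

# Example record, ready to serialize for an ML training pipeline.
record = AppRelevanceAnnotation(
    query="convert 5 miles to kilometers",
    app_id="unit_converter",
    relevance=3,
    annotator_id="ann_07",
    rationale="App performs the exact conversion requested.",
)
print(json.dumps(asdict(record), indent=2))
```

Tying each judgment to a query, an app, an annotator, and an explicit label keeps the data directly consumable by ranking-model training and evaluation code.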
Implementation
Human-in-the-loop methodology complemented automated pipelines
Guideline calibration and feedback cycles ensured client alignment (one common agreement check is sketched after this list)
Repeatable workflow established for future annotation needs
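One standard way to run calibration and quality-control cycles like these is to have annotators label a shared set of queries and measure inter-annotator agreement. The sketch below computes Cohen's kappa for two annotators; it is a generic illustration rather than the specific quality-control method used on this engagement, and the sample labels are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both pick the same label independently.
    expected = sum((freq_a[k] / n) * (freq_b[k] / n) for k in set(freq_a) | set(freq_b))
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Relevance labels (0-3 scale) from two annotators on the same calibration queries.
annotator_1 = [3, 2, 0, 1, 3, 2, 0, 0, 1, 3]
annotator_2 = [3, 2, 0, 2, 3, 2, 0, 1, 1, 3]

print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")
```

A low kappa on the calibration set would prompt a guideline revision and another feedback round before full-scale annotation proceeds.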
Results
The annotated datasets improved ranking model reliability and supported ongoing development. ML engineers noted that the work accelerated their ability to test and refine outputs.
