ML Engineer - Evaluation Analysis, Metric and Data Strategy
Apple
Software Engineering, Data Science
Culver City, CA, USA
USD 139,500-258,100 / year + Equity
Posted on Apr 22, 2026
The Productivity and Machine Learning Evaluation team ensures the quality of AI-powered features across a suite of productivity and creative applications, including Creator Studio, which is used by hundreds of millions of people. The team serves as the primary evaluation function, and its analysis directly informs decisions about model development, feature launches, and product direction. This role is the analytical core of the team, responsible for making sense of evaluation signals and real-world user behavior. The work involves designing feature-level quality metrics, collaborating with partner teams on data collection strategies, and translating evaluation data into concise, actionable insights that drive decisions. This is an opportunity to define how AI feature quality is measured and to directly shape what gets shipped. As AI features evolve into multi-turn, agentic experiences, this role will define what “quality” means when the unit of evaluation is a conversation, not a single response.
Day-to-day work involves analyzing evaluation results to identify trends, regressions, and segment-level patterns across multiple AI features, collaborating with partner teams to ensure evaluation data is representative of real-world usage, and designing the metrics framework that leadership uses to make decisions on AI features. Typical deliverables include: feature-level quality metrics and dashboards, evaluation analysis reports, data collection requirements, dataset representativeness audits, multi-turn evaluation frameworks and session-level scoring rubrics (sketched below), and concise metric summaries for decision-makers.
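To make the session-level deliverables concrete, here is a minimal sketch of how turn-level quality scores might roll up into session metrics. The column names, scores, and aggregation choices are hypothetical illustrations, not a description of the team's actual tooling.

```python
# Minimal sketch: aggregating hypothetical turn-level quality scores
# into session-level metrics, and checking whether quality holds up
# as conversations get longer. All data here is illustrative.
import pandas as pd

turns = pd.DataFrame({
    "session_id": ["a", "a", "a", "b", "b", "c"],
    "turn_index": [1, 2, 3, 1, 2, 1],
    "quality_score": [0.90, 0.80, 0.60, 0.95, 0.90, 0.70],  # e.g. rubric-graded 0-1
})

# Session-level metrics: the worst turn often matters as much as the
# mean, since one bad turn can derail an agentic session.
sessions = turns.groupby("session_id")["quality_score"].agg(
    mean_quality="mean", min_quality="min", n_turns="count"
)

# Degradation check: average quality by turn position across sessions.
by_position = turns.groupby("turn_index")["quality_score"].mean()

print(sessions)
print(by_position)
```

Reporting both a mean and a minimum per session is one common design choice when the unit of evaluation is a conversation rather than a single response; which aggregate becomes the north-star metric is exactly the kind of decision this role owns.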
- Define and own the quality metrics framework across AI features and agentic experiences, ensuring each feature has a clear north-star metric and supporting diagnostics
- Analyze evaluation outputs to identify quality trends, regressions, and segment-level patterns across both single-turn and multi-turn interactions, tracking how quality degrades or holds over extended conversations (see the sketch following this list)
- Drive the data collection strategy with partner teams
- Ensure evaluation data stays grounded in real-world user behavior
- Audit evaluation data representativeness to verify that datasets reflect actual user distributions
- Assess alignment across different evaluation methods, identifying where they agree, where they diverge, and why
- Deliver concise, decision-ready metric summaries to leadership, translating detailed analysis into clear quality assessments and recommendations
- Influence model development direction by providing actionable feedback on specific failure patterns and data gaps
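As an illustration of the regression analysis named above, the sketch below compares per-example quality scores from a baseline and a candidate model using a significance test and an effect size. The data is synthetic, and any pass/fail threshold is a matter of team policy, not fixed here.

```python
# Sketch: flagging a quality regression between two model versions,
# assuming per-example quality scores (synthetic data for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(0.80, 0.10, size=500)   # stand-in eval scores
candidate = rng.normal(0.77, 0.10, size=500)

# Welch's t-test: is the difference statistically significant?
t_stat, p_value = stats.ttest_ind(candidate, baseline, equal_var=False)

# Cohen's d (pooled-SD approximation): is the difference large enough
# to matter in practice?
pooled_sd = np.sqrt((baseline.std(ddof=1) ** 2 + candidate.std(ddof=1) ** 2) / 2)
cohens_d = (candidate.mean() - baseline.mean()) / pooled_sd

print(f"p={p_value:.4f}, d={cohens_d:.2f}")
# A decision-ready summary pairs both: statistical significance without
# a meaningful effect size (or vice versa) is not enough on its own to
# block or clear a launch.
```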
- Bachelor’s degree in Statistics, Data Science, Applied Mathematics, Computer Science, or a related quantitative field
- 5+ years of experience in applied science, data science, or evaluation research, with a focus on defining and operationalizing quality metrics
- Experience with statistical analysis methods including significance testing, sampling design, effect size estimation, and experimental design
- Experience working with production user data and an understanding of its biases and limitations relative to controlled evaluation data, including familiarity with sequential interaction data in which context and turn order affect quality assessment
- Ability to design evaluation approaches where the unit of analysis is a session or conversation rather than a single model output
- Track record of independently designing metrics frameworks and driving data-informed decisions across cross-functional teams
- Proficiency in Python (pandas, scipy, scikit-learn) or R for data analysis and visualization
- Experience designing evaluation or quality metrics for AI-powered or ML-driven features in consumer-facing products
- Familiarity with productivity software or creative applications, with an ability to distinguish between technically correct and genuinely useful AI outputs
- Experience partnering with engineering or data teams to define data collection requirements and schemas
- Track record of translating complex analytical findings into concise recommendations for non-technical decision-makers
- Experience evaluating tool-use accuracy, retrieval quality, or function-calling reliability within AI systems
- Experience with evaluation methodology, including inter-annotator agreement (see the sketch following this list), evaluation bias detection, and dataset representativeness auditing
- Familiarity with agentic orchestration frameworks (LangChain, LangGraph, CrewAI, AutoGen) and emerging agent interoperability protocols (A2A, MCP), with an understanding of how architectural choices in agent design affect evaluability
- Understanding of ML model development processes, with the ability to specify what evaluation signals are useful for model improvement
- Experience managing evaluation across multiple features or product areas simultaneously, with systematic rather than ad-hoc approaches
- Graduate degree in a relevant quantitative field
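As a small illustration of the inter-annotator agreement work referenced above, the sketch below computes Cohen's kappa over a shared grading set; the labels are invented for the example.

```python
# Sketch: measuring inter-annotator agreement on a shared grading set
# with Cohen's kappa (labels here are made up for illustration).
from sklearn.metrics import cohen_kappa_score

annotator_a = ["good", "good", "bad", "good", "bad", "good"]
annotator_b = ["good", "bad",  "bad", "good", "bad", "good"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"kappa={kappa:.2f}")  # ~0.0 is chance-level, 1.0 is perfect agreement
# Low kappa usually points to an ambiguous rubric before it points to
# an inconsistent model -- a common pitfall in evaluation pipelines.
```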
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.