DataOps Engineer, Info Apps
Apple
Data Science
Cupertino, CA, USA
USD 126,797-220,900 / year + Equity
Posted on Mar 17, 2026
Apple Info Apps' Data Engineering team is seeking a DataOps Engineer to support the reliability and operational excellence of our large-scale data platform. Our team provides data services for 20+ iOS and macOS applications, including News, Stocks, Weather, and Books. You'll work on the data platform that powers products used by millions of Apple customers every day and support data engineers, application engineers, data analysts, and ML engineers across the organization.
In this role, you will support the operation of critical data pipelines and data services, ensuring their reliability, performance, and scalability. You'll monitor system health, develop automation tooling, and learn best practices for operational excellence while working closely with partner teams.
- Support the operation of large-scale data pipelines and backend data services, contributing to their reliability, performance, and efficiency.
- Monitor and troubleshoot data processing systems (Spark, Flink, Trino, Kafka) and data storage systems (Parquet, Iceberg, dataset metastore) to identify and resolve performance issues.
- Build and maintain observability infrastructure components (Prometheus, Grafana, PagerDuty) for data platforms and services, working with development teams.
- Configure, manage, and troubleshoot AWS infrastructure (EKS, S3, RDS, Athena, VPCs, and IAM) using infrastructure-as-code tools (Terraform, CloudFormation) and AWS CLI for deployment automation.
- Support data management, data governance, and access control across data pipelines and services.
- Develop automation scripts and tools using Python to reduce manual operational work and improve efficiency across data platform operations.
- Develop web-based operational tools and dashboards using frameworks like Django and React for monitoring and automation.
- Participate in on-call rotations, responding to incidents and supporting incident response procedures.
- BS in Computer Science, a related field, or equivalent experience
- Strong foundation in Python programming for automation and scripting
- Solid understanding and hands-on experience with container basics: Docker and Kubernetes fundamentals
- Solid understanding of AWS infrastructure concepts and internals: compute services (EC2, EKS), storage (S3), databases (RDS), data services (Glue, Athena), networking (VPCs, subnets), and identity management (IAM)
- Familiarity with monitoring and logging tools (Splunk, Prometheus, Grafana)
- Familiarity with distributed data platforms and modern data storage formats (Parquet, Iceberg)
- Web development experience with React and Django frameworks
- Strong troubleshooting and communication skills
- Hands-on experience configuring, managing, and troubleshooting AWS infrastructure using infrastructure-as-code tools (Terraform, CloudFormation) and the AWS CLI for deployment automation and disaster recovery
- Experience with job orchestration tools (Airflow, Argo Workflows)
- Hands-on observability stack experience (PagerDuty, Splunk, Prometheus, Grafana, OpenTelemetry)
- Familiarity with SQL and big data ecosystems (Kafka, data warehouse concepts)
- Experience with Apache Spark job tuning and Flink stream processing
- Hands-on experience with Apache Iceberg or similar open-table formats
- Experience with incident management and postmortem processes
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.