LLM Machine Learning Research Engineer

Apple

Software Engineering, Data Science
Seattle, WA, USA
USD 139,500-258,100 / year + Equity
Posted on Dec 12, 2025
Apple is seeking a Research Engineer to join our Foundation Model Preparation and Algorithm Team. We are looking for all levels of talent to bring innovative AI research into Apple products.
We are looking for strong ML applied scientists and engineers to build groundbreaking AI infrastructure and algorithms. This infrastructure will power the optimization of Apple Foundation models, including on-device and server Apple Intelligence models. Our team is directly responsible for general model capability, use-case-oriented post-training, and feature delivery for Apple Intelligence. You should be a strong scientist and/or engineer with a background in building state-of-the-art LLMs. Your work will have a direct impact on billions of Apple customers. You will collaborate with world-class talent in LLM training, on-device and server optimization, ML tools/platforms, datasets, and evaluation. You will develop reliable and scalable pipelines and algorithms, such as:
  • Model optimization pipelines
  • State-of-the-art optimization algorithms
  • State-of-the-art post-training techniques
  • Experience developing, optimizing, or training large language models (LLMs), large foundation models, or generative AI models.
  • Software engineering skills in Python and general-purpose system administration and infrastructure management abilities.
  • History of applied research in the neural network model life cycle, training, or a related application area.
  • Experience with languages like Python, C/C++.
  • Track record of driving scientific investigations and experiments, and overcoming obstacles and uncertainty in a research environment.
  • BS degree and 3+ years of relevant experience.
  • Publication record at top AI/ML venues.
  • Experience with LLM LoRA fine-tuning, neural network optimization (e.g., quantization, palettization).
  • Experience with LLM pre-training or post-training.
  • Experience with on-device/server scale deployment.
  • Infrastructure management and debugging experience.
  • Experimental rigor when training/evaluating LLMs for the purpose of benchmarking LLM optimization algorithms.
  • Strong communication, accountability, and collaboration skills, along with a strong work ethic.
  • Ph.D. in a related field.
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $139,500 and $258,100, and your base pay will depend on your skills, qualifications, experience, and location.

Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.

Apple accepts applications to this posting on an ongoing basis.