Applied Research Engineer - Multimodal LLMs
Apple
This job is no longer accepting applications
Sunnyvale, CA, USA
USD 143,100-264,200 / year + Equity
Posted on Feb 7, 2025
Summary
Role Number: 200590088
Are you excited by the amazing potential of foundation models, LLMs, and multimodal LLMs? We are looking for individuals who thrive on collaboration and have a desire to push the boundaries of what is possible today! The Video Computer Vision org is a centralized applied research and engineering organization responsible for developing real-time, on-device Computer Vision and Machine Perception technologies across Apple products. We balance research and product to deliver Apple-quality, state-of-the-art experiences, innovating through the full stack and partnering with HW, SW, and ML teams to influence the sensor and silicon roadmap that brings our vision to life.
Description
We are seeking a highly motivated and skilled senior Applied Research Engineer to join our team. The ideal candidate will have a strong background in developing and exploring foundation models and multimodal large language models that integrate various types of data, such as text, image, video, and audio. You will work on cutting-edge research projects to advance our AI and computer vision capabilities, contributing to both foundational research and practical applications. Responsibilities include, but are not limited to:
- Conduct research and development on multimodal large language models, focusing on exploring and utilizing diverse data modalities
- Design, implement, and evaluate algorithms and models to enhance the performance and capabilities of our AI systems
- Collaborate with cross-functional teams, including researchers, data scientists, and software engineers, to translate research into practical applications
- Stay up-to-date with the latest advancements in AI, machine learning, and computer vision, and apply this knowledge to drive innovation within the company
Minimum Qualifications
- Experience in developing and training/tuning foundation models and multimodal LLMs
- Programming skills in Python
- Bachelor's degree and a minimum of 3 years of relevant industry experience
Preferred Qualifications
- PhD in Computer Science, Electrical Engineering, or a related field with a focus on AI, machine learning, or computer vision.
- Expertise in one or more of: computer vision, NLP, multimodal fusion, Generative AI.
- Experience with at least one deep learning framework, such as PyTorch, JAX, or similar.
- Publication record in relevant venues.
- Experience leading ML initiatives and a proven record of shipping products.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.