Software Engineer - AI/ML, AWS Neuron Apps
Amazon
Description
Shape the Future of AI Accelerators at AWS Neuron
Join the elite team behind AWS Neuron, the software stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Software Engineer on our Machine Learning Applications team, you'll be at the forefront of deploying and optimizing some of the world's most sophisticated AI models at unprecedented scale.
What You'll Impact:
• Pioneer distributed inference solutions for industry-leading LLMs such as GPT, Llama, and Qwen
• Optimize breakthrough language and vision generative AI models
• Collaborate directly with silicon architects and compiler teams to push the boundaries of AI acceleration
• Drive performance benchmarking and tuning that directly impacts millions of inference calls globally
Key job responsibilities
You will drive the evolution of distributed AI at AWS Neuron.
You'll build the bridge between ML frameworks such as PyTorch and JAX and AI hardware. This isn't just about optimization; it's about revolutionizing how AI models run at scale.
Technical Impact You'll Drive:
• Spearhead distributed inference architecture for PyTorch and JAX using XLA
• Engineer breakthrough performance optimizations for AWS Trainium and Inferentia
• Develop ML tools to enhance LLM accuracy and efficiency
• Transform complex tensor operations into highly optimized hardware implementations
• Pioneer benchmarking methodologies that shape next-gen AI accelerator design
What Makes This Role Unique:
• Direct influence on AWS's AI infrastructure used by thousands of ML applications
• Full-stack optimization from high-level frameworks to hardware-specific primitives
• Creation of tools and frameworks that define industry standards for ML deployment
• Collaboration with both open-source ML communities and hardware architecture teams
Your Technical Arsenal Should Include:
• Deep expertise in Python and ML framework internals
• Strong understanding of distributed systems and ML optimization
• Passion for performance tuning and system architecture
A day in the life
Work/Life Balance
Our team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
About the team
At AWS Neuron, we're revolutionizing how the world's most sophisticated AI models run at scale through Amazon's next-generation AI accelerators. Operating at the unique intersection of ML frameworks and custom silicon, our team drives innovation from silicon architecture to production software deployment.
We pioneer distributed inference solutions for PyTorch and JAX using XLA, optimize industry-leading LLMs like GPT and Llama, and collaborate directly with silicon architects to influence the future of AI hardware. Our systems handle millions of inference calls daily, while our optimizations directly impact thousands of AWS customers running critical AI workloads.
We're focused on pushing the boundaries of large language model optimization, distributed inference architecture, and hardware-specific performance tuning. Our deep technical experts transform complex ML challenges into elegant, scalable solutions that define how AI workloads run in production.