Senior Hardware Modeling Simulation SDE, AWS Machine Learning Accelerators
Amazon
DESCRIPTION
In-house designed SoCs (systems on a chip) are the brains and brawn behind AWS’s Machine Learning Acceleration servers, TRN and INF. Our team builds functional models of these ML accelerator chips to speed up SoC verification and system software development. We’re looking for a Hardware Modeling Simulation SDE (Software Development Engineer) to join the team and deliver new C++ models, infrastructure, and tooling for our customers.
As part of the ML acceleration modeling team, you will:
- Develop and own SoC functional models end-to-end, including model architecture, integration with other model or infrastructure components, testing, and debug
- Work closely with architecture, RTL design, design verification, emulation, and software teams to build, debug, and deploy your models
- Innovate on the tooling you provide to customers, making it easier for them to use our SoC models
- Drive model and modeling infrastructure performance improvements to help our models scale
- Develop software that can be maintained, improved, documented, tested, and reused
Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be modeled and tested to a high standard. Our SoC model is a critical piece of software, used both in our SoC development process and by our partner software teams. You’ll collaborate with many internal customers whose own effectiveness depends on your models, and you'll work closely with these teams to push the boundaries of how we use modeling to build successful products.
You will thrive in this role if you:
- Are an expert in functional modeling for SoCs, ASICs, TPUs, GPUs, or CPUs
- Are comfortable modeling in C++ with OOP principles
- Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization
- Want to jump into an ML-aligned role, or get deeper into the details of ML at the hardware/system-level
Although we are building ML SoC models, no machine learning background is needed for this role; any ML knowledge that’s required can be learned on the job.
This role can be based in either Cupertino, CA or Austin, TX. The broader team is split between the two sites, with a slight preference for Cupertino due to colocation with more customer teams.
We're changing an industry. We're searching for individuals who are ready for this challenge, who want to reach beyond what is possible today. Come join us and build the future of machine learning!