
AnitaB.org Talent Network

Connecting women in tech with the best professional opportunities!

Senior Applied Deep Learning Research Scientist, Efficiency

NVIDIA

Software Engineering, Data Science
Santa Clara, CA, USA · Seattle, WA, USA
USD 192,000-356,500 / year + Equity
Posted on Feb 5, 2026

We are now looking for an Applied Deep Learning Research Scientist, Efficiency!

Join our ADLR – Efficiency team to make deep learning faster and less energy-hungry! Our team influences next-generation hardware to make AI more efficient; we work on the Nemotron series of models to make our state-of-the-art deep learning models the most efficient OSS models out there; and we develop new technology, software, and algorithms to optimize neural networks for training and deployment. Topics include quantization, sparsity, optimizers, reinforcement learning, efficient architectures, and pre-training. Our team sits inside the Nemotron pre-training team and collaborates across the company to make NVIDIA GPUs the most efficient AI platform possible; our work quite literally reaches the entire deep learning world.

We are looking for applied researchers who want to develop new technologies for efficiency, and who want to understand the 'why' in efficiency: getting to the root cause of why things do or do not work, and using that knowledge to develop new algorithms, numeric formats, and architecture improvements.

What you'll be doing:

  • Research low-bit number representations and pruning, and their effects on neural network inference and training accuracy; this covers the needs of existing state-of-the-art neural networks as well as co-design of future neural network architectures and optimizers (see the sketch after this list).

  • Innovate with new algorithms to make deep learning more efficient while retaining accuracy, and open-source or publish these algorithms for the world to use.

  • Run large-scale deep learning experiments to prove out ideas and analyze the effects of efficiency improvements.

  • Collaborate across the company with teams making the hardware, software and deep learning architectures.
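To make the first bullet concrete, here is a minimal, illustrative sketch of the two techniques it names: symmetric per-tensor int8 fake quantization and unstructured magnitude pruning. It assumes PyTorch, and the function names (`fake_quantize_int8`, `magnitude_prune`) are our own for illustration, not NVIDIA tooling.

```python
import torch

def fake_quantize_int8(x: torch.Tensor) -> torch.Tensor:
    """Symmetric per-tensor int8 'fake quantization': round to an int8
    grid, then dequantize, so the accuracy effect is measurable in fp32."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127)
    return q * scale

def magnitude_prune(x: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of entries (unstructured pruning)."""
    k = int(x.numel() * sparsity)
    if k == 0:
        return x.clone()
    threshold = x.abs().flatten().kthvalue(k).values
    return torch.where(x.abs() > threshold, x, torch.zeros_like(x))

w = torch.randn(64, 64)
err = (w - fake_quantize_int8(w)).abs().mean()
print(f"mean abs quantization error: {err:.4f}")
print(f"sparsity after pruning: {(magnitude_prune(w) == 0).float().mean():.2f}")
```

Research at the level this role describes asks why such schemes lose (or keep) accuracy, not merely whether they do.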

What we need to see:

  • PhD in AI, computer science, computer engineering, math, or a related field; equivalent experience in some of the areas listed below can substitute for an advanced degree.

  • 5+ years of relevant industrial research experience.

  • Familiarity with state-of-the-art neural network architectures, optimizers, and LLM training.

  • Experience with modern DL training frameworks and/or inference engines.

  • Fluency in Python and solid coding/software-engineering practices.

  • A proven track record in publications and/or the ability to run large-scale experiments.

  • A strong interest in neural network efficiency.

Ways to stand out from the crowd:

  • Experience in quantization, pruning, numerics and efficient architectures.

  • A background in computer architecture.

  • Experience with GPU computing, kernels, CUDA programming, and/or performance analysis (see the timing sketch below).
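For the performance-analysis item, a minimal sketch of the kind of measurement involved, assuming PyTorch and a CUDA-capable device; CUDA events are used so kernel time is measured on the GPU rather than on the host. The function name `time_matmul_ms` is ours, for illustration only.

```python
import torch

def time_matmul_ms(n: int = 4096, iters: int = 10) -> float:
    """Average GPU time (ms) for an n x n matmul, measured with CUDA events."""
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    a @ b  # warm-up: trigger kernel selection once before timing
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()  # drain pending work before timing
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()  # wait for the timed kernels to finish
    return start.elapsed_time(end) / iters

if torch.cuda.is_available():
    print(f"{time_matmul_ms():.2f} ms per 4096x4096 matmul")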

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 192,000 USD - 304,750 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 8, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.