Senior Software Engineer - AI Inference

NVIDIA

Software Engineering, Data Science

United States · Canada · Texas, USA · New York, USA · Santa Clara, CA, USA · Remote

Posted on Apr 15, 2026

NVIDIA is the platform upon which every new AI‑powered application is built. We are seeking a Senior Software Engineer – AI Inference to advance open‑source LLM serving by contributing directly to upstream inference engines such as vLLM and SGLang, ensuring they run best‑in‑class on NVIDIA GPUs and systems, and by improving the underlying stack that enables high‑throughput, low‑latency inference at scale.

This is a hands-on role for an engineer who enjoys digging into performance bottlenecks, designing pragmatic runtime improvements, and shipping high‑quality changes that are broadly useful to the community and production deployments.

What you'll be doing:

  • Contribute features, fixes, and optimizations upstream to vLLM/SGLang: author PRs, participate in reviews, write benchmarks/tests, and help drive designs to completion.

  • Implement and optimize inference‑runtime capabilities: batching and scheduling policies, streaming, request lifecycle management, and KV‑cache efficiency (paging/sharding) to improve throughput and tail latency.

  • Profile and improve hot paths across layers, from Python orchestration to C++/CUDA kernels, using data to guide optimization work.

  • Improve multi‑GPU inference performance and reliability: parallelism strategies, communication patterns, and resource utilization across NVIDIA platforms.

  • Build and maintain performance and correctness regression tests to prevent slowdowns and ensure stable behavior across model and hardware configurations.

  • Collaborate with model, platform, and SRE teams to translate production requirements into upstreamable solutions with strong operability and maintainability.

What we need to see:

  • 5+ years building production software with solid systems engineering fundamentals and a track record of delivering performance or reliability improvements.

  • Experience with LLM inference/serving stacks (e.g., vLLM, SGLang) and an understanding of the tradeoffs that drive real‑world production performance.

  • Strong programming skills in Python plus C++ and/or CUDA; ability to debug and optimize performance‑critical code.

  • Experience with profiling and performance investigation (microbenchmarks, flame graphs, GPU profiling) and a measurement‑driven mindset.

  • Familiarity with distributed systems concepts and concurrency (queues/schedulers, multi‑process/multi‑threading, scaling across GPUs/nodes).

  • Strong communication skills and comfort working with open‑source communities (issues, PR discussions, code review).

  • BS/MS in Computer Science, Computer Engineering, or related field (or equivalent experience).

Ways to stand out from the crowd:

  • Open‑source contributions to vLLM, SGLang, PyTorch, Triton, NCCL, Dynamo or adjacent serving/runtime projects.

  • Shipped performance work such as improved attention/KV cache efficiency, speculative decoding, scheduler improvements, quantization-aware serving, or streaming latency reductions.

  • Experience building reproducible benchmarking and performance regression infrastructure for latency/throughput.

  • Systems performance background spanning memory bandwidth, kernel fusion, PCIe/NVLink effects, and network fabrics (e.g., InfiniBand).

We are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward‑thinking and creative people in the world working for us. If you're creative and autonomous with a real passion for technology, we want to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD – 241,500 USD for Level 3, and 184,000 USD – 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until April 18, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.