LLM Kernel & Inference Systems Engineer

Lead end-to-end optimization of LLM inference on AMD GPUs across single-node and multi-node clusters
Shanghai, Shanghai, China
Senior
Advanced Micro Devices

Designs high-performance CPUs, GPUs, and adaptive computing solutions for PCs, data centers, gaming, and embedded applications.

Senior Member of Technical Staff

What you do at AMD changes everything. At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

The role: As a Senior Member of Technical Staff, you will be a technical leader in Large Language Model (LLM) inference and kernel optimization for AMD GPUs. You will play a critical role in advancing high-performance LLM serving by optimizing GPU kernels, inference runtimes, and distributed execution strategies across single-node and multi-node systems. This role is deeply focused on LLM inference stacks, including vLLM, SGLang, and internal inference platforms. You will work at the intersection of model architecture, GPU kernels, compiler technology, and distributed systems, collaborating closely with internal GPU library teams and upstream open-source communities to deliver production-grade performance improvements. Your work will directly impact throughput, latency, scalability, and cost efficiency for state-of-the-art LLMs running on AMD GPUs.

The person: You are a senior systems engineer with deep LLM domain knowledge who enjoys working close to the metal while keeping a strong understanding of end-to-end inference systems. You are comfortable reasoning about attention, KV cache, batching, parallelism strategies, and how they map to GPU kernels and hardware characteristics. You thrive in ambiguous problem spaces, can independently define technical direction, and consistently deliver measurable performance gains. You balance strong execution with thoughtful upstream collaboration and maintain a high bar for software quality.
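To make "reasoning about KV cache" concrete, a back-of-the-envelope sizing calculation is the kind of estimate this role involves daily. A minimal sketch in Python, using illustrative model dimensions (a Llama-2-7B-like configuration at fp16) rather than any specific model served in production:

```python
# Back-of-the-envelope KV cache sizing for a decoder-only transformer.
# All model parameters below are illustrative assumptions.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, dtype_bytes=2):
    """Bytes needed to hold K and V for every layer of every sequence."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes  # K and V
    return per_token * seq_len * batch_size

# Example: 32 layers, 32 KV heads, head_dim 128, fp16,
# 4096-token context, batch of 8.
total = kv_cache_bytes(num_layers=32, num_kv_heads=32, head_dim=128,
                       seq_len=4096, batch_size=8, dtype_bytes=2)
print(f"{total / 2**30:.1f} GiB")  # → 16.0 GiB
```

Estimates like this explain why KV cache, not weights, often dominates GPU memory at large batch sizes, and why paged or quantized caches matter for serving throughput.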

Key Responsibilities

  • Optimize LLM Inference Frameworks
  • LLM-Aware Kernel Development
  • Distributed LLM Inference at Scale
  • Model–System Co-Design
  • Compiler & Runtime Optimization
  • End-to-End Inference Pipeline Optimization
  • Open-Source Leadership
  • Engineering Excellence

Preferred experience:

  • Deep understanding of Large Language Model inference, including attention mechanisms, KV cache behavior, batching strategies, and latency/throughput trade-offs
  • Hands-on experience with vLLM, SGLang, or similar inference systems (e.g., FasterTransformer), with demonstrated performance tuning
  • Proven experience optimizing GPU kernels for deep learning workloads, particularly inference-critical paths
  • Experience designing and tuning large-scale inference systems across multiple GPUs and nodes
  • Track record of meaningful upstream contributions to ML, LLM, or systems-level open-source projects
  • Strong proficiency in Python and C++, with deep experience in performance analysis, profiling, and debugging complex systems
  • Solid foundation in compiler concepts and tooling (LLVM, ROCm, Triton), applied to ML kernel and runtime optimization
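The latency/throughput trade-off mentioned above can be sketched with a toy cost model of batched decoding. The numbers here are purely illustrative assumptions (a fixed weight-load cost per decode step plus a small per-sequence cost), not measurements from any real hardware:

```python
# Toy model of the batching trade-off in memory-bandwidth-bound LLM decoding.
# Assumption: each decode step pays a fixed weight-load time plus a small
# per-sequence compute cost -- illustrative numbers only.

def decode_step_ms(batch_size, weight_load_ms=20.0, per_seq_ms=0.5):
    """Wall-clock time for one decode step over the whole batch."""
    return weight_load_ms + per_seq_ms * batch_size

for b in (1, 8, 64):
    step = decode_step_ms(b)
    throughput = b / step * 1000  # tokens/s aggregated across the batch
    print(f"batch={b:3d}  step={step:5.1f} ms  throughput={throughput:7.1f} tok/s")
```

Under these assumed costs, growing the batch multiplies aggregate throughput while per-token latency rises only modestly, which is the intuition behind continuous batching in systems like vLLM and SGLang.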

Academic credentials: Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
