What You Do At AMD Changes Everything
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
We are looking for a dynamic, energetic candidate to join our growing team in the AI Group. In this role, you will be responsible for architecting and defining AI workload models and dataflow, defining block-level and system-level performance of the Neural Processing Unit (NPU), NPU network performance modeling, and performance bottleneck analysis on pre- and post-silicon platforms. As a member of our dynamic team, you will have the opportunity to shape the future of AI model development.
We are looking for a candidate with strong engineering skills to tackle complex challenges in AI model development, experience optimizing and accelerating CNN and generative AI models, strong experience developing ML compilers for efficient network mapping on NPUs, and excellent cross-team collaboration skills. In this role, you will:
- Work with cross-functional teams to optimize various parts of the SW stack: AI compiler, AI frameworks, device drivers, and firmware.
- Bring up emerging ML models based on CNNs and transformers and characterize their performance.
- Apply quantization, sparsity, and architecture search methods to improve the performance, efficiency, and accuracy of generative AI models.
- Collaborate closely with software engineers, data scientists, and researchers to integrate AI models into software applications and platforms.
- Research, design, and implement novel methods for efficient CNN and generative AI models.
- Design model optimization methods, including quantization, sparsity, and neural architecture search (NAS).
- Collaborate with other team members and teams, including working with the compiler team to develop optimization strategies for the compiler.
Preferred experience:
- Experience with deep learning frameworks, e.g., PyTorch, ONNX, or TensorFlow.
- Experience with model compression, quantization, and end-to-end inference optimization.
- Strong coding skills in C/C++ and Python (required).
- Experience with any of the following is a plus: LLMs, Stable Diffusion, NeRF, or text-to-video generation.
- Solid knowledge of AI and ML concepts and techniques, with practical experience applying them to solve real-world problems in research or industry.
- Understanding of the performance implications for AI acceleration of different compute, memory, and communication configurations, and of hardware and software implementation choices.
- Experience developing and optimizing code for VLIW processors.
- Experience analyzing code for high-performance CONV, GEMM, and non-linear operators.
- Deep understanding of AI frameworks, preferably ONNX.
Academic credentials: Minimum of a BS degree; MS or above preferred.
Location: San Jose, CA (Hybrid)