AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, cost-effective training and inference at scale. The Neuron Serving team builds the infrastructure to serve modern machine learning models—including large language models (LLMs) and multimodal workloads—reliably and efficiently on AWS silicon. We are seeking a software development engineer to lead and architect our next-generation model serving infrastructure, with a particular focus on large-scale generative AI applications.
Key job responsibilities:
A day in the life:
About the team:
The Neuron Serving team is at the forefront of scalable, resilient AI infrastructure at AWS. We develop model-agnostic inference innovations, including disaggregated serving, distributed KV cache management, CPU offloading, and container-native solutions. We also upstream Neuron SDK contributions to the open-source community, improving performance and scalability for AI workloads, and we're committed to pushing the boundaries of what's possible in large-scale ML serving.