Join the elite team behind AWS Neuron, the software stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Senior Software Engineer on our Machine Learning Applications team, you'll be at the forefront of deploying and optimizing some of the world's most sophisticated AI models at unprecedented scale.
Key Job Responsibilities
You will drive the evolution of distributed AI at AWS Neuron. As a technical leader at the forefront of AWS's AI accelerators, you'll architect the bridge between ML frameworks such as PyTorch and JAX and the underlying AI hardware. This isn't just about optimization; it's about redefining how AI models run at scale.
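To make that bridge concrete, here is a minimal JAX sketch (illustrative only, not AWS Neuron code) of the lowering path this role works on: a framework-level function is traced, lowered to XLA's intermediate representation, and handed to a hardware backend compiler. The `attention_scores` kernel and shapes are hypothetical toy stand-ins.

```python
# Illustrative sketch of the framework-to-hardware bridge: JAX traces a
# Python function, lowers it to XLA IR, and a backend (e.g. a
# Neuron-style PJRT plugin, assumed here) compiles that IR for its device.
import jax
import jax.numpy as jnp

def attention_scores(q, k):
    # Toy stand-in for the kind of model kernel that gets optimized.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

q = jnp.ones((8, 64))
k = jnp.ones((8, 64))

# Lower to XLA's intermediate representation; a hardware plugin would
# compile this text into device code.
lowered = jax.jit(attention_scores).lower(q, k)
print(lowered.as_text()[:400])  # inspect the IR the backend receives
```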
A Day In The Life
Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to lifelong happiness and fulfillment, so we offer flexibility in working hours and encourage you to find your own balance.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
About The Team
At AWS Neuron, we're revolutionizing how the world's most sophisticated AI models run at scale through Amazon's next-generation AI accelerators. Operating at the unique intersection of ML frameworks and custom silicon, our team drives innovation from silicon architecture to production software deployment.
We pioneer distributed inference solutions for PyTorch and JAX using XLA, optimize industry-leading LLMs like GPT and Llama, and collaborate directly with silicon architects to influence the future of AI hardware. Our systems handle millions of inference calls daily, while our optimizations directly impact thousands of AWS customers running critical AI workloads.
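As a flavor of the distributed-inference work described above, here is a hedged JAX sketch of tensor-parallel sharding, the technique commonly used to split large LLM layers across accelerators. It runs on whatever devices are available locally (e.g. one CPU); in production the mesh would span accelerator devices. The layer, shapes, and axis name are hypothetical.

```python
# Minimal sketch of tensor parallelism with JAX sharding: split a weight
# matrix across a device mesh so one matmul runs in parallel, with XLA
# inserting any needed collectives.
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = jax.devices()              # e.g. 1 CPU locally, N accelerators in prod
mesh = Mesh(devices, axis_names=("model",))

# Shard the weight's output dimension across the "model" axis.
w = jnp.ones((512, 1024))
w = jax.device_put(w, NamedSharding(mesh, P(None, "model")))

@jax.jit
def layer(x, w):
    return x @ w                     # computed across the mesh

x = jnp.ones((4, 512))
print(layer(x, w).shape)             # (4, 1024)
```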
We're focused on pushing the boundaries of large language model optimization, distributed inference architecture, and hardware-specific performance tuning. Our deep technical experts transform complex ML challenges into elegant, scalable solutions that define how AI workloads run in production.