
ML Compiler Engineer, AWS Neuron, Annapurna Labs

Develop compiler passes to automate deep learning model performance optimizations
Toronto
Senior
Amazon

A global e-commerce giant offering a vast array of products, cloud services, and digital streaming content.

Neuron Compiler Team Opportunity

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. These accelerators offer unparalleled ML inference and training performance, enabled by a state-of-the-art software stack: the AWS Neuron SDK. The SDK comprises an ML compiler, runtime, and application framework that integrate seamlessly into popular ML frameworks such as PyTorch. AWS Neuron, running on Inferentia and Trainium, is trusted and used by leading customers such as Snap, Autodesk, and Amazon Alexa.

Within this ecosystem, the Neuron Compiler team is developing a deep learning compiler stack that takes state-of-the-art LLM, vision, and multi-modal models created in frameworks such as TensorFlow, PyTorch, and JAX, and makes them run performantly on our accelerators. The team comprises some of the brightest minds in the engineering, research, and product communities, focused on the ambitious goal of creating a toolchain that delivers a quantum leap in performance.

The Neuron team is hiring systems and compiler engineers to solve our customers' toughest problems. Specifically, the performance team in Toronto focuses on analyzing and optimizing the system-level performance of machine learning models on AWS ML accelerators. The team conducts in-depth profiling and works across multiple layers of the technology stack, from frameworks and compilers to runtime and collectives, to meet and exceed customer requirements while maintaining a competitive edge in the market. As part of the Neuron Compiler organization, the team not only identifies and implements performance optimizations but also crystallizes these improvements into the compiler, automating optimizations for broader customer benefit.

This is an opportunity to work on products at the intersection of machine learning, high-performance computing, and distributed architectures. You will architect and implement business-critical features, publish research, and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint; we're inventing and experimenting, and it is a unique learning culture. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.

Key job responsibilities:

  • Analyze and optimize system-level performance of machine learning models across the entire technology stack, from frameworks to runtime
  • Conduct detailed performance analysis and profiling of ML workloads, identifying and resolving bottlenecks in large-scale ML systems
  • Work directly with customers to enable and optimize their ML models on AWS accelerators, understanding their specific requirements and use cases
  • Design and implement compiler optimizations, transforming manual performance improvements into automated compiler passes
  • Collaborate across teams to develop innovative optimization techniques that enhance AWS Neuron SDK's performance capabilities
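To give a concrete flavor of what "transforming manual performance improvements into automated compiler passes" means in practice: a compiler pass codifies a hand-applied rewrite so it is applied automatically to every model. The sketch below is purely illustrative and not AWS Neuron code; it shows a toy peephole pass over a minimal made-up IR that fuses a multiply followed by an add into a single fused multiply-add, the kind of pattern accelerators often execute as one instruction.

```python
# Illustrative only: a toy peephole pass over a made-up IR,
# not part of the AWS Neuron SDK.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Instr:
    op: str                      # e.g. "mul", "add", "fma"
    dst: str                     # value produced
    srcs: List[str] = field(default_factory=list)  # values consumed


def fuse_mul_add(program: List[Instr]) -> List[Instr]:
    """Rewrite  t = mul(a, b); d = add(t, c)  into  d = fma(a, b, c)
    when t is used nowhere else. This mirrors how a manual optimization
    ("use the hardware's fused multiply-add here") becomes an automatic
    pattern-match-and-rewrite in the compiler."""
    # Count uses of each value so we only fuse single-use temporaries.
    uses: dict = {}
    for ins in program:
        for s in ins.srcs:
            uses[s] = uses.get(s, 0) + 1

    out: List[Instr] = []
    i = 0
    while i < len(program):
        cur = program[i]
        nxt = program[i + 1] if i + 1 < len(program) else None
        if (cur.op == "mul" and nxt is not None and nxt.op == "add"
                and cur.dst in nxt.srcs and uses.get(cur.dst, 0) == 1):
            # Fuse: carry over the add's other operand(s).
            other = [s for s in nxt.srcs if s != cur.dst]
            out.append(Instr("fma", nxt.dst, cur.srcs + other))
            i += 2  # both instructions consumed
        else:
            out.append(cur)
            i += 1
    return out


prog = [Instr("mul", "t0", ["a", "b"]),
        Instr("add", "y", ["t0", "c"])]
fused = fuse_mul_add(prog)
# fused is the single instruction  y = fma(a, b, c)
```

A production ML compiler applies hundreds of such passes, each guarded by legality checks (here, the single-use test) so the rewrite never changes program semantics.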

About the team:

Diverse Experiences

AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage you to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.

Why AWS

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture

Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance

Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
