
Engineering Manager, Embedded Systems Optimization, AWS Neuron, Annapurna Labs

Develop high-performance kernels to accelerate deep learning workloads on AWS ML accelerators
Cupertino, California, United States
Senior

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team is at the forefront of maximizing performance for AWS's custom ML accelerators. Working at the hardware-software boundary, our engineers craft high-performance kernels for ML functions, ensuring every FLOP counts in delivering optimal performance for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.

The AWS Neuron SDK, developed by the Annapurna Labs team at AWS, is the backbone for accelerating deep learning and GenAI workloads on Amazon's Inferentia and Trainium ML accelerators. This comprehensive toolkit includes an ML compiler, runtime, and application framework that seamlessly integrates with popular ML frameworks like PyTorch, enabling unparalleled ML inference and training performance. As part of the broader Neuron Compiler organization, our team works across multiple technology layers - from frameworks and compilers to runtime and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance.
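
As a rough illustration of how the SDK's PyTorch integration is typically used, the sketch below assumes the torch_neuronx tracing API; the model, shapes, and file name are placeholders rather than anything specific to this role.

    # Illustrative sketch: compiling a small PyTorch model for a Neuron device
    # via torch_neuronx tracing. The model and output path are placeholders.
    import torch
    import torch_neuronx

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).eval()

    example_input = torch.rand(8, 1024)

    # Trace the model; the Neuron compiler lowers it to accelerator instructions.
    neuron_model = torch_neuronx.trace(model, example_input)

    # The traced artifact can be saved and reloaded later with torch.jit.load.
    torch.jit.save(neuron_model, "model_neuron.pt")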

This role offers a unique opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures, helping shape the future of AI acceleration technology. You will architect and implement business-critical features, publish cutting-edge research, and mentor a team of experienced engineers. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.

Key job responsibilities include:

  • Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
  • Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
  • Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
  • Implement compiler optimizations such as fusion, sharding, tiling, and scheduling (see the illustrative tiling sketch after this list)
  • Work directly with customers to enable and optimize their ML models on AWS accelerators
  • Collaborate across teams to develop innovative kernel optimization techniques
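
To make the tiling item above concrete, here is a purely conceptual sketch in plain Python/NumPy; it is not Neuron kernel code, but it shows the blocked-computation pattern that kernel-level tiling applies so that working sets fit in fast on-chip memory.

    # Conceptual sketch only (plain NumPy, not Neuron kernel code): a blocked
    # matrix multiply illustrating the tiling optimization mentioned above.
    import numpy as np

    def tiled_matmul(a, b, tile=128):
        m, k = a.shape
        k2, n = b.shape
        assert k == k2, "inner dimensions must match"
        c = np.zeros((m, n), dtype=a.dtype)
        for i in range(0, m, tile):
            for j in range(0, n, tile):
                for p in range(0, k, tile):
                    # Each block-sized partial product models one unit of work
                    # an accelerator kernel would schedule against on-chip memory.
                    c[i:i+tile, j:j+tile] += (
                        a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                    )
        return c

    a = np.random.rand(512, 512)
    b = np.random.rand(512, 512)
    assert np.allclose(tiled_matmul(a, b), a @ b)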

About the team:

  • Why AWS: Amazon Web Services is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating.
  • Inclusive Team Culture: Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion.
  • Work/Life Balance: Our team puts a high value on work-life balance.
  • Mentorship & Career Growth: Our team is dedicated to supporting new members.
  • Diverse Experiences: AWS values diverse experiences.

Basic Qualifications:

  • 3+ years of experience managing engineering teams
  • 7+ years of experience working directly with engineering teams
  • 3+ years of experience designing or architecting new and existing systems (design patterns, reliability, and scaling)
  • 8+ years of experience leading the definition and development of multi-tier web services
  • Knowledge of engineering practices and patterns for the full software/hardware/network development life cycle, including coding standards, code reviews, source control management, build processes, testing, certification, and live-site operations
  • Experience partnering with product or program management teams

Preferred Qualifications:

  • Experience communicating with users, other technical teams, and senior leadership to collect requirements and describe software product features, technical designs, and product strategy
  • Experience recruiting, hiring, mentoring, coaching, and managing teams of software engineers to improve their skills and effectiveness

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $166,400/year in our lowest geographic market up to $287,700/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
