Imagine being at the forefront of an evolution where innovative AI meets the elegance of Apple silicon. The On-Device Machine Learning team transforms groundbreaking research into practical applications, enabling billions of Apple devices to run powerful AI models locally, privately, and efficiently. We stand at the unique intersection of research, software engineering, hardware engineering, and product development, making Apple a top destination for machine learning innovation. This team builds the essential infrastructure that enables machine learning at scale on Apple devices: onboarding modern architectures to embedded systems, developing optimization toolkits for model compression and acceleration, building ML compilers and runtimes for efficient execution, and creating comprehensive benchmarking and debugging toolchains. This infrastructure forms the backbone of Apple's machine learning workflows across Camera, Siri, Health, Vision, and other core experiences, contributing to the overall Apple Intelligence ecosystem. If you are passionate about the technical challenges of running sophisticated ML models across the full range of hardware, from resource-constrained devices to powerful clusters, and eager to directly impact how machine learning operates across the Apple ecosystem, this role presents a great opportunity to work on the next generation of intelligent experiences on Apple platforms.

Our group is seeking an ML Infrastructure Engineer with a focus on ML user experience APIs and integration. The role is responsible for developing new ML model conversion and authoring APIs that serve as the main entry point into Apple's ML infrastructure. An engineer in this role will also drive the onboarding of the latest and most popular ML models, demonstrating end-to-end workflows that highlight both the authoring and runtime capabilities of Apple's ML ecosystem with strong, competitive performance on Apple platforms. The role also involves integrating these APIs into internal and external systems (e.g., Hugging Face) to showcase the most efficient path for bringing models into Apple's ML stack. This integration could involve a gamut of optimizations, ranging from authored program optimizations (e.g., in PyTorch) to custom transformations within Apple's model representation.
As an engineer in this role, you will be primarily focused on developing and using APIs that enable ML engineers to efficiently author and convert ML models to run effectively on Apple platforms. You will integrate Apple's ML tools into internal and external model repositories to evaluate and demonstrate how models can be efficiently ingested and implemented within Apple's ML stack. You will ideate, design, and stress-test a variety of optimizations required to support these models, ranging from source-level optimizations (e.g., in the PyTorch program) to custom transformations within Apple's model representation. As a power user of Apple's ML infrastructure, you will also help create the latest and most capable models with strong, competitive performance across hardware targets, showcasing the practical power of Apple's authoring and runtime APIs. This role offers the opportunity to shape how ML developers experience Apple's end-to-end inference stack, from model creation to deployment. The role requires a proven understanding of ML modeling (architectures, training vs. inference trade-offs, etc.), ML deployment optimizations (e.g., quantization), and strong experience designing Python APIs. We are building the first end-to-end developer experience for ML development that, by taking advantage of Apple's vertical integration, allows developers to iterate on model authoring, optimization, transformation, execution, debugging, profiling, and analysis.
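For illustration only: the posting does not name a specific toolchain, but a minimal sketch of the authoring-to-conversion workflow described above might look like the following, assuming a toy PyTorch model and Apple's public coremltools package as one possible conversion entry point.

import torch
import coremltools as ct  # assumption: public coremltools used as a stand-in for the conversion API

class SmallClassifier(torch.nn.Module):
    # Toy model standing in for an arbitrary authored PyTorch model.
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Linear(128, 64)
        self.head = torch.nn.Linear(64, 10)

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

model = SmallClassifier().eval()
example_input = torch.rand(1, 128)

# Capture the authored program, then convert it into an on-device model representation.
traced = torch.jit.trace(model, example_input)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="features", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("SmallClassifier.mlpackage")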
Develop APIs in Apple's ML stack for ML engineers to efficiently import and implement their models. Integrate Apple's ML tools into internal and external model repositories to demonstrate and stress-test model ingestion with peak efficiency and performance. Develop optimizations across the pipeline, including source-level transformations and custom operations, to improve inference efficiency. Onboard the latest ML models with peak performance, and use these examples to highlight and validate the authoring and runtime capabilities of Apple's inference stack.
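As one hedged example of the deployment-time optimizations referenced above (the team's actual transformation passes are not spelled out in the posting), post-training dynamic quantization in PyTorch compresses Linear-layer weights to int8 while preserving the model's interface.

import torch
import torch.nn as nn

# Toy float32 model standing in for a model being onboarded.
fp32_model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time.
int8_model = torch.ao.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.rand(1, 512)
assert fp32_model(x).shape == int8_model(x).shape  # same interface, smaller weights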
Bachelor's degree in Computer Science, Engineering, or a related field. Highly proficient in Python programming; familiarity with C++ is required. Proficiency in at least one ML authoring framework, such as PyTorch, MLX, or JAX. Strong understanding of ML fundamentals, including common architectures such as Transformers. Hands-on experience with ML inference optimizations, such as quantization, pruning, KV caching, etc. Strong communication skills, including the ability to engage multi-functional audiences.
Experience with C++, Swift, and/or GPU programming paradigms. Familiarity with quantization-aware training (QAT) and other compression and quantization techniques using PyTorch workflows. Experience designing Python APIs and deploying production-grade Python packages. Experience with MLIR/LLVM or similar compiler toolchains. Familiarity with Hugging Face or other model repositories.
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $147,400 and $272,100, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
Apple accepts applications to this posting on an ongoing basis.