Inference.net is seeking a Machine Learning Engineer to join our team and optimize the performance of our AI inference systems. This role involves working with state-of-the-art large language models and ensuring they run efficiently at scale: you will deploy models to production and implement optimizations that increase throughput and enable new features. This position offers the chance to collaborate closely with our engineering team and make significant contributions to open source projects such as SGLang and vLLM.
Design and implement optimization techniques to increase model throughput and reduce latency across our suite of models
Deploy and maintain large language models at scale in production environments
Deploy new models as they are released by frontier labs
Implement techniques like quantization, speculative decoding, and KV cache reuse
Contribute regularly to open source projects such as SGLang and vLLM
Dive deep into the underlying codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues
Collaborate with the engineering team to bring new features and capabilities to our inference platform
Develop robust and scalable infrastructure for AI model serving
Create and maintain technical documentation for inference systems
3+ years of experience writing high-performance, production-quality code
Strong proficiency with Python and deep learning frameworks, particularly PyTorch
Demonstrated experience with LLM inference optimization techniques
Hands-on experience with SGLang and vLLM, with contributions to these projects strongly preferred
Familiarity with Docker and Kubernetes for containerized deployments
Experience with CUDA programming and GPU optimization
Strong understanding of distributed systems and scalability challenges
Proven track record of optimizing AI models for production environments
Familiarity with TensorRT and TensorRT-LLM
Knowledge of vision models and multimodal AI systems
Experience implementing techniques like quantization and speculative decoding
Contributions to open source machine learning projects
Experience with large-scale distributed computing
We offer competitive compensation, equity in a high-growth startup, and comprehensive benefits. The base salary range for this role is $180,000 to $250,000, plus equity and benefits including:
Full healthcare coverage
Quarterly offsites
Flexible PTO
Inference.net is an equal opportunity employer. We welcome applicants from all backgrounds and do not discriminate on the basis of race, color, religion, gender, sexual orientation, national origin, genetics, disability, age, or veteran status.
If you're passionate about building the next generation of high-performance systems that push the boundaries of what's possible with large language models, we want to hear from you!