
Research Engineer, Machine Learning Systems - Remote Eligible

Own scalable training infrastructure for speech models and deploy production-grade ML tooling.
Remote
Senior
$150,000 – $250,000 USD / year
Deepgram

Provides AI-powered speech recognition and audio intelligence APIs for real-time transcription, understanding, and analysis at scale.

The Opportunity

Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.

The Role

Deepgram is seeking a highly skilled and versatile Machine Learning Engineer to join our Research team. As a Member of the Research Staff, you will partner with research scientists to prototype and validate novel modeling ideas, then scale them through robust training systems for speech technologies, internal tooling, and innovative data strategies. You'll work at the intersection of machine learning, data infrastructure, and internal tooling to support our mission of building world-class speech recognition and synthesis systems. On the Research team, you will experiment with new technologies and techniques, while also working on product-focused deliverables, learning from colleagues with a wide range of expertise in AI and machine learning as you go.

Key Responsibilities

Scalable Model Training: Architect and manage horizontally scalable systems that dramatically accelerate the end-to-end training lifecycle for Speech-to-Text (STT) and Text-to-Speech (TTS) models. This includes far more than automated training: the role focuses on making model development significantly faster and more efficient through optimized data preparation and management, high-throughput training pipelines, distributed infrastructure, and automated evaluation tooling.

Tooling & Accessibility: Design and implement internal UIs and tools that make ML systems and workflows accessible to non-technical stakeholders across the company. These UIs should provide transparency into, and flexible control over, internally built tooling.

Infrastructure & Tools: Oversee and manage training tooling, job orchestration, experiment tracking, and data storage.

The Challenge

We are seeking Members of the Research Staff who:

See "unsolved" problems as opportunities to pioneer entirely new approaches

Can identify the one critical experiment that will validate or kill an idea in days, not months

Have the vision to scale successful proofs-of-concept 100x

Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.

It's Important to Us That You Have

Strong experience with the machine learning research pipeline, particularly in STT or related speech domains. This includes experimenting with and evaluating new architectures and modeling approaches, and implementing large-scale training systems.

Proficiency with orchestration and infrastructure tools like Kubernetes, Docker, and Prefect.

Familiarity with ML lifecycle tools such as MLflow.

Experience building internal tools or dashboards for non-technical users.

Hands-on experience with data engineering practices for unstructured audio and text data.

Comfortable working in cross-functional teams that include researchers, engineers, and product stakeholders.

Why Join Deepgram?

At Deepgram, you'll help shape the future of human–machine communication. Our research culture prioritizes ownership, experimentation, and real-world impact. As a Member of the Research Staff, you'll be empowered to build tools and systems that accelerate ML research and product deployment at scale.

Benefits & Perks*

Holistic health

Medical, dental, vision benefits

Annual wellness stipend

Mental health support

Life, short-term disability (STD), and long-term disability (LTD) insurance plans

Work/life blend

Unlimited PTO

Generous paid parental leave

Flexible schedule

12 paid US company holidays

Quarterly personal productivity stipend

One-time stipend for home office upgrades

401(k) plan with company match

Tax savings programs

Continuous learning

Learning / Education stipend

Participation in talks and conferences

Employee Resource Groups

AI enablement workshops / sessions

* For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region — in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.
