Associate Machine Learning Engineer
New York, New York
Kargo creates powerful moments of connection between brands and consumers to build businesses. Every day, our 600+ employees work to radically raise the bar on what agentic AI, CTV, eCommerce, social, and mobile can do to deliver unique ad experiences across the world's most premium platforms. Taking a creative science approach to all we do, we continuously innovate solutions that outperform industry benchmarks and client expectations. Now 20+ years strong, Kargo has offices in NYC, Chicago, LA, Dallas, Sydney, Auckland, London and Waterford, Ireland.
Who We Hire
Techies who want to build the future. Creatives who want to design it better. Communicators to win business. Collaborators to build it. Data pros who turn numbers into insights. Product builders who turn ideas into innovations. Anyone eager to be on a team that doesn't stop to ask what's next, because they're already building it.
Mission
Build and maintain the ML training, inference, and serving pipelines that power Kargo's auction optimization, outcome prediction, and content recommendation systems. This role takes models from data science prototypes into production environments where they directly influence bid pricing, pacing, CTR prediction, and audience targeting at programmatic scale. Success means reliable, observable, and continuously improving ML systems that move revenue metrics.
This is a hybrid role requiring onsite presence 4 days per week.
Outcomes — What Success Looks Like in 6–12 Months
- Production pipelines shipped: Independently own the deployment of at least 2-3 ML models into production (batch or real-time), including training pipelines, inference endpoints, and monitoring — with documented latency, throughput, and accuracy SLAs met.
- Inference reliability improved: Reduce model serving incidents or latency regressions on owned systems by a measurable margin (e.g., p99 latency, error rate, or successful retraining cadence) compared to baseline at start of tenure.
- MLOps infrastructure contributions: Deliver meaningful improvements to shared ML infrastructure — CI/CD for ML workflows, feature pipelines, model registry, or monitoring tooling — that other engineers and data scientists adopt.
- Cross-functional model launches: Partner with Data Science and Product to productionize at least one revenue-impacting model (auction optimization, CTR/viewability prediction, or recommendation) with clear before/after business metrics.
- Operational ownership: Establish monitoring, alerting, and retraining workflows for production models you own, with documented runbooks and minimal escalations to senior engineers for routine issues.
Skills — Core Technical Capabilities
Required:
- 2-4 years building and deploying ML systems in production; strong Python (C++ a plus)
- Hands-on experience with ML frameworks (PyTorch, TensorFlow, or similar) across the full lifecycle: training, evaluation, deployment, monitoring
- Working knowledge of AWS (SageMaker, EC2, Lambda) or comparable cloud ML services
- Proficiency with large-scale data tooling — SQL, Spark, and Snowflake or equivalent warehouse
- Experience implementing CI/CD pipelines for ML workflows and model serving infrastructure (logging, versioning, lifecycle management)
- Solid grasp of feature engineering, data processing, and debugging production model performance issues
Preferred:
- Experience with model serving frameworks (Triton, FastAPI), containerization (Docker, Kubernetes), or IaC tooling
- Exposure to AdTech, real-time bidding, LLMs/embeddings, or other high-scale distributed systems
Competencies — Behaviors We Like to See
Production Mindset
- Thinks beyond model accuracy to latency, cost, reliability, and observability from day one
- Writes code and pipelines that other engineers can run, debug, and extend without hand-holding
Cross-Functional Translator
- Bridges data science intent and engineering reality — can push back on a model design that won't scale, and explain why in terms a PM understands
- Comfortable pairing with data scientists to harden prototypes and with platform engineers to integrate into the broader stack
Bias for Shipping
- Defaults to small, instrumented launches over large, unmonitored ones; iterates on production signal
- Unblocks themselves on ambiguous problems and asks for help with a clear hypothesis, not a blank slate
Operational Ownership
- Treats models as living systems — sets up monitoring, owns the on-call when something drifts, and follows through on retraining
- Documents what they build so the next person (or next quarter's new hire) can pick it up cleanly
U.S. Salary Range: $90,000 - $110,000 USD
Our Laurels
- AdAge Best Places to Work
- ThinkLA Partner of the Year
- Built In Best Places to Work
- Cynopsis 2025 Top Women in Media - Jeannine Shao Collins
- Martech Breakthrough Awards - Best Overall Adtech Company
- Digiday Media Awards Best Event
- Cynopsis Media Impact Awards - Best CTV Platform
- Martech Breakthrough Awards - CTV Innovation
- Adweek Media Plan of the Year Awards - Best Use of Insights
Kargo is an Equal Opportunity Employer. We are committed to building an inclusive and diverse workplace where all employees and applicants are treated with respect and dignity. We do not discriminate on the basis of race, color, ethnic origin, religion or belief, sex, sexual orientation, gender identity or expression, age, disability, marital or family status, national origin, veteran status, or any other characteristic protected by applicable local, state, or federal law. All qualified applicants will receive consideration for employment.
Pursuant to applicable fair chance laws, including the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Kargo will consider qualified applicants with arrest and conviction records for employment.