Models are what they eat. But a large portion of training compute is wasted on data that are already learned, irrelevant, or even harmful, leading to worse models that cost more to train and deploy.
At DatologyAI, we’ve built a state-of-the-art data curation suite that automatically curates and optimizes petabytes of data to create the best possible training data for your models. Training on curated data can dramatically reduce training time and cost (7-40x faster training, depending on the use case); dramatically increase model performance, as if you had trained on >10x more raw data without increasing the cost of training; and allow smaller models with fewer than half the parameters to outperform larger models while using far less compute at inference time, substantially reducing the cost of deployment. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.
We have raised a total of $57.5M across two rounds, a Seed and a Series A. Our investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, Amazon, and AI visionaries like Geoff Hinton, Yann LeCun, Jeff Dean, and many others who deeply understand the importance and difficulty of identifying and optimizing the best possible training data for models. Our team has pioneered this frontier research area and has the deep expertise in both data research and data engineering necessary to solve this incredibly challenging problem and make data curation easy for anyone who wants to train their own model on their own data.
This role is based in Redwood City, CA. We are in the office 4 days a week.
We are looking for a highly technical, customer-obsessed Forward Deployed AI Engineer (Post Sales) to guide customers through deploying, operating, and adopting DatologyAI’s platform in complex on-prem or hybrid environments. You will become the trusted technical advisor for our most strategic customers, partnering closely with Sales, Research, and Engineering to drive successful deployments and long-term customer value. You'll bridge the gap between our core platform capabilities and the unique requirements of each customer's environment.
This role is ideal for someone who thrives in ambiguity, enjoys solving challenging distributed systems problems, and wants to build both deep relationships and scalable solutions within a fast-moving startup.
Lead customers through onboarding, deployment, and production rollout of DatologyAI’s platform while serving as the technical owner for assigned accounts, driving architecture, execution, and long-term adoption, and owning a tailored technical success plan for each account.
Partner cross-functionally with Sales, Engineering, and Research to translate use-case requirements into actionable technical strategies, support early trials, relay customer feedback, and help shape roadmap priorities.
Guide customers in designing scalable, secure workflows across compute, storage, networking, and distributed systems, and provide ongoing reporting on deployment progress, workload health, and usage metrics, including executive-level updates.
Adapt and optimize DatologyAI’s platform across AWS, GCP, Azure, and on-prem Kubernetes environments, handling provider-specific APIs, storage systems, networking configurations, and compute orchestration—including tuning performance for network topology, storage tiering, and resource allocation in each environment.
5+ years of experience in technical roles involving solution architecture, customer engineering, consulting, or technical program delivery.
Strong background in distributed systems, data infrastructure, and/or on-prem or hybrid compute environments.
Experience working with ML/AI workflows and designing or deploying systems involving Kubernetes, networking, data pipelines, or large-scale backend infrastructure.
Proficiency in Python, SQL, or similar languages, with the ability to contribute to technical conversations and debug customer issues end-to-end.
Experience leading complex technical projects with multiple stakeholders—translating business needs into clear architecture and execution plans.
Deep hands-on experience with multiple cloud platforms (AWS, GCP, Azure) including their compute, storage, networking, and IAM services.
Proven track record of adapting complex distributed systems to run across different infrastructure environments.
Expertise in infrastructure-as-code and configuration management for multi-environment deployments.
Willingness to travel to customer sites as needed to support critical deployments and customer engagements.
At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The on-target earnings (OTE) for this position range from $230,000 to $300,000.
The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.
We offer a comprehensive benefits package to support our employees' well-being and professional growth:
100% covered health benefits (medical, vision, and dental).
401(k) plan with a generous 4% company match.
Unlimited PTO policy.
Annual $2,000 wellness stipend.
Annual $1,000 learning and development stipend.
Daily lunches and snacks are provided in our office!
Relocation assistance for employees moving to the Bay Area.