As an AI Research Engineer at Pixalate, you'll bridge the gap between fundamental AI research and production systems that protect the digital ecosystem. Working with our Research team (AFAC), which has uncovered more than $100M in ad fraud and national security threats, you'll have the autonomy to pursue groundbreaking research while seeing your innovations deployed at scale within months, not years. You'll lead research in emerging AI paradigms including autonomous agent systems, test-time compute optimization, and multimodal understanding, all applied to real-world challenges in digital safety and fraud detection.
We're particularly interested in candidates who can demonstrate both theoretical depth and practical implementation skills. Show us how your research can transform the landscape of online trust and safety. Pixalate is an online trust and safety platform that protects businesses, consumers, and children from deceptive, fraudulent, and non-compliant mobile apps, CTV apps, and websites. We're seeking a PhD-level AI Engineer to lead cutting-edge research in agentic AI systems, multimodal analysis, and advanced reasoning architectures that will directly impact millions of users worldwide. Our software and data have been used to unearth multiple high-profile criminal and illegal surveillance cases.
Our team of lawyers, data scientists, engineers, economists, and researchers spans the globe, with a presence in California, New York, Washington DC, London, and Singapore.
Pixalate is an equal opportunity employer committed to building a diverse team. We particularly encourage applications from underrepresented groups in AI research.