✨ About The Role
- The role involves owning an ML privacy vertical, focusing on data leakage attacks, sensitive PII detection, and membership inference attacks.
- Close collaboration with the engineering team is essential to turn these algorithms into real-world applications for customers.
- The candidate will generate high-quality synthetic training data, train LLMs, and conduct rigorous evaluation and benchmarking.
- The position is part of a fast-paced team of ML Ph.D.s and builders that puts safety and privacy at the center of AI development.
- The job offers the opportunity to see a direct impact on end customers within weeks rather than years.
⚡ Requirements
- The ideal candidate will have deep domain knowledge in privacy-preserving machine learning.
- Hands-on experience with techniques for attacking or defending the privacy of machine learning models is essential.
- Extensive experience implementing a variety of LLM architectures in real-world applications is required.
- The candidate should be comfortable leading projects end to end and adapting to new findings from the research community.
- Prior projects or research related to LLM privacy are preferred.