✨ About The Role
- Pioneer methodologies and implement systems that reduce AI security and privacy risks during model deployment
- Design, implement, and evaluate novel methods to protect AI models and systems from threats such as data extraction and model inversion attacks
- Collaborate with cross-functional teams to integrate privacy-preserving techniques into AI model development
- Lead efforts in researching and implementing solutions to mitigate risks associated with data poisoning, membership inference attacks, and more
- Work closely with teams to establish security and privacy best practices and guidelines for model deployment
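
The membership-inference attacks mentioned above can be illustrated with a minimal sketch (all data here is synthetic, and the 1-nearest-neighbor "model" is a hypothetical worst case chosen because it memorizes its training set outright):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (real attacks target actual training sets).
X_members = rng.normal(size=(50, 4))      # points the model was "trained" on
X_outsiders = rng.normal(size=(50, 4))    # fresh points the model never saw

# A 1-NN model memorizes its training set, so its "confidence" on a query
# is simply the distance to the closest training point.
def nn_distance(x, train_set):
    return np.min(np.linalg.norm(train_set - x, axis=1))

# Distance-threshold attack: flag a point as a member if the model's
# nearest training point is suspiciously close (here, essentially identical).
tau = 1e-6
tpr = np.mean([nn_distance(x, X_members) < tau for x in X_members])
fpr = np.mean([nn_distance(x, X_members) < tau for x in X_outsiders])
print(f"true-positive rate on members: {tpr:.2f}")   # 1.00 — every member is flagged
print(f"false-positive rate on others: {fpr:.2f}")   # 0.00 — outsiders are not
```

The gap between the two rates is the attacker's advantage; privacy-preserving training (e.g. differentially private optimization) aims to shrink that gap toward zero.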
⚡ Requirements
- Ideal candidate holds a Ph.D. or an equivalent degree in computer science, AI, machine learning, or a related field
- 3+ years of experience in security and privacy research for deep learning models
- Strong understanding of deep learning research and proficient in Python and machine learning frameworks like PyTorch or TensorFlow
- Goal-oriented individual who is comfortable with high-value, detailed work
- Thrives in collaborative work environments and is aligned with OpenAI's mission of building safe AGI