✨ About The Role
- Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks
- Build (and then continuously refine) evaluations of frontier AI models that assess the extent of identified risks
- Design and build scalable systems and processes that can support these kinds of evaluations
- Contribute to refining risk management practices and to developing "best practice" guidelines for AI safety evaluations
⚡ Requirements
- Experienced in ML research engineering, ML observability and monitoring, building large language model-enabled applications, and/or another technical domain applicable to AI risk
- Passionate and knowledgeable about short-term and long-term AI safety risks
- Able to think outside the box and bring a robust "red-teaming mindset"
- Able to operate effectively in a dynamic, extremely fast-paced research environment, and to scope and deliver projects end-to-end
- First-hand experience red-teaming systems, whether computer systems or otherwise