
Research Engineer, Preparedness

Build and refine evaluations to assess risks of frontier AI models.
San Francisco, California, United States
Mid-Level
1 week ago

✨ About The Role

- The role involves identifying emerging AI safety risks and developing new methodologies to explore their impacts.
- The candidate will build and continuously refine evaluations of frontier AI models to assess identified risks.
- They will design and implement scalable systems and processes to support these evaluations.
- The position includes contributing to the refinement of risk management practices and best practice guidelines for AI safety evaluations.
- The job requires close collaboration with various teams to ensure comprehensive AGI preparedness.

⚡ Requirements

- The ideal candidate is passionate and knowledgeable about both short-term and long-term AI safety risks.
- They should possess a robust "red-teaming mindset" and be able to think creatively to address challenges.
- Experience in machine learning research engineering and observability is essential for this role.
- The candidate should be capable of operating effectively in a dynamic and fast-paced research environment.
- Strong project management skills are necessary to scope and deliver projects from start to finish.