
Research Engineer, Safety Reasoning

Design and build a robust pipeline for data management, model training, and deployment.
San Francisco Bay Area
Senior
$245,000 - $440,000 USD / year
4 months ago

✨ About The Role

- Conduct applied research to improve foundation models' ability to reason about human values, morals, ethics, and cultural norms
- Develop and refine AI moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse
- Work with policy researchers to adapt and iterate on content policies to prevent harmful behavior effectively
- Contribute to research on multimodal content analysis to enhance moderation capabilities
- Design and experiment with an effective red-teaming pipeline to examine the robustness of harm-prevention systems and identify areas for improvement

⚡ Requirements

- Experienced research engineer with a minimum of 5 years in the field, proficient in Python or a similar language
- Thrives when working with large-scale AI systems and multimodal datasets
- Proficient in AI safety topics such as RLHF, adversarial training, robustness, and fairness & bias
- Enthusiastic about AI safety and dedicated to enhancing the safety of cutting-edge AI models for real-world use
- Aligned with OpenAI's mission of building safe, universally beneficial AGI
Engineering
About OpenAI
Building artificial general intelligence