
Research Engineer / Scientist, Safety Reasoning

Develop innovative machine learning techniques to enhance the safety understanding of foundational models.
San Francisco, California, United States
Senior
1 week ago

✨ About The Role

- Develop innovative machine learning techniques to enhance the safety understanding and capabilities of foundational models.
- Conduct applied research to improve models' reasoning about human values, morals, ethics, and cultural norms.
- Refine AI moderation models to detect and mitigate patterns of AI misuse and abuse.
- Collaborate with policy researchers to adapt and iterate on content policies.
- Experiment with a range of research techniques, including reasoning, architecture, data, and multimodal approaches.

⚡ Requirements

- Over 5 years of research engineering experience, with strong proficiency in Python or similar programming languages.
- Deep understanding of AI safety topics such as reinforcement learning from human feedback (RLHF), adversarial training, and robustness.
- Excitement about OpenAI's mission to build safe and beneficial AGI, in alignment with the company's charter and values.
- Experience working with large-scale AI systems and multimodal datasets is advantageous.
- Strong enthusiasm for enhancing the safety of cutting-edge AI models for real-world applications.