✨ About The Role
- The role involves designing and implementing experiments for collective alignment research at OpenAI
- Responsibilities include writing performant and clean code for ML training, running and analyzing ML experiments, and collaborating with a small team to balance flexibility and stability in research projects
- The position requires understanding the high-level research roadmap to plan and prioritize future experiments
- Tasks include implementing experiments to measure the effectiveness of different preference learning techniques (a minimal illustrative sketch follows this list), examining how aggregation methodologies affect model behavior, and curating large datasets for investigation
- The role also involves exploring methods to understand and predict model behaviors, as well as designing novel approaches for using LLMs in research on democratic inputs to AI
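To make the preference learning work mentioned above more concrete, here is a minimal sketch (not OpenAI code) of a Bradley-Terry style pairwise preference loss, the kind of objective commonly used when training a reward model on human preference data. All names here (`RewardModel`, the `chosen`/`rejected` tensors, the embedding dimension) are hypothetical illustrations, not details from the posting.

```python
# Hypothetical sketch of a pairwise preference learning objective.
# Not taken from the role description; names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy scalar reward head over precomputed response embeddings."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One scalar reward per response embedding.
        return self.score(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry loss: push reward(chosen) above reward(rejected)."""
    margin = model(chosen) - model(rejected)
    return -F.logsigmoid(margin).mean()

# Toy usage: a batch of 32 preference pairs with 128-dim embeddings.
model = RewardModel()
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)
loss = preference_loss(model, chosen, rejected)
loss.backward()
```

Experiments of the sort described in the role might compare objectives like this one against alternative preference learning or aggregation schemes and measure their downstream effect on model behavior.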
⚡ Requirements
- The ideal candidate is passionate about OpenAI's mission of building safe, universally beneficial AGI and is aligned with the company's charter
- Strong engineering skills with a desire to push the boundaries of what state-of-the-art language models can achieve
- Curious about sociotechnical challenges related to aligning and understanding ML models, and motivated to address these challenges
- Thrives in fast-paced, collaborative, and cutting-edge research environments
- Experience implementing ML algorithms (e.g., PyTorch) and developing data visualization or data collection interfaces (e.g., JavaScript, Python)