✨ About The Role
- The Research Engineer will be responsible for designing and implementing scalable solutions to ensure AI alignment with human intent.
- Responsibilities include developing evaluations for model capabilities and alignment properties that are subjective and context-dependent.
- The role involves designing evaluations to reliably measure risks and alignment with human values.
- The candidate will build tools and evaluations to study model robustness in various situations.
- Designing experiments to understand how alignment scales with compute, data, and adversarial resources is a key task.
- The position requires designing new Human-AI interaction paradigms and scalable oversight methods.
- The Research Engineer will also train models to be calibrated on correctness and risk.
- The role is based in San Francisco, CA, with a hybrid work model of three days in the office per week.
⚡ Requirements
- The ideal candidate will have a PhD or equivalent experience in fields such as computer science, computational science, or cognitive science.
- Strong engineering skills, particularly in designing and optimizing large-scale machine learning systems, are essential for success in this role.
- A deep understanding of alignment algorithms and techniques is crucial for effectively contributing to the team.
- The candidate should be a team player, willing to take on a variety of tasks in support of team objectives.
- Experience in developing data visualization or data collection interfaces using languages like TypeScript or Python is highly valued.
- A passion for working in fast-paced, collaborative, and cutting-edge research environments will help the candidate thrive.
- The successful individual will focus on creating AI models that are trustworthy, safe, and reliable, especially in high-stakes scenarios.