The Frontier Evals team builds north-star model evaluations to drive progress toward safe AGI/ASI. The team builds ambitious evaluations to measure and steer our models, and creates self-improvement loops that inform our training, safety, and launch decisions. The team's open-source evaluations include SWE-bench Verified, MLE-bench, PaperBench, and SWE-Lancer, and the team built and ran frontier evaluations for GPT-4o, o1, o3, GPT-4.5, ChatGPT Agent, and GPT-5. If you want to experience firsthand the rapid progress of our models, and to steer them toward good, this is the team for you.
We are seeking exceptional research engineers who can push the boundaries of our frontier models in the finance domain. We are looking for people who will help shape AI evaluations of financial reasoning and related capabilities, and who will own individual threads within this endeavor end to end. In this role, you'll:
We expect you to be:
It would be great if you also have:
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act.