✨ About The Role
- Design and implement systems to detect and prevent abuse, promote user safety, and reduce risk across the platform
- Work closely with engineers and researchers to apply both industry-standard and novel AI techniques to combat abuse and toxic content
- Assist in responding to active incidents on the platform and develop new tooling and infrastructure to address underlying issues
- Architect, build, and maintain anti-abuse and content moderation infrastructure to protect the platform and end users
- Contribute to OpenAI's mission of safely deploying broadly beneficial Artificial General Intelligence (AGI) through responsible AI usage
⚡ Requirements
- Experienced software engineer with 5–8+ years in the industry, particularly in building and maintaining anti-abuse and content moderation infrastructure
- Proficient in setting up and maintaining production backend services and data pipelines
- Collaborative team player with a humble attitude, eager to help colleagues and ensure team success
- Self-directed problem solver who takes ownership of tasks and is willing to learn new skills to accomplish goals
- Strong advocate for AI safety in production environments and skilled in building software systems to defend against abuse