
Sr. Staff AI Engineer, GenAI Safety

Shape and implement LinkedIn's generative AI safety standards and systems
Sunnyvale, California, United States
Expert
$191,000 – $315,000 USD / year
LinkedIn

Provides professional networking, recruiting, and career development tools connecting individuals and organizations through an online business-focused social platform.

Sr. Staff AI Engineer, GenAI Safety

At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. This role will be based in Sunnyvale, CA.

The Generative AI (GenAI) Safety team sits at the heart of LinkedIn's Responsible AI & Governance (RAI‑G) organization, with a mission to set the gold standard for AI safety across all AI applications company‑wide. We ensure that every generative AI product is developed and deployed responsibly, ethically, and securely. By combining rigorous governance with cutting‑edge ML research, we identify and mitigate risks such as bias, hallucination, misuse, and privacy leakage.

As both the AI Safety Research team and the central AI safety engineering function, we build safety guardrails, evaluation pipelines, and alignment techniques that enable safe innovation at scale. Our work is foundational to the company's AI strategy and influences standards across the industry. We partner closely with Legal, Compliance, AI Infrastructure, and Product teams to embed safety into every stage of the AI lifecycle.

Responsibilities

  • Drive GenAI Safety Strategy: Serve as the senior technical leader shaping the company's generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system‑level protections.
  • Lead AI Safety Research & Innovation: Guide LinkedIn's research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production‑ready solutions.
  • Design Safety‑First Foundations: Provide architectural leadership for scalable safety systems — benchmarking, red‑teaming, content safety, privacy‑preserving training, and real‑time guardrails — ensuring they are reliable, performant, and deeply integrated into AI infrastructure.
  • Deliver High‑Impact Solutions in Ambiguous Spaces: Tackle LinkedIn's toughest ethical, regulatory, and risk‑driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed.
  • Liaise with Product Engineering: Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development.
  • Cross‑Functional Leadership: Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end‑to‑end alignment across disciplines.
  • Technical Mentorship: Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, drive high-bar execution, and nurture future technical leaders in AI safety.
  • Company‑Wide Impact: Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation.

Basic Qualifications:

  • 2+ years as a Technical Lead, Staff Engineer, Principal Engineer, or equivalent.
  • 5+ years of industry experience in AI or Machine Learning Engineering.
  • BA/BS degree in Computer Science or a related technical discipline, or equivalent practical experience.

Preferred Qualifications:

  • 10+ years of industry and/or research experience in AI/ML delivering impact at scale.
  • PhD in CS/AI/ML or related field (or equivalent research/industry achievements).
  • Expert understanding of Transformers; hands-on experience training, fine‑tuning, distilling/compressing, and deploying LLMs in production.
  • Track record of applying LLMs to recommender systems and language agents.
  • Demonstrated leadership in red‑teaming (manual + automated), safety benchmarking/evaluations, content safety/guardrails, prompt‑injection/jailbreak detection, and abuse/misuse prevention.
  • Experience translating Legal/Compliance requirements (e.g., EU AI Act) into technical controls, including harm taxonomies, model cards, and risk assessments.
  • Proven ability to design safety‑first architectures (evaluation pipelines, moderation services, policy engines, incident response & telemetry) for distributed, real‑time ML systems.
  • Strong understanding of RL (e.g., RLHF/RLAIF, offline/online RL) for language‑based agents, including safety‑aware reward design and feedback loops.
  • Advanced Python and PyTorch; familiarity with TensorFlow.
  • Experience with safety evaluation tooling (e.g., platforms akin to LLUME) and safety datasets/benchmarks.
  • Significant contributions via top‑tier publications (NeurIPS, ICLR, ICML, ACL) and/or impactful open‑source or widely used safety tooling.
  • Proven technical leadership mentoring ~15 engineers, setting direction, and elevating execution quality.
  • Effective liaison with Product Engineering (tracking experiments and venture bets, and aligning safety research to them) and strong collaboration with Legal, Compliance, AI Infra, and Policy.
  • Good to have: Experience with advanced reasoning/planning (e.g., CoT/ToT, self‑reflection, program synthesis, symbolic/neuro‑symbolic methods, search‑augmented reasoning, verification‑aware decoding).

Suggested Skills:

  • GenAI Safety & Risk: Red‑Teaming, Safety Benchmarking/Evaluation, Content Safety & Guardrails, Jailbreak/Prompt‑Injection Detection, Model Cards & Risk Taxonomies, Incident Response & Monitoring
  • AI Modeling: LLMs, Alignment, Reasoning & Planning
  • Reinforcement Learning (RL): RLHF/RLAIF, Reward Design, Feedback Loops, Adaptive Systems
  • Architecture & Platforms: Real‑Time ML Services, Safety Policy Engines, Evaluation Pipelines
  • Technical Leadership: Mentorship, Cross‑Functional Collaboration, Roadmapping, Research Direction
  • Core Tools: Python, PyTorch, Safety Evaluation Tooling

You Will Benefit from Our Culture:

We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.

LinkedIn is committed to fair and equitable compensation practices. The pay range for this role is $191,000 – $315,000. Actual compensation packages are based on a wide array of factors unique to each candidate, including but not limited to skill set, years and depth of experience, certifications, and specific office location. This may differ in other locations due to cost of labor considerations. The total compensation package for this position may also include annual performance bonus, stock, benefits and/or other applicable incentive compensation plans. For additional information, visit: https://careers.linkedin.com/benefits.

Equal Opportunity Statement

We seek candidates with a wide range of perspectives and backgrounds and we are proud to be an equal opportunity employer. LinkedIn considers qualified applicants without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.

LinkedIn is committed to offering an inclusive and accessible experience for all job seekers, including individuals with disabilities. Our goal is to foster an inclusive and accessible workplace where everyone has the opportunity to be successful.

If you need a reasonable accommodation to search for a job opening, apply for a position, or participate in the interview process, connect with us at accommodations@linkedin.com and describe the specific accommodation requested for a disability-related limitation.

Reasonable accommodations are modifications or adjustments to the application or hiring process that would enable you to fully participate in that process. Examples of reasonable accommodations include but are not limited to:

  • Documents in alternate formats or read aloud to you
  • Having interviews in an accessible location
  • Being accompanied by a service dog
  • Having a sign language interpreter present for the interview

A request for an accommodation will be responded to within three business days. However, non-disability related requests, such as following up on an application, will not receive a response.

San Francisco Fair Chance Ordinance

Pursuant to the San Francisco Fair Chance Ordinance, LinkedIn will consider for employment qualified applicants with arrest and conviction records.

Pay Transparency Policy Statement

As a federal contractor, LinkedIn follows the Pay Transparency and non-discrimination provisions described at this link: https://lnkd.in/paytransparency.

Global Data Privacy Notice for Job Candidates
