
AI Security Engineer - Red Team

Develop novel red teaming methodologies for multimodal AI systems and autonomous agents
Senior
Lakera

A provider of AI-powered risk management solutions designed to enhance cybersecurity and compliance in digital environments.

We're looking for an AI Security Engineer to join our Red Team and help us push the boundaries of AI security. You'll lead cutting-edge security assessments, develop novel testing methodologies, and work directly with enterprise clients to secure their AI systems. This role combines hands-on red teaming, automation development, and client engagement. You'll thrive in this role if you want to be at the forefront of an emerging discipline, enjoy working on nascent problems, and like both breaking things and building processes that scale.

Key Responsibilities

This is a highly cross-functional position. AI security is still being defined, with best practices emerging in real time. You'll be building the frameworks, methodologies, and tooling that scale our services while staying adaptable to rapid changes in the AI landscape. This role is ideal for someone who wants to take their traditional cybersecurity expertise and apply it to the new frontier of AI security and safety. Your focus will span several key areas:

Service Delivery & Client Engagement

  • Lead end-to-end delivery of AI red teaming security assessment engagements with enterprise customers

  • Collaborate with clients to scope projects, define testing requirements, and establish success criteria

  • Conduct comprehensive security assessments of AI systems, including text-based LLM applications and multimodal agentic systems

  • Author detailed security assessment reports with actionable findings and remediation recommendations

  • Present findings and strategic recommendations to technical and executive stakeholders through report readouts

Tooling & Methodology Development

  • Build upon and improve our established processes and playbooks to scale AI red teaming service delivery

  • Develop frameworks to ensure consistent, high-quality service delivery

  • Find the tedious, repetitive stuff and automate it; you don't need to be a world-class developer, just someone who can build tools that make the team more effective

Research & Innovation

  • Develop novel red teaming methodologies for emerging modalities: image, video, audio, autonomous systems

  • Stay ahead of the latest AI security threats, attack vectors, and defense mechanisms

  • Translate cutting-edge academic and industry research into practical testing approaches

  • Collaborate with our research and product teams to continuously level up our methodologies

Required Qualifications

Technical Expertise

  • 3+ years of experience in cybersecurity with a focus on red teaming, penetration testing, or security assessments

  • Experience with web application and API penetration testing preferred

  • Deep understanding of LLM vulnerabilities including prompt injection, data poisoning, and jailbreaking techniques

  • Practical experience with threat modeling complex systems and architectures

  • Proficiency in developing automated tooling to enable and enhance testing capabilities, improve workflows, and deliver deeper insights

Professional Skills

  • Proven track record of leading client-facing security assessment projects from scoping through delivery

  • Excellent technical writing skills with experience creating executive-level security reports

  • Strong presentation and communication skills for diverse audiences

  • Experience building processes, documentation, and tooling for service delivery teams

AI Security Knowledge

  • Understanding of AI/ML model architectures, training processes, and deployment patterns

  • Familiarity with AI safety frameworks and alignment research

  • Knowledge of emerging AI attack surfaces including multimodal systems and AI agents

Preferred Qualifications

  • Relevant security certifications (OSCP, OSWA, BSCP, etc.)

  • Hands-on experience performing AI red teaming assessments; experience targeting agentic systems is a strong plus

  • Demonstrated experience designing LLM jailbreaks

  • Active participation in security research and tooling communities

  • Background in threat modeling and risk assessment frameworks

  • Previous speaking experience at security conferences or industry events

What You'll Gain

  • Opportunity to shape the future of AI security as an emerging discipline

  • Work with cutting-edge AI technologies and novel attack methodologies

  • Lead high-visibility projects with enterprise clients across diverse industries

  • Collaborate with a world-class research team pushing the boundaries of AI safety

  • A platform to establish thought leadership in the AI security community

  • Competitive compensation package with equity participation

Let's stay connected! Follow us on LinkedIn, Twitter & Instagram to learn more about what is happening at Lakera.

ℹ️ Join us on Momentum, the Slack community for everything AI Safety and Security.

❗To remove your information from our recruitment database, please email privacy@lakera.ai.
