
Research Scientist/Engineer, Model Threat Defense

Develop comprehensive defense strategies for AI model security
Mountain View, California, United States
Veterans Staffing

A platform dedicated to connecting U.S. military veterans with employment opportunities and career development resources.


Google DeepMind Model Defense Scientist

Snapshot

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

As part of the Security & Privacy Research Team at Google DeepMind, you will take on a holistic role in securing our AI assets. You will both identify unauthorized distillation attempts and actively harden our models against distillation. This is a unique opportunity to contribute to the full lifecycle of defense for the Gemini family of models. You will be at the forefront, detecting threats in the wild and building resilience into our models.

Key Responsibilities

  • Research Defense Strategies
  • Deploy Detection & Mitigation Systems
  • Evaluate Impact
  • Collaborate and Publish

About You

We are looking for a creative and rigorous research scientist, research engineer, or software engineer who is passionate about advancing the critical field of model defense. You thrive in ambiguity and are comfortable working across the spectrum of security, from thinking like an adversary to building proactive protections. You are driven to build robust systems that protect the future of AI development.

Minimum qualifications:

  • Ph.D. in Computer Science or a related quantitative field, or a B.S./M.S. in a similar field with 2+ years of relevant industry experience.
  • Demonstrated research or product expertise in a field related to model security, adversarial ML, post-training, or model evaluation.
  • Experience designing and implementing large-scale ML systems or counter-abuse infrastructure.

Preferred qualifications:

  • Deep expertise in one or more of the following areas: model distillation, model stealing, security, memorization, Reinforcement Learning, Supervised Fine-Tuning, or Embeddings.
  • Proven experience in Adversarial Machine Learning, with a focus on designing and implementing model defenses.
  • Strong software engineering skills and experience with ML frameworks like JAX, PyTorch, or TensorFlow.
  • A track record of landing research impact or shipping production systems in a multi-team environment.
  • Current or prior US security clearance.

The US base salary range for this full-time position is $166,000 - $244,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
