You will be part of a multidisciplinary team of experts focused on proactively identifying, simulating, and mitigating threats to AI systems across UHG. You will contribute to strengthening the strategy for adversarial and non-adversarial testing of LLMs, ensuring their robustness, fairness, and security. This role requires a unique blend of deep technical expertise and a forward-looking mindset to anticipate emerging risks in AI technologies.