
Principal AI Test Engineer - Prisma Access

Lead AI-driven QA strategy across Prisma Access add-on services and mentor teams
Santa Clara, California, United States
Expert
$147,000 – 237,500 USD / year
yesterday
USA Jobs

Provides a centralized online platform for searching and applying to employment opportunities across the United States.

Principal QA Test Engineer

At Palo Alto Networks®, we're united by a shared mission: to protect our digital way of life. We thrive at the intersection of innovation and impact, solving real-world problems with cutting-edge technology and bold thinking. Here, everyone has a voice, and every idea counts. If you're ready to do the most meaningful work of your career alongside people who are just as passionate as you are, you're in the right place.

In order to be the cybersecurity partner of choice, we must trailblaze the path and shape the future of our industry. This is something our employees work at each day and is defined by our values: Disruption, Collaboration, Execution, Integrity, and Inclusion. We weave AI into the fabric of everything we do and use it to augment the impact every individual can have. If you are passionate about solving real-world problems and ideating beside the best and the brightest, we invite you to join us!

We believe collaboration thrives in person. That's why most of our teams work from the office full time, with flexibility when it's needed. This model supports real-time problem-solving, stronger relationships, and the kind of precision that drives great outcomes.

We are seeking an AI-savvy Principal QA Test Engineer to transform how we test and validate Prisma Access Add-on Services (Remote Browser Isolation, Application Acceleration, Application Security, and Privileged Remote Access). As a technical leader, you will design and implement autonomous QA workflows that leverage AI to achieve unprecedented test coverage, efficiency, and defect detection. You will take ownership of quality outcomes, mentor teams, and drive innovation in how we approach testing at scale.

AI-Driven Testing & Autonomous Workflows

Design and implement autonomous QA workflows using AI agents and LLM-based testing frameworks to achieve continuous, intelligent test execution

Build AI-powered test generation systems that automatically create comprehensive test suites from requirements, specifications, and production telemetry

Develop intelligent test oracles using LLMs to validate complex system behaviors, API responses, and user experiences beyond traditional assertions

Create AI-assisted defect prediction and prevention systems that proactively identify reliability and security risks before they reach production

Implement agentic testing workflows that autonomously explore application state spaces, identify edge cases, and generate regression tests

Quality Engineering & Measurable Outcomes

Own and drive key quality metrics: Defect Containment Effectiveness (DCE >95%) and Customer Found Regression (CFR <30%)

Lead root cause analysis (RCA) for production incidents and customer-found defects, implementing durable fixes that reduce MTTR and defect escape rates

Drive systematic quality improvements through data-driven insights, reducing vulnerability remediation time and production incident frequency

Participate in system design to ensure quality, observability, and testability are built-in throughout the Prisma Access feature development lifecycle

Test Infrastructure & Platform Development

Develop and enhance AI-augmented test infrastructure that enables scalable, flexible, and context-aware testing reflecting real-world deployment scenarios

Build RAG (Retrieval-Augmented Generation) pipelines for test knowledge bases, enabling intelligent test selection and prioritization

Design shared testing platforms and patterns for multi-dimensional testing: functional, scale, performance, resiliency, security, and chaos engineering

Integrate AI models and prompts into CI/CD pipelines for continuous quality assessment and intelligent test orchestration
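To make the "intelligent test selection" responsibility above concrete, here is a minimal, hypothetical sketch in Python. A production pipeline would use embeddings and a vector store as described in the posting; this stand-in uses a simple token-overlap relevance score to rank regression tests against a change description, and all test names and documents shown are invented for illustration.

```python
# Minimal sketch of intelligent test selection: rank tests by relevance
# to a change description. A real RAG pipeline would embed documents and
# query a vector store; token overlap is used here as a simple stand-in.
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase, whitespace-split bag-of-words."""
    return Counter(text.lower().split())

def relevance(change: str, test_doc: str) -> float:
    """Fraction of the test document's tokens that overlap the change."""
    change_tokens, doc_tokens = tokenize(change), tokenize(test_doc)
    overlap = sum((change_tokens & doc_tokens).values())
    return overlap / max(1, sum(doc_tokens.values()))

def select_tests(change: str, test_docs: dict, top_n: int = 2) -> list:
    """Return the top_n test names most relevant to the change."""
    ranked = sorted(test_docs,
                    key=lambda name: relevance(change, test_docs[name]),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical test knowledge base keyed by test name.
tests = {
    "test_rbi_clipboard": "remote browser isolation clipboard copy paste policy",
    "test_app_accel": "application acceleration latency throughput tunnel",
    "test_pra_session": "privileged remote access session recording audit",
}
print(select_tests("fix clipboard policy in remote browser isolation", tests))
```

Swapping `relevance` for an embedding-similarity call is the natural upgrade path; the ranking and selection logic stays the same.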

Technical Leadership & Collaboration

Provide technical leadership in browser security, cloud orchestration, distributed systems, and AI-assisted quality engineering

Mentor and upskill team members on AI/ML testing techniques, prompt engineering for test automation, and modern quality practices

Collaborate with Development, SRE, Product Management, and Technical Marketing to align quality strategy with business outcomes

Lead design discussions and articulate technical trade-offs clearly to cross-functional stakeholders

Continuous Learning & Innovation

Stay current with AI/ML advancements and translate them into practical testing innovations (e.g., agentic workflows, multimodal testing, AI-powered observability)

Experiment with emerging AI testing tools and frameworks; share findings and drive team adoption of proven practices

Leverage customer deployment data and telemetry to enhance test strategies and improve efficacy against customer-found defects (CFD)

Qualifications

Your Experience

10+ years of experience in QA/Test Automation Engineering with demonstrated impact on product quality and team practices

3+ years of hands-on experience with AI/ML technologies, including LLMs, prompt engineering, and AI-assisted development workflows

Proven track record of building autonomous testing systems or AI-powered quality engineering tools

Deep expertise in cybersecurity, cloud networking, or distributed systems testing

Strong proficiency in Python and/or Go for test automation and AI workflow development

Experience with LLM frameworks (LangChain, LlamaIndex, AutoGen) and AI model integration

Expertise in REST API testing, web UI automation (Selenium, Playwright, Puppeteer), and cloud-native application testing

Technical Skills

Hands-on experience with AI-powered test generation, intelligent test selection, and autonomous test execution

Experience building RAG pipelines, vector databases, and knowledge graphs for test intelligence

Strong understanding of prompt engineering, few-shot learning, and fine-tuning for testing use cases

Proficiency with observability platforms (Prometheus, Grafana, Splunk) and log analysis using AI

Experience with cloud providers (AWS, Azure, GCP) and infrastructure-as-code (Terraform, CloudFormation)

Knowledge of microservices architecture, distributed systems testing, and performance optimization

Experience with test management systems (TestRail) and defect tracking (JIRA)

Preferred Experience

Experience with browser security solutions: enterprise browsers, remote browser isolation, browser extensions

Background in building AI agents for software testing or autonomous DevOps workflows

Experience with multi-agent systems and orchestration frameworks

Knowledge of security testing, penetration testing, or vulnerability assessment automation

Contributions to open-source testing frameworks or AI/ML testing tools

Experience measuring and improving quality metrics (DCE, CFR, ADDR, MTTR, defect escape rate)

Education

M.S./B.S. degree in Computer Science, Electrical Engineering, or equivalent military experience required

EXPECTATIONS AT PRINCIPAL LEVEL

Scope & Complexity

Given ambiguous goals ("reduce customer-found defects," "improve test coverage"), you independently define the problem, build comprehensive plans, and deliver measurable outcomes

Own service-level quality outcomes across multiple Prisma Access add-on services

Lead multi-team quality initiatives and manage dependencies across Engineering, Product, and SRE

Technical Depth

Anticipate subtle failures in complex, AI-assisted systems; proactively audit for security gaps and reliability risks

Design test architectures that span multiple services and define API testing contracts

Integrate AI context pipelines into testing workflows; tune models and prompts for accuracy and latency

Leadership & Influence

Mentor junior and mid-level engineers on AI testing practices and quality engineering

Lead adoption of new testing practices (e.g., agentic workflows, AI-assisted RCA) and measure impact

Collaborate effectively across teams and influence peers without authority to align on quality standards

Time Horizon & Impact

Focus extends beyond immediate execution to the operational roadmap and lifecycle of testing services

Drive improvements that compound over multiple release cycles, measurably improving quality metrics quarter over quarter

Compensation Disclosure

The compensation offered for this position will depend on qualifications, experience, and work location. For candidates who receive an offer at the posted level, the starting base salary (for non-sales roles) or base salary + commission target (for sales/commissioned roles) is expected to be within the annual range listed below. The offered compensation may also include restricted stock units and a bonus.

$147,000.00 - $237,500.00/yr

Our Commitment

We're trailblazers that dream big, take risks, and challenge cybersecurity's status quo.
