Senior Software Test Engineer
At PNNL, our core capabilities are divided among major departments, referred to as Directorates within the Lab. Each Directorate focuses on a specific area of scientific research or another function and has its own leadership team and dedicated budget.
Our Science & Technology directorates include National Security, Earth and Biological Sciences, Physical and Computational Sciences, and Energy and Environment. In addition, we have the Environmental Molecular Sciences Laboratory, a Department of Energy Office of Science user facility housed on the PNNL campus.
The National Security Directorate (NSD) drives science-based, mission-focused solutions to take on complex, real-world threats to our nation and the world.
The AI and Data Analytics Division, part of NSD, combines profound domain expertise and creative integration of advanced hardware and software to deliver computational solutions that address complex data and analytic challenges. Working in multidisciplinary teams, we connect foundational research to engineering to operations, providing the tools to innovate quickly and field results faster. Our strengths are integrated across the data analytics lifecycle, from data acquisition and management to analysis and decision support.
We are seeking a Senior Software Test Engineer to join PNNL's TestOps team, leading quality engineering efforts that assure the reliability, performance, and mission readiness of innovative systems spanning agentic AI platforms, large-scale data orchestration, and real-time intelligence processing. This is an excellent opportunity for experienced test engineers to lead test strategy and execution across teams, mature automated quality practices, and drive end-to-end system validation across APIs, data pipelines, and production-like environments supporting mission-critical national security applications.
You're a senior test engineer with 5+ years of experience owning test strategies, developing test artifacts (test plans, test cases, and test reports), and building and maintaining scalable test automation. You routinely lead cross-functional quality efforts, translate ambiguous requirements into measurable acceptance criteria, and use risk-based approaches to define release readiness. You're known for strong debugging and systems thinking, and you communicate clearly with both technical and non-technical stakeholders. You mentor others, raise the bar on engineering quality, and help teams ship confidently.
Test Strategy, Planning, and Quality Leadership
- Own and lead end-to-end test strategy for programs and/or multiple components, including scope, coverage goals, environments, release criteria, and quality gates
- Partner with engineering, product, and stakeholders to refine requirements into testable acceptance criteria, traceability, and measurable quality outcomes
- Drive risk-based test planning across functional, integration, system, regression, and performance testing; prioritize quality work based on mission impact
- Produce executive-ready quality summaries (coverage, results, trends, residual risk) and recommend go/no-go decisions with clear rationale
- Establish and improve test standards (definitions of done, test naming/structure, flake management, reporting expectations) across the team
Team Leadership, Mentorship, and Execution
- Lead and coordinate testing efforts for a small team or a complex initiative; align work across developers, DevOps/platform teams, and data science partners
- Mentor Level 1–2 engineers in test design, automation patterns, debugging, and quality communication; provide guidance through reviews and pairing
- Champion a culture of quality: shift-left practices, defect prevention, and pragmatic automation that increases delivery speed and confidence
- Influence roadmaps by identifying quality risks early and proposing mitigation plans (technical and process)
Test Automation, CI/CD, and Engineering Excellence
- Architect and maintain automation across API, UI, integration, end-to-end, and regression layers; emphasize reliability and maintainability
- Implement and standardize automated tests using Cypress.io, Playwright, or similar frameworks; drive down flakiness with robust test design and environment controls
- Integrate test tooling into CI/CD pipelines (e.g., GitLab/GitHub) with reporting, metrics, and enforced quality gates
- Validate workflows across APIs, databases, pipelines, and services using SQL and/or GraphQL where appropriate; ensure automation provides actionable diagnostics
AI/ML and Data-Intensive System Validation (Leadership Focus)
- Lead validation approaches for models, data, and end-to-end AI workflows, including strategies for non-deterministic outputs and regression detection
- Define and operationalize AI quality attributes (accuracy, precision/recall, relevance, bias/fairness, robustness/consistency) and verify guardrails/safety/explainability expectations
- Drive data quality validation (completeness, correctness, drift, representativeness, label quality) and incorporate checks into pipelines and release processes
- Partner with engineers/data scientists to design repeatable evaluation harnesses and automated regression approaches aligned to mission needs
- Work with AI agents/skills and MCP servers to support test automation workflows and system validation at scale
Platform, Cloud, and Reliability Readiness
- Lead quality practices for cloud and containerized deployments; apply strong working knowledge of cloud concepts (AWS/Azure) and container tooling (Docker/Podman, Kubernetes fundamentals)
- Use observability (logs/metrics/traces) to debug failures, validate monitoring coverage, and improve testability and operational readiness
- Drive performance and reliability validation (latency, scalability, stability) and ensure results feed back into design decisions and release gates
Stakeholder Partnership and Continuous Improvement
- Serve as a quality leader, partnering with end users and stakeholders to prototype, configure, refine, verify, and troubleshoot systems so they meet their intended use
- Evaluate and introduce new test tools/technologies; build adoption plans and standards that improve quality outcomes and team efficiency
- Establish quality metrics and trend reporting (defect escape rate, test effectiveness, flake rate, automation ROI) and use them to guide improvements
Collaboration & Professional Growth
- Lead technical discussions and reviews around test strategy, design, and implementation; influence architecture for testability
- Communicate complex technical risk clearly—written and verbal—tailored to engineers, leadership, and mission stakeholders
- Incorporate feedback from defects and incidents to drive preventive actions and measurable improvements across teams and releases
This position is based in Richland, WA or Seattle, WA and requires an onsite presence Monday through Thursday, with Friday as required by business needs.
Minimum Qualifications:
- PhD and 1 year of Software Engineering experience -OR-
- MS/MA and 3 years of Software Engineering experience -OR-
- BS/BA and 5 years of Software Engineering experience -OR-
- AA and 14 years of Software Engineering experience in designing, architecting, programming, deploying, and automating software solutions in support of scientific research or consumer digital product development -OR-
- HS/GED and 16 years of Software Engineering experience in designing, architecting, programming, deploying, and automating software solutions in support of scientific research or consumer digital product development
Preferred Qualifications:
- Degree in Computer Science, Software Engineering, or a related field
- Experience implementing automated tests using Cypress.io, Playwright, or similar testing frameworks
- Experience using AI-assisted development tools within an IDE, such as VS Code, to write automated tests and troubleshoot issues
- Experience with the JavaScript and Python programming languages
- Knowledgeable in using SQL or GraphQL
- Experience developing software test plans, test cases, and test reports
- Knowledge of software engineering best practices and software development lifecycles
- Experience with DevOps and MLOps, including automated tests within CI/CD processes such as GitLab or GitHub
- 1+ years of experience using AI tools (e.g., Cline, Roo Code) within an IDE to write automated tests and/or troubleshoot issues
- Familiarity with AI models and assistants such as Claude and GitHub Copilot; knowledgeable in using MCP servers, AI skills, and AI agents
- Experience in validating models, data, and end-to-end workflows/integrations (APIs, databases, pipelines) using data/model validation plus integration, E2E, and regression testing, including handling non-deterministic outputs and real-world/edge/failure scenarios
- Experience in assessing AI quality attributes (accuracy, precision/recall, relevance, bias/fairness, robustness/consistency)