
Principal Engineer Machine Learning (MLOps DLP Detection)

Design and implement production-grade MLOps pipelines and deploy scalable DLP detection models.
San Francisco Bay Area
Expert
$175,000 – $220,000 USD / year
Palo Alto Networks

About Palo Alto Networks

Provides advanced cybersecurity platforms and cloud-delivered security services to protect networks, endpoints, and applications across hybrid environments.

Principal Engineer Machine Learning (MLOps DLP Detection)

We are looking for a Principal MLOps Engineer to lead the design, development, and operation of production-grade machine learning infrastructure at scale. In this role, you will architect robust pipelines, deploy and monitor ML models, and ensure reliability, reproducibility, and governance across our AI/ML ecosystem. You will work at the intersection of ML, DevOps, and cloud systems, enabling our teams to accelerate experimentation while ensuring secure, efficient, and compliant deployments.

This role is based at our dynamic Santa Clara, California headquarters campus and requires working in the office 3 days a week. It is not a remote role.

Your impact includes:

  • End-to-End ML Architecture and Delivery Ownership: Architect, design, and lead the implementation of the entire ML lifecycle. This includes ML model development and deployment workflows that seamlessly transition models from initial experimentation/development to complex cloud and hybrid production environments.
  • Operationalize Models at Scale: Develop and maintain highly automated, resilient systems that enable the continuous training, rigorous testing, deployment, real-time monitoring, and robust rollback of machine learning models in production, ensuring performance meets massive scale demands.
  • Ensure Reliability and Governance: Establish and enforce state-of-the-art practices for model versioning, reproducibility, auditing, lineage tracking, and compliance across the entire model inventory.
  • Drive Advanced Observability & Monitoring: Develop comprehensive, real-time monitoring, alerting, and logging solutions focused on deep operational health, model performance analysis (e.g., drift detection), and business metric impact.
  • Champion Automation & Efficiency: Act as the primary driver for efficiency, pioneering best practices in Infrastructure-as-Code (IaC), sophisticated container orchestration, and continuous delivery (CD) to reduce operational toil.
  • Collaborate and Lead Cross-Functionally: Partner closely with Security Teams and Product Engineering to define requirements and deliver robust, secure, and production-ready AI systems.
  • Lead MLOps Innovation: Continuously evaluate, prototype, and introduce cutting-edge tools, frameworks, and practices that fundamentally elevate the scalability, reliability, and security posture of our production ML operations.
  • Optimize Infrastructure & Cost: Strategically manage and optimize ML infrastructure resources to drive down operational costs, improve efficiency, and reduce model bootstrapping times.

Your experience includes:

  • 8+ years of software/DevOps/ML engineering experience, including at least 3+ years focused specifically on advanced MLOps, ML platform, or production ML infrastructure, and 5+ years of experience building ML models
  • Deep expertise in building scalable, production-grade systems using strong programming skills (Python, Go, or Java).
  • Expertise in leveraging cloud platforms (AWS, GCP, Azure) and container orchestration (Kubernetes, Docker) for ML workloads.
  • Proven hands-on experience across the ML infrastructure lifecycle, including model serving (TensorFlow Serving, TorchServe, Triton Inference Server/TIS) and workflow orchestration (Airflow, Kubeflow, MLflow, Ray, Vertex AI, SageMaker).
  • Mandatory Experience with Advanced Inferencing Techniques: Demonstrable ability to utilize advanced hardware/software acceleration and optimization techniques, such as TensorRT (TRT), Triton Inference Server (TIS), ONNX Runtime, model distillation, quantization, and pruning.
  • Strong, hands-on experience with comprehensive CI/CD pipelines, infrastructure-as-code (Terraform, Helm), and robust monitoring/observability solutions (Prometheus, Grafana, ELK/EFK stack).
  • Comprehensive knowledge of data pipelines, feature stores, and high-throughput streaming systems (Kafka, Spark, Flink).
  • Expertise in operationalizing ML models, including model monitoring, drift detection, automated retraining pipelines, and maintaining strong governance and security frameworks.
  • A strong track record of influencing cross-functional stakeholders, defining organizational best practices, and actively mentoring engineers at all levels.
  • Unwavering passion for operational excellence and for building highly scalable, secure, mission-critical ML systems.
  • MS/PhD in Computer Science, Data Science, or Engineering