The Red Hat Ecosystems Engineering group is seeking a Principal Software Integration Engineer to bridge the gap between cutting-edge partner accelerator hardware and Red Hat's open-source software stack. This is a highly technical, hands-on role for an engineer who thrives at the intersection of the Linux kernel, virtualization, Kubernetes, and high-performance networking.
Rather than focusing purely on application logic, you will be responsible for hardware enablement and system-level integration. You will ensure that GPUs, DPUs, and other edge accelerators are seamlessly orchestrated within Red Hat Enterprise Linux, Red Hat OpenShift and Virtualization environments, and Red Hat AI platforms. If you are a problem solver who enjoys debugging the complex layers between physical hardware and virtualized AI workloads, this role is for you.
At Red Hat, our commitment to open-source innovation extends beyond our products: it's embedded in how we work and grow. Red Hatters embrace change, especially in our fast-moving technological landscape, and have a strong growth mindset. That's why we encourage our teams to proactively, thoughtfully, and ethically use AI to simplify their workflows, cut complexity, and boost efficiency. This empowers our associates to focus on higher-impact work, creating smarter, more innovative solutions that solve our customers' most pressing challenges.
Assist with the integration of partner accelerator hardware (GPUs, DPUs) into the Red Hat ecosystem, ensuring that drivers, firmware, and orchestration layers work in harmony
Build and optimize solutions using KVM, QEMU, and libvirt to ensure high-performance hardware pass-through and abstraction
Design and implement robust networking paths using advanced Software Defined Networking (SDN) and virtual networking technologies
Act as the technical bridge between hardware-level drivers and cloud-native platforms like Kubernetes and Red Hat OpenShift
Develop integration patterns and "well-lit paths" for AI workloads, ensuring they meet strict performance and resiliency requirements
Work closely with Product Engineering, Partners, and Customers to root-cause complex issues across the entire stack, from hardware and kernel to hypervisor and container
Create architectural blueprints and implementation guides for field engineers and lighthouse customers
Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling
7+ years of experience in system integration, infrastructure engineering, or specialized DevOps
Deep practical knowledge of server virtualization technology (ESX, Hyper-V, KVM)
Strong understanding of Software Defined Networking (SDN) concepts
Hands-on experience with Kubernetes, Podman, or Docker, specifically regarding how containers consume host resources and hardware
Advanced proficiency in Linux system administration, kernel modules, and hardware-software interfaces
A passion for "how things work together" and the ability to troubleshoot across multiple engineering domains (Network, Storage, Compute)
Ability to work closely with diverse teams and translate complex hardware requirements into software-defined solutions
Excellent system understanding and troubleshooting capabilities
Autonomous work ethic, thriving in a dynamic, fast-paced environment
Proficient written and verbal communication skills in English
The following is considered a plus:
Experience with cloud administration for public cloud services (AWS, GCE, Azure)
Deep practical knowledge of KVM, QEMU, and libvirt
Hands-on experience with advanced GPU and Networking configurations for multi-node AI workload orchestration and performance
Background in DevOps or site reliability engineering (SRE)
Experience with Operators and AI workload deployments (LLM, vLLM, inference, agents)
Recent hands-on experience with distributed computation, either at the end-user or infrastructure provider level
Experience with performance analysis tools