AI Platform Engineer
We are on a mission as a team. We are problem solvers and partners, always starting with our customers to solve their challenges and create opportunities. Our start-up roots keep us nimble, flexible, and moving fast. We take ownership and make decisions. We all work for one company and work together to drive growth across the business. We engage in robust debates to find the best path, and then we move forward as one team. We take pride in what we do, acting with integrity and passion, so that our customers can perform better. We are experts and enthusiasts - combining ever-expanding knowledge with leading technology to consistently deliver results, solutions and opportunities for our customers and stakeholders. Every day we work toward transforming global markets.
The AI Platform Engineer is responsible for the technical implementation, maintenance, and optimization of AI/ML infrastructure. This hands-on role focuses on GPU cluster deployment, container image management, platform tooling development, and deep technical troubleshooting. In addition, the engineer manages AI-enabled workflow automation tools capable of performing agentic actions, ensuring these systems operate efficiently and securely within a containerized architecture. This includes overseeing the deployment, monitoring, and scaling of automation solutions, as well as maintaining the supporting infrastructure. The engineer serves as the technical backbone of the AI Platform Operations team, translating architectural decisions into working infrastructure, and enabling advanced, automated workflows across the platform.
Responsibilities
- Deploy, configure, and maintain GPU clusters and associated infrastructure
- Design, build, and maintain the workflow automation platform that leverages AI capabilities
- Manage NVIDIA driver versions, CUDA toolkits, and container runtimes
- Build and maintain approved container images with ML frameworks
- Implement monitoring, alerting, and observability for GPU infrastructure
- Develop automation and tooling to improve platform reliability and efficiency
- Provide L2/L3 technical support and vendor escalation for complex issues
- Implement security controls including network policies, RBAC, and secrets management
- Execute change requests and maintain technical documentation
- Respond to and assist in production operations in a 24/7 environment
- Provide technical analysis, resolve problems, and propose solutions
- Provide support to, and coordinate with, developers, operations staff, release engineers, and end-users
- Educate and mentor team members and operations staff
- Participate in a weekly on-call rotation for after-hours support
Knowledge and Experience
- 3+ years in infrastructure engineering, systems administration, or DevOps
- 3+ years of scripting and automation experience (Python, Ansible, GitOps)
- 3+ years hands-on experience with Kubernetes in production
- 2+ years' experience with Linux administration
- Direct experience with GPU infrastructure (NVIDIA preferred)
- 1+ years' experience using CUDA
- 1+ years' experience using MCP (Model Context Protocol) servers
- 1+ years working with workflow/orchestration automation tools
- Experience with enterprise monitoring and observability tools
- Ability to work in a service-oriented team environment
- Project management, organization, and time management skills
- Customer-focused and dedicated to the best possible user experience
- Ability to communicate effectively with both technical and business audiences
- Fluent speaking, reading, and writing in English
Desired Knowledge and Experience
- 1+ years of experience with AI developer toolkits (NVIDIA drivers, CUDA, cuDNN, and NCCL)
- 1+ years of experience with Run:AI, NVIDIA AI Enterprise, or DGX systems
- 1+ years of experience with n8n
- 1+ years of experience with GitHub Actions
Intercontinental Exchange, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to legally protected characteristics.