Location: Remote in USA
Preferred: 6+ years of relevant experience
Key Responsibilities
AWS Data Lake Architecture & Setup
Design and maintain scalable AWS data lakes using:
AWS Serverless Development
Develop serverless applications using:
Monitoring, Logging & Observability
Implement monitoring and alerting using:
Data Engineering & Spark
Develop ETL pipelines using PySpark on AWS Glue or EMR.
Optimize Spark workloads for performance, cost, and scaling.
Troubleshoot distributed data issues and tune job performance.
Python Development
Write clean Python code for Lambda functions, Glue jobs, and automation tools.
Develop shared internal libraries for ETL, monitoring, and data governance.
Implement automated CI/CD pipelines.
DevOps & Cloud Platform Automation
Use Infrastructure-as-Code tools including:
The base salary range for this position is $92,250 - $131,786, plus incentives that align with individual and company performance. Actual salaries will vary based on work location, qualifications, skills, education, experience, and competencies.
Benefits available to eligible employees in this role include medical, dental, and vision insurance; a comprehensive employee assistance program; a 401(k) retirement plan; and paid time off and holidays.
The deadline to apply for this position is 03/02/2026. This position is for an existing, immediate vacancy, and we are seeking a candidate who can start as soon as possible.