✨ About The Role
- The role involves evaluating technologies, developing proofs of concept, and solving technical challenges to propose innovative solutions.
- The candidate will be responsible for building high-quality, scalable, and reliable business applications that delight stakeholders.
- Designing, building, and maintaining scalable streaming and batch data pipelines is a key responsibility (a minimal pipeline sketch follows this list).
- The position requires architecting and maintaining a modern lakehouse platform on AWS-native infrastructure.
- The job includes taking ownership of the scaling, performance, security, and reliability of the data infrastructure.
- The candidate will work in an agile development environment and participate in code reviews.
- Collaboration with remote development teams and cross-functional teams is essential for success in this role.
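
Several of the responsibilities above center on scalable streaming pipelines feeding a lakehouse on AWS. The following is a minimal PySpark Structured Streaming sketch of that shape, assuming a Kafka source and an S3 Parquet sink; the broker address, topic name, schema fields, and bucket paths are illustrative placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Hypothetical event schema; all field names are placeholders.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

# Requires the spark-sql-kafka connector on the classpath at runtime.
spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as an unbounded stream; broker and topic are assumptions.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers the value as bytes; parse the JSON payload into columns.
events = raw.select(
    from_json(col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Write partitioned Parquet to S3; the checkpoint gives the file sink
# exactly-once output semantics across restarts.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/events/")  # placeholder bucket
    .option("checkpointLocation", "s3a://example-bucket/_checkpoints/events/")
    .partitionBy("event_type")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```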
⚡ Requirements
- The ideal candidate will have over 3 years of experience in designing, developing, and maintaining data engineering and BI solutions.
- A strong background in data modeling for big data solutions is essential for success in this role.
- Proficiency in Spark and Spark Structured Streaming, particularly with PySpark or Scala Spark, is required.
- The candidate should have experience with analytical database technologies such as Amazon Redshift or Trino.
- Familiarity with BI solutions like Looker, Power BI, Amazon QuickSight, or Tableau is a significant advantage.
- The successful applicant will be self-driven and possess strong communication skills, with the ability to lead and mentor junior engineers.
- Experience in performance-tuning complex data warehouses and queries is crucial for this position (a minimal tuning sketch follows this list).
- Knowledge of Kafka and related technologies will be considered a valuable asset.
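
To make the warehouse-tuning requirement above concrete, here is a minimal PySpark batch sketch of two common tuning moves: broadcasting a small dimension table to avoid a shuffle join, and repartitioning on the partition column before writing to reduce small files. All paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("daily-enrichment").getOrCreate()

# Placeholder inputs: a large fact table and a small dimension table.
facts = spark.read.parquet("s3a://example-bucket/events/")
dims = spark.read.parquet("s3a://example-bucket/dim_event_type/")

# Broadcast the small side so the join runs map-side, skipping the
# shuffle that a sort-merge join of the large table would trigger.
enriched = facts.join(broadcast(dims), "event_type")

# Repartition on the partition column before writing so each output
# partition is produced by few tasks, avoiding many tiny files.
(
    enriched.repartition("event_type")
    .write.mode("overwrite")
    .partitionBy("event_type")
    .parquet("s3a://example-bucket/enriched/")
)
```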