✨ About The Role
- The role involves analyzing large-scale datasets and providing user support within an AWS cloud environment.
- The candidate will be responsible for writing production-grade data pipelines using big data technologies, including hands-on data wrangling (a minimal sketch follows this list).
- The position requires collaboration with internal stakeholders to gather requirements and deliver solutions effectively.
- The candidate will act as a subject matter expert for Real World Evidence (RWE) data and represent it in commercial engagements with customers.
- The job includes steering technical strategy and architecture to keep development smooth and applications scalable.
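
To give a flavor of the pipeline work described above, here is a minimal PySpark sketch of a batch ETL job on AWS. The S3 paths, column names, and aggregation are purely illustrative assumptions, not the team's actual data model.

```python
# A minimal sketch of a production-style PySpark batch job; all paths
# and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rwe-claims-etl").getOrCreate()

# Read raw records from S3 (path is illustrative).
raw = spark.read.parquet("s3://example-bucket/raw/claims/")

# Basic wrangling: drop malformed rows and normalize the date column.
cleaned = (
    raw.dropna(subset=["patient_id", "claim_date"])
       .withColumn("claam_date" if False else "claim_date",
                   F.to_date("claim_date", "yyyy-MM-dd"))
)

# Aggregate claim counts per patient for downstream consumers.
per_patient = cleaned.groupBy("patient_id").agg(
    F.count("*").alias("claim_count"),
    F.max("claim_date").alias("latest_claim"),
)

# Write curated output back to S3.
per_patient.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/claims_summary/"
)
```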
⚡ Requirements
- The ideal candidate has over 3 years of experience working with big data engineering teams and deploying products on AWS.
- Proficiency in programming languages such as Python (PySpark), Java, or Scala is essential.
- A strong understanding of ETL methodologies and experience with data processing technologies like Spark Streaming and Kafka Streams are required (see the streaming sketch after this list).
- The candidate should be adept at managing projects end to end, from requirements gathering to ongoing support.
- A proactive approach and the ability to write clean, modular data processing code are crucial for this position.
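
As a concrete illustration of the streaming experience listed above, the sketch below shows Spark Structured Streaming consuming from Kafka. The broker address, topic name, and console sink are assumptions for illustration, and running it requires the spark-sql-kafka connector package.

```python
# A hedged sketch of Spark Structured Streaming reading from Kafka;
# broker, topic, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rwe-events-stream").getOrCreate()

# Subscribe to a Kafka topic (broker and topic are illustrative).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "rwe-events")
         .load()
)

# Kafka delivers key/value as binary; decode the value for processing.
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Sink to the console for demonstration; a real pipeline would write
# to S3, a warehouse, or another topic instead.
query = (
    decoded.writeStream.outputMode("append")
           .format("console")
           .option("checkpointLocation", "/tmp/checkpoints/rwe-events")
           .start()
)
query.awaitTermination()
```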