✨ About The Role
- The role involves building data pipelines that handle both batch and streaming ETL workloads (a minimal sketch follows this list).
- The candidate will be responsible for implementing efficient solutions to enrich existing data workflows.
- They will build software that serves as the foundation for future systems.
- Identifying and reporting data resilience issues to key stakeholders is a critical part of the job.
- The position requires collaboration with various teams within Arketa to ensure data accuracy and compliance.
- The role offers opportunities for weekly workouts with the team and includes health, dental, and vision coverage.
- Unlimited PTO and sick leave are also part of the benefits package.
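
To make the batch ETL responsibility above concrete, here is a minimal sketch of a daily batch pipeline expressed as an Apache Airflow DAG (Airflow is named in the requirements below). The DAG name, task names, and fields are hypothetical illustrations, not Arketa's actual pipelines, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_batch_etl():
    """Hypothetical daily batch ETL: extract -> enrich -> load."""

    @task
    def extract() -> list[dict]:
        # Placeholder source: a real task might query an application database.
        return [{"member_id": 1, "visits": 3}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Enrichment step: a real task might join reference data or dedupe.
        return [{**row, "active": row["visits"] > 0} for row in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder sink: a real task might write to a warehouse table.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


example_batch_etl()
```

A streaming counterpart in this kind of stack would more typically be an Apache Beam pipeline run on Dataflow, with Airflow orchestrating the batch side.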
⚡ Requirements
- The ideal candidate is an ambitious Data Engineer with a strong desire to grow and work with customer data.
- They should be engineering-driven and able to translate business and data requirements into production-ready data stacks.
- A passion for data is essential, as the candidate will champion data both internally and to customers.
- The successful individual will have experience with batch data processing systems and be confident in providing data-focused solutions.
- Familiarity with stacks such as BigQuery, Dataflow, and Apache Airflow is important for this role.
- Experience with NoSQL databases, particularly Firestore and/or MongoDB, is also a key requirement (see the sketch after this list).
- The candidate should be able to work cross-functionally with all Arketa teams to shape data collection and compliance processes.
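
As a rough illustration of how the stacks above fit together, the sketch below copies documents from a Firestore collection into a BigQuery table using the official Google Cloud Python clients. The project ID, collection name, and destination table are hypothetical placeholders, not anything from this posting.

```python
from google.cloud import bigquery, firestore

PROJECT_ID = "example-project"                    # hypothetical GCP project
DEST_TABLE = f"{PROJECT_ID}.analytics.bookings"   # hypothetical BigQuery table


def firestore_to_bigquery() -> None:
    fs = firestore.Client(project=PROJECT_ID)
    bq = bigquery.Client(project=PROJECT_ID)

    # Stream every document from a hypothetical "bookings" collection,
    # keeping the document ID alongside its fields.
    rows = [
        {**doc.to_dict(), "id": doc.id}
        for doc in fs.collection("bookings").stream()
    ]

    # insert_rows_json uses BigQuery's streaming insert API and returns
    # a list of per-row errors instead of raising on failure.
    errors = bq.insert_rows_json(DEST_TABLE, rows)
    if errors:
        raise RuntimeError(f"BigQuery insert errors: {errors}")


if __name__ == "__main__":
    firestore_to_bigquery()
```

In practice a pipeline like this would also normalize Firestore-specific field types (timestamps, document references) before insertion, since the streaming insert API expects JSON-serializable rows.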