✨ About The Role
- Design, develop, and maintain scalable data pipelines and architectures that support the company's mission.
- Implement data ingestion, transformation, and storage solutions while ensuring data quality and integrity.
- Collaborate with teams across the company to meet data requirements and optimize workflows for performance and efficiency.
- Monitor, troubleshoot, and resolve issues in data pipelines and related systems.
- Document data processes, architecture, and workflow procedures to keep operations clear and consistent.
⚡ Requirements
- 5+ years of proven experience in data engineering or a similar role, with a strong technical background.
- Proficiency in SQL and experience with both relational and time-series databases.
- Hands-on experience with big data technologies such as Hadoop, Spark, and Kafka.
- Strong programming skills in Python and familiarity with tools such as Databricks and Grafana.
- Excellent problem-solving skills and attention to detail for monitoring and troubleshooting data pipelines.
- Strong communication and collaboration skills for working with software engineers, data scientists, and other stakeholders.