✨ About The Role
- The role involves pioneering new frameworks for data extraction and delivery from various sources.
- The engineer will build core technologies to manage high-volume data connectors for customers.
- Responsibilities include crafting a platform for reliable data movement and normalization.
- The engineer will build out operations for the new product in partnership with existing experts.
- The position requires driving improvements in performance and reliability at scale.
⚡ Requirements
- The ideal candidate has over 6 years of software engineering experience, with at least 2 years specifically in Python.
- A strong understanding of data pipeline stacks, such as Postgres, Snowflake, or Kafka, is essential.
- Familiarity with AI and LLM tools, as well as the pipelines that support them, is highly desirable.
- The candidate should have a proven track record of optimizing high-performance applications.
- A user-centric approach to product development and a passion for continuous learning are crucial for success in this role.