Streaming Data Platform Engineer
The Streaming Data Platform team is responsible for building and managing complex stream-processing topologies using the latest open-source technologies, building metrics and visualizations on the generated streams, and creating varied data sets for different forms of consumption and access patterns. We're looking for a seasoned Staff Software Engineer to help us build and scale the next generation of streaming platforms and infrastructure at Fanatics Commerce.
Responsibilities
- Build data platforms and streaming engines that support both real-time and batch processing
- Optimize existing data platforms and infrastructure while exploring other technologies
- Provide technical leadership to the data engineering team on how to store and process data more efficiently and quickly at scale
- Build and scale stream & batch processing platforms using the latest open-source technologies
- Work with data engineering teams and help with reference implementation for different use cases
- Improve existing tools to deliver value to the users of the platform
- Work with data engineers to create services that ingest data from and supply data to external sources, while ensuring data quality and timeliness
Qualifications
- 8+ years of software development experience, including at least 3 years working with open-source big data technologies
- Knowledge of common design patterns used in Complex Event Processing
- Knowledge of streaming technologies such as Apache Kafka, Kafka Streams, KSQL, Spark, and Spark Streaming (see the streaming sketch after this list)
- Proficiency in Java and Scala
- Strong hands-on experience in SQL, Hive, Spark SQL, data modeling, and schema design
- Experience with and deep understanding of traditional relational, NoSQL, and columnar databases
- Experience building scalable infrastructure to support stream, batch, and micro-batch data processing
- Experience using Apache Iceberg as the backbone of a modern lakehouse architecture, supporting schema evolution, partitioning, and scalable data compaction across petabyte-scale datasets (see the Iceberg sketch below)
- Experience using AWS Glue as a centralized data catalog to register and manage Iceberg tables, enabling seamless integration with real-time query engines and improving data discovery across distributed systems (see the Glue catalog sketch below)
- Experience working with Druid, StarRocks, Apache Pinot, or similar engines to power low-latency queries, continuous Kafka ingestion, and fast joins across historical and real-time data (see the Pinot query sketch below)
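To give a flavor of the day-to-day work, here is a minimal streaming sketch: a Spark Structured Streaming job that counts Kafka events per key in one-minute windows. The broker address, topic name, and checkpoint path are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrderEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("order-event-counts")
      .getOrCreate()
    import spark.implicits._

    // Read a stream of events from Kafka (broker and topic are hypothetical).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "order-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS sku", "timestamp")

    // Count events per SKU over one-minute tumbling windows,
    // tolerating up to five minutes of late-arriving data.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"sku")
      .count()

    // Write windowed counts to the console; a production job would
    // sink to a serving store instead.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/order-event-counts")
      .start()
      .awaitTermination()
  }
}
```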
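Likewise, an Iceberg sketch covering the lakehouse responsibilities above: creating a partitioned table, evolving its schema in place, and compacting small files with Iceberg's built-in `rewrite_data_files` procedure. The catalog name `lake`, the warehouse bucket, and the `sales.order_events` table are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object IcebergMaintenance {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("iceberg-maintenance")
      // Iceberg SQL extensions enable partition transforms and CALL procedures.
      .config("spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lake.type", "hadoop")
      .config("spark.sql.catalog.lake.warehouse", "s3://my-bucket/warehouse")
      .getOrCreate()

    // Create a partitioned Iceberg table (names are hypothetical).
    spark.sql("""
      CREATE TABLE IF NOT EXISTS lake.sales.order_events (
        order_id BIGINT,
        sku      STRING,
        ts       TIMESTAMP
      ) USING iceberg
      PARTITIONED BY (days(ts))
    """)

    // Schema evolution: Iceberg adds the column without rewriting data files.
    spark.sql("ALTER TABLE lake.sales.order_events ADD COLUMN channel STRING")

    // Compaction: rewrite small files into larger ones for scan efficiency.
    spark.sql("CALL lake.system.rewrite_data_files(table => 'sales.order_events')")
  }
}
```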
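A sketch of wiring Spark to AWS Glue as the Iceberg catalog, so that tables created by the platform are registered centrally and discoverable by other engines. The catalog name `glue`, the S3 warehouse bucket, and the `sales` namespace are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object GlueCatalogSession {
  def main(args: Array[String]): Unit = {
    // Register an Iceberg catalog backed by AWS Glue.
    val spark = SparkSession.builder()
      .appName("glue-iceberg-catalog")
      .config("spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.glue.catalog-impl",
        "org.apache.iceberg.aws.glue.GlueCatalog")
      .config("spark.sql.catalog.glue.warehouse", "s3://my-data-lake/warehouse")
      .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
      .getOrCreate()

    // Tables in this catalog live in the Glue Data Catalog and are
    // visible to any engine that reads from it (namespace is hypothetical).
    spark.sql("SHOW TABLES IN glue.sales").show()
  }
}
```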
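Finally, a Pinot query sketch: Apache Pinot brokers accept SQL over HTTP, and a serving layer can issue low-latency aggregations against them. The broker host, port, and table name here are hypothetical.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object PinotQuery {
  def main(args: Array[String]): Unit = {
    // Pinot brokers expose a SQL endpoint; host, port, and table
    // are hypothetical placeholders.
    val sql = """{"sql": "SELECT sku, COUNT(*) FROM order_events GROUP BY sku LIMIT 10"}"""
    val request = HttpRequest.newBuilder()
      .uri(URI.create("http://pinot-broker:8099/query/sql"))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(sql))
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body()) // JSON result set from the broker
  }
}
```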
Location
Hybrid work environment with flexibility of 3 days in office and 2 days remote, based out of our San Mateo, Boulder, or Atlanta offices.
Compensation and Benefits
The salary for this position is between $144,000 and $234,000 per year, in compliance with California's salary transparency requirements. This range reflects expected compensation based on qualifications, experience, and location. In addition to salary, we offer a comprehensive benefits package, including:
- Health, dental, and vision insurance
- 401(k) plan with company match
- Paid time off and holidays
- Professional development opportunities
- Flexible work arrangements