Location: May work remotely from anywhere in the U.S. (HQ: Aurora, CO)
Job Duties:
Design, build, and execute data pipelines
Build a configurable ETL framework
Apply engineering best practices within an Agile methodology to deliver high-quality emerging-technology solutions
Analyze and profile data to construct best-in-class data models
Optimize SQL queries to maximize system performance
Provide guidance and support to junior developers
Troubleshoot bugs and data quality issues
Collaborate with departments across the organization
Elicit and document technical requirements
Utilize Agile project management software
Oversee and execute the CI/CD (continuous integration and continuous deployment) pipeline
Education and Experience Required:
Bachelor of Science degree in Data Science, Computer Science, Information Technology, or a closely related technical field
5 years of data engineering experience
Background:
Five (5) years of experience in:
Modern database technologies
Modern programming languages such as Python or Java
SDLC in an Agile environment
APIs, CI/CD, Big Data, data architecture, and governance
Cloud technologies and platforms such as GCP, Docker, Kubernetes, AWS, Snowflake, and Azure
Transforming data using SQL and communicating information through data visualization and reporting
Loading Flow Cytometry Standard (FCS) data
File formats such as CSV, Parquet, and JSON
Ingesting and engineering with large and complex healthcare datasets
Gathering requirements from business and end users and developing technical designs for new systems
Jenkins, GitHub, Jira, and Big Data technologies such as Spark or similar
IDEs such as PyCharm, Eclipse, JBoss, and IntelliJ
Relational databases
Rate of Pay: $150,000 to $160,000 per year
How to Apply: Email Careers@RefinedScience.com with job reference #SDE in the subject line