Specialist, Data Engineer
The Specialist, Data Engineer is a fully competent individual contributor within McKesson's Decision Intelligence organization, responsible for designing, building, and maintaining scalable data pipelines and analytics solutions. This role operates independently with limited direction and partners closely with cross-functional teams to deliver reliable, high-quality data products that enable business insights.
Design, build, and maintain batch and near-real-time data pipelines supporting analytics and reporting use cases.
Translate business and analytical requirements into technical data engineering solutions.
Develop, optimize, and maintain data transformations, models, and datasets for downstream consumption.
Write and maintain complex SQL queries and Python-based data processing logic.
Collaborate with Data Architects, Product Managers, Analysts, and Data Scientists to support analytics and ML initiatives.
Ensure data quality, reliability, and performance across data pipelines and platforms.
Implement testing, monitoring, and basic observability for data workflows.
Troubleshoot data issues and support production data pipelines as needed.
Contribute to documentation, standards, and best practices within the data engineering team.
Follow enterprise data, security, and governance standards in all solutions.
Strong hands-on experience building and supporting data pipelines in enterprise environments.
Ability to work independently on complex tasks with minimal supervision.
Solid understanding of data modeling, ETL/ELT patterns, and analytics enablement.
Experience partnering with technical and business stakeholders to deliver data solutions.
Proficiency in SQL and Python.
Hands-on experience with modern data platforms and tools such as:
- Databricks, Snowflake, Azure Data Factory
- PySpark, analytical SQL
- Power BI and/or Tableau
- Apache Airflow and/or dbt
Experience working with structured and semi-structured data.
Familiarity with cloud platforms (Azure preferred).
Understanding of data quality, testing, and performance optimization concepts.
Degree or equivalent, typically with 4+ years of relevant experience.
Experience supporting advanced analytics or data science workloads.
Exposure to real-time or event-driven data pipelines (e.g., Kafka, Azure Event Hubs).
Familiarity with data governance, metadata, or lineage tools.
Experience in large, matrixed enterprise environments.