Design, develop, and maintain efficient, reliable data pipelines and ETL processes using Microsoft Fabric or Databricks for large-scale data processing.
Collaborate with stakeholders to understand business requirements and create data solutions that align with technical and business strategies.
Optimize and improve workflows for data ingestion, transformation, and delivery to ensure high performance at scale.
Implement scalable architectures for big data processing, ensuring data quality, security, and governance best practices.
Develop and maintain reusable code components to ensure consistency and reliability across future workflows.
Monitor and troubleshoot production environments to ensure data systems perform reliably and issues are resolved promptly.
Stay current with emerging technologies, trends, and tools related to cloud-based data engineering platforms.
Required Skills: