Data Engineer

Design and implement scalable data pipelines for industrial predictive maintenance
Lesser Poland Voivodeship, Poland
Senior
yesterday
ABB

A global leader in power and automation technologies offering innovative solutions for industrial, utility, and infrastructure customers.

Data Engineer

At ABB, we help industries outrun - leaner and cleaner. Here, progress is an expectation - for you, your team, and the world. As a global market leader, we'll give you what you need to make it happen. It won't always be easy, growing takes grit. But at ABB, you'll never run alone. Run what runs the world.

ABB's Service Division partners with our customers to improve the availability, reliability, predictability and sustainability of electrical products and installations. The Division's extensive service portfolio offers product care, modernization, and advisory services to improve performance, extend equipment lifetime and deliver new levels of operational and sustainable efficiency. We help customers keep resources in use for as long as possible, extracting the maximum value from them, and then recovering and regenerating products and materials at the end of their useful life.

We are seeking a skilled and detail-oriented Data Engineer to design and implement robust data infrastructure solutions that enable advanced analytics and AI-driven insights for industrial asset management. This role involves building scalable data pipelines using Microsoft Fabric to consolidate, transform, and model data from multiple heterogeneous sources. The primary objective is to provide reliable, efficient, and scalable access to high-quality data that supports predictive maintenance analytics, risk assessment models, and strategic decision-making. You will be responsible for creating the data foundation that empowers data scientists and analysts to deliver actionable insights for optimizing maintenance strategies and enhancing operational efficiency.

The work model for this role is hybrid.

You will be mainly accountable for:

  • Design, develop, and maintain ETL/ELT pipelines in Microsoft Fabric for ingesting and transforming data from various sources including REST APIs, SQL Server, MuleSoft middleware, Snowflake, and file data sources (JSON, CSV, Excel, etc.).
  • Implement and manage dataflows, data pipelines, and Lakehouse models in Fabric to support advanced analytics and AI model development.
  • Develop and optimize data processing logic using PySpark within Microsoft Fabric notebooks for complex transformations and large-scale data processing tasks.
  • Build and maintain domain-driven data models that support analytics, reporting, self-service BI, and machine learning workflows.
  • Ensure data quality, integrity, and security across the entire data lifecycle, implementing robust data governance practices.
  • Collaborate with data scientists, analysts, software architects, and business stakeholders to understand requirements and deliver fit-for-purpose data solutions.
  • Monitor and troubleshoot pipeline performance, apply best practices in data architecture and performance optimization, and implement improvements as needed.
  • Document data processes, models, and technical decisions to ensure knowledge transfer and maintainability.

Qualifications for the role:

  • Advanced degree in Computer Science, Engineering, Data Science, or a related field (Master's preferred).
  • Proven experience (preferably 3+ years) as a Data Engineer with demonstrated expertise in building production-grade data pipelines and hands-on experience with Microsoft Fabric (Data Factory, Lakehouse, Dataflows).
  • Strong knowledge of ETL/ELT concepts, data pipeline design, and experience integrating data from diverse sources including APIs, databases (SQL Server), Snowflake, MuleSoft, and semi-structured formats.
  • Proficiency in SQL and Python, with experience in data processing frameworks and modern software development practices (Git, CI/CD, automated testing).
  • Familiarity with data modeling, data warehousing, domain-driven design, and experience with cloud platforms, ideally Azure.
  • Knowledge of data governance principles; familiarity with Power BI semantic modeling, Delta Lake, or Synapse Analytics is preferred.
  • Experience with industrial data sources, time-series data, and IoT data streams.
Location: Krakow, Lesser Poland, Poland
Full-time, Regular
Engineering