
Lead Data Engineer - Data Modeling

Build the data backbone and pipelines to support operational learning across SRE and Support
Plano, Texas, United States
Senior
JPMorgan Chase

Global financial services firm providing investment banking, asset management, commercial banking, and consumer financial products worldwide.


Lead Data Engineer

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorgan Chase within the Enterprise Technology - CTO SRE & Support team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

You are a hands-on engineer with strong data modeling instincts, responsible for building the data backbone of an operational learning capability in a complex support and SRE environment. You will connect and model data from incidents, RCA outputs, problem records, support tickets, customer signals, and related telemetry to surface recurring patterns, identify systemic drivers, and produce actionable handoffs to prevention and readiness teams. The role goes beyond dashboards: it requires workflow-aware data modeling, pragmatic delivery, and comfort working with heterogeneous, imperfect operational data. Partnering closely with leaders across Support, SRE, and Engineering, you will deliver lightweight, durable data products that strengthen institutional learning, improve executive visibility, and enable proactive reliability improvements in a blameless, learning-oriented environment. Success demands hands-on technical depth, comfort with ambiguity, and the judgment to start with minimally sufficient solutions that evolve through use.

Job Responsibilities

  • Design and implement a minimum viable data model that links incident, RCA, problem, ticketing, customer signals, and observability data for the review function.
  • Build and maintain robust pipelines and transformations that expose repeat patterns, operational toil themes, and systemic issue categories across sources.
  • Develop lightweight, workflow-supporting data products that turn operational events into actionable learning and clear handoffs for downstream owners.
  • Partner with support, SRE, and operational leaders to define required data fields, taxonomies, classifications, and handoff structures that make review outputs actionable and measurable.
  • Design mechanisms to distinguish one-off incidents from recurring classes of failure or avoidable demand, enabling detection of recurrence and informed prioritization.
  • Establish practical data quality standards, field definitions, and lightweight governance (e.g., lineage, stewardship, access) for operational learning datasets across multiple sources.
  • Safeguard blameless review practices by ensuring outputs promote learning and improvement rather than punitive reporting; embed blameless learning norms into data and workflow design.
  • Translate loosely defined operational problems into structured datasets, dashboards, and decision-support tools with clear business and engineering value.
  • Document data models, assumptions, transformation logic, and operating procedures to support maintainability, transparency, and long-term scale.
  • Build solutions that can start as manual or semi-manual processes and progressively automate as process maturity grows, integrating with enterprise systems (e.g., ServiceNow, Jira) over time.
  • Create decision-useful reporting, visualizations, and leadership-ready views of repeated high-impact issues, emerging pain themes, action status, and systemic trends, including service health metrics (e.g., MTTD, MTTR), to support prioritization, backlog visibility, ownership/SLA tracking, and escalation of recurring high-impact patterns without creating reporting overhead.

Required Qualifications, Capabilities, and Skills

  • Formal training or certification in data engineering concepts, plus 5+ years of experience in professional data engineering roles in cloud-based environments.
  • Data engineering in operational domains: Proven experience building models and pipelines with SQL/Python across heterogeneous incident, ticketing, RCA, and telemetry sources; comfortable with imperfect or partial data.
  • Data quality and pragmatic governance: Field normalization, standards, and lineage practices that scale across sources without slowing delivery.
  • Blameless workflow design: Ability to design data and workflow outputs that support learning and improvement rather than punitive reporting.
  • Investigative rigor: Ability to reconstruct precise event timelines across systems and maintain strong evidence integrity in operational analyses.
  • Evidence integrity: Experience producing auditable, versioned datasets and reproducible analyses; clearly separates facts, interpretations, and hypotheses in artifacts and reviews.
  • Classification design: Experience designing taxonomies and controlled vocabularies that enable consistent classification and actionability across operational data.
  • Enterprise workflow integration: Integrates with enterprise platforms (e.g., ticketing/incident systems) and defines data fields, handoffs, and action-tracking structures that convert review outputs into owned, trackable work.
  • Incremental delivery mindset: Starts with minimally sufficient solutions and iterates toward greater automation; adapts under pressure and navigates evolving requirements while keeping stakeholders aligned.
  • Structured synthesis: Clear documentation of assumptions and logic; conducts structured, non-leading SME/operator interviews and synthesizes qualitative inputs into structured data.
  • Decision-useful reporting: Builds executive- and operator-facing dashboards and decision-support views tightly linked to prioritization, ownership, governance decisions, and measurable outcomes rather than volume reporting.

Preferred Qualifications, Capabilities, and Skills

  • Direct experience with SRE, incident/problem management, RCA methods and techniques, service health metrics (e.g., MTTD, MTTR), and post-incident reviews.
  • Applied use of LLMs/agents, RAG, anomaly detection, or automated runbooks to accelerate evidence collection, summarization, and action routing in review workflows.
  • Familiarity with structured methods used in high-reliability investigations (e.g., Bowtie/AcciMap/STPA), peer review/checklists, cross-source corroboration, cognitive bias mitigation (e.g., confirmation, hindsight, outcome bias), and evidence-handling practices such as immutable log retention, event timestamping, query capture, and "docket"-style evidence packages suitable for leadership reviews and audits.
  • Experience with modern cloud data platforms and workflow orchestration (e.g., warehouses/lakehouses, streaming, Airflow/Prefect/dbt) and integration with systems like ServiceNow or Jira.
  • Background in financial services or other regulated, large-scale operating models; comfort with data privacy, retention, and access controls.
  • Experience designing metrics and feedback loops to evaluate the impact of corrective actions/safety recommendations and reduce recurrence over time.
  • Certifications/education may include Lean/Six Sigma, SRE, reliability/safety or RCA-focused training, or equivalent practical credentials.