
DE&A - Core - Cloud Data Engineering - Snowflake Data Engineering

Build and publish cross-product data products and semantic layer for self-service analytics
Hyderābād, Telangāna, India / Pune, Mahārāshtra, India
Senior
Posted 15 hours ago
Zensar Technologies

Provides digital transformation, cloud, data engineering, and IT services to enterprises across industries, leveraging innovation and agile delivery models.

Senior Analytics Engineer (Data Products)

Seniority: Senior | Experience: 6 to 10 years | Contract Engagement: Contractor (individual or via agency) | Expected duration: End of 2026 (potential to extend) | Start: May 2026 | Location: Remote | Time zone overlap: EST, IST, and UK as noted below

Contractor Role Details

  • Engagement model: You will partner with the SVP of Data & Product Analytics (business owner), who will set priorities and accept deliverables. You will not have people-management responsibilities.
  • Primary outputs: Cross-product aggregated datasets, a semantic layer people use (metrics plus curated datasets), and clear dataset documentation and definitions. You will partner with the Data Engineer to ship and operate the silver and gold tables, and to make incremental improvements to Unity Catalog (organization, naming, and access).
  • Ways of working: Async-first, with written specs, review cycles, and lightweight live syncs during overlap hours.
  • Access and tooling: Work will be performed in company-managed systems (for example, Databricks and documentation tools) subject to required onboarding and access approvals.

About the Team

You will join the Global Analytics Platforms pillar at Guy Carpenter, a leading reinsurance business within Marsh. The Data and Product Analytics team sits under the SVP of Data & Product Analytics and partners with actuarial users, product owners, and platform engineers in the UK and US. Our Databricks-based platform powers MetaRisk Online insights and supports other analytics and product experiences across the organization. In 2026, we are focused on building shared datasets and a semantic layer that make data reliable, self-service, and reusable across products, including automation and AI use cases.

What You'll Do

  • Partner on Unity Catalog operations: Work with the Data Engineer to keep catalogs and schemas organized, tighten naming conventions, and implement permission patterns that match how actuarial users work. When access or discoverability is broken, you help fix it and document the pattern.
  • Partner to deliver the silver and gold layer: Work with the Data Engineer to design transformation logic, define table and metric definitions, review outputs, and validate results with actuarial users. You will contribute directly (SQL, notebooks, and documentation), but the Data Engineer is the primary owner for production pipelines and releases.
  • Build cross-product aggregated datasets: Implement canonical datasets that join and roll up measures and dimensions across products and lines of business. Optimize for consistent definitions, good performance, and wide reuse.
  • Operate datasets like products: Version key tables, set clear expectations (freshness, completeness, schema stability), and communicate changes before they break downstream workflows. Make it easy for other teams and products to depend on your data.
  • Build and publish the semantic layer: Implement metric definitions, a business glossary, and curated datasets, then publish examples so users can self-serve in Databricks. Iterate based on questions you get and what people actually use.
  • Partner on data contracts and quality checks: Work with the Data Engineer on contracts, schema checks, and lineage so downstream workflows can trust the data. You help define what "good" looks like, add documentation, and support triage when something breaks.
  • Support self-service and answer questions: Publish examples and lightweight documentation so users can query curated data safely (Databricks Genie and Databricks SQL). You are not expected to onboard users one-by-one, but you will answer questions, unblock teams, and incorporate feedback into the datasets.
  • Keep documentation in sync with production: Maintain dataset definitions, column-level documentation, and metric standards as the tables evolve. The goal is a source of truth people actually trust and use.
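To make the "data contracts and quality checks" responsibility concrete, here is a minimal sketch of a contract check, with all table, column, and value names invented for illustration (in the actual role this would likely be expressed as Delta table constraints or pipeline expectations rather than plain Python):

```python
# Minimal sketch of a dataset "contract" check: verify each record in a
# published dataset matches the agreed column names and types before release.
# Column names and sample rows are hypothetical, not from the job description.

EXPECTED_SCHEMA = {
    "product_id": str,
    "line_of_business": str,
    "gross_premium": float,
}

def validate_records(records):
    """Return a list of human-readable violations; an empty list means the
    records satisfy the contract."""
    violations = []
    for i, row in enumerate(records):
        missing = set(EXPECTED_SCHEMA) - set(row)
        if missing:
            violations.append(f"row {i}: missing columns {sorted(missing)}")
        for col, expected_type in EXPECTED_SCHEMA.items():
            if col in row and not isinstance(row[col], expected_type):
                violations.append(
                    f"row {i}: {col} is {type(row[col]).__name__}, "
                    f"expected {expected_type.__name__}"
                )
    return violations

good = [{"product_id": "P1", "line_of_business": "property", "gross_premium": 100.0}]
bad = [{"product_id": "P2", "gross_premium": "100"}]

print(validate_records(good))  # []
print(validate_records(bad))   # two violations: missing column, wrong type
```

Running checks like this before a release is what makes "failures visible before users find them" achievable in practice.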

What We're Looking For

Required

  • 6 to 10 years in data engineering, analytics engineering, or a closely related role.
  • Databricks experience, including hands-on work with Unity Catalog, Delta Lake, and SQL or notebook-based development.
  • Python and SQL at an engineering level. You write production-quality transformation code, not just ad hoc queries.
  • Solid understanding of medallion architecture (bronze to silver to gold) and when to use each layer.
  • Experience building and supporting semantic layers, data catalogs, or self-service data products in production.
  • Track record building shared, cross-domain datasets that are used by multiple teams, not just a single reporting use case.
  • Strong stakeholder management. You can align definitions across product, actuarial, and engineering partners and make practical tradeoffs when requirements conflict.
  • Comfortable with modern engineering workflows: Git-based version control, code review, and basic testing or validation before release.
  • Strong written communication. You will write requirements documents and technical specs, but you are also expected to build and ship the work.
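The medallion architecture mentioned above layers raw (bronze), cleaned and conformed (silver), and business-ready aggregated (gold) data. A hedged sketch of that flow on in-memory records (field names and values are invented; in production this would be Delta tables and Spark, not Python lists):

```python
# Illustrative bronze -> silver -> gold flow. All field names and sample
# rows are hypothetical examples, not from the role description.

bronze = [  # raw ingested rows: duplicated, stringly typed
    {"policy": "A-1", "premium": "100.0", "region": "uk"},
    {"policy": "A-1", "premium": "100.0", "region": "uk"},   # duplicate
    {"policy": "B-2", "premium": "250.5", "region": "us"},
]

def to_silver(rows):
    """Deduplicate and cast types: the cleaned, conformed layer."""
    seen, out = set(), []
    for r in rows:
        key = (r["policy"], r["region"])
        if key in seen:
            continue
        seen.add(key)
        out.append({"policy": r["policy"],
                    "premium": float(r["premium"]),
                    "region": r["region"].upper()})
    return out

def to_gold(rows):
    """Aggregate to business-ready metrics: total premium by region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["premium"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'UK': 100.0, 'US': 250.5}
```

The "when to use each layer" judgment the role asks for is essentially deciding which of these steps a given dataset needs and where downstream consumers should plug in.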

Nice to Have

  • Experience with Databricks Genie or AI-BI features.
  • Familiarity with MCP (Model Context Protocol), LLM tool calling, or AI agent patterns.
  • Background in financial services, insurance, or reinsurance data.

How You'll Work

You will partner closely with the SVP of Data & Product Analytics (business owner) in an async-first model. Strong written communication, proactive flagging of blockers, and ownership of your workstream are essential. You will collaborate with a mid-level Data Analyst and work with UK and US actuarial users and platform engineers to gather requirements and validate outputs. This is not an execution-only engagement. You are expected to propose and document decisions about data structure, naming, and access patterns, with final alignment and sign-off by the business owner.

Success in the first 60 to 90 days looks like:

  • Unity Catalog navigation and access are improved for the highest-value areas (schemas organized, naming consistent, access working as expected), in partnership with the Data Engineer.
  • First set of silver and gold tables is in production, shipped in partnership with the Data Engineer, validated with actuarial users, and being used on a regular basis.
  • Semantic layer is live with core metrics defined, curated datasets published, and example queries in place for self-service.
  • Data contracts and basic validation checks are running for new or changed datasets, implemented in partnership with the Data Engineer, and failures are visible before users find them.
  • At least one cross-product dataset is shipped and used by two or more downstream consumers (for example, a product feature plus a reporting workflow).
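The semantic-layer milestone above centers on shared metric definitions: one place where a metric's business meaning and its computation live together so every consumer gets the same number. A hedged sketch of a minimal metric registry (metric and column names are hypothetical; a real implementation would live in Databricks, not a Python dict):

```python
# Minimal sketch of a metric registry for a semantic layer. Each metric pairs
# a business definition with its computation. Names are invented examples.

METRICS = {
    "total_gross_premium": {
        "description": "Sum of gross premium across all policy rows.",
        "compute": lambda rows: sum(r["gross_premium"] for r in rows),
    },
    "policy_count": {
        "description": "Count of distinct policy identifiers.",
        "compute": lambda rows: len({r["policy_id"] for r in rows}),
    },
}

def evaluate(metric_name, rows):
    """Look up a metric by name and evaluate it over the given rows."""
    return METRICS[metric_name]["compute"](rows)

rows = [
    {"policy_id": "A-1", "gross_premium": 100.0},
    {"policy_id": "B-2", "gross_premium": 250.5},
    {"policy_id": "A-1", "gross_premium": 50.0},  # same policy, second line
]
print(evaluate("total_gross_premium", rows))  # 400.5
print(evaluate("policy_count", rows))         # 2
```

Publishing the `description` strings alongside curated datasets is what turns definitions like these into the business glossary the role describes.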

Hours: The team runs a data workstream sync at 8:45am EST (7:15pm IST) on Monday, Wednesday, and Friday. Availability through approximately 10:00 to 10:30am EST (8:00 to 8:30pm IST) on those days is preferred. Outside of that overlap window, work is self-directed and async.
