Job Description
Excellent data analysis and exploration using T-SQL
Proficiency in Python for ETL development and data wrangling, especially in Databricks
Experience writing automated tests for data pipelines
Strong SQL programming (stored procedures, functions)
Knowledge and experience of data warehouse modelling methodologies (Kimball, dimensional modelling, Data Vault 2.0)
Experience in Azure – one or more of the following: Data Factory, Databricks, Synapse Analytics, ADLS Gen2
Experience in building robust and performant ETL processes
Experience with Azure data products and Microsoft Fabric
Experience using source control and Azure DevOps (ADO)
Awareness of data governance tools and practices (e.g., Azure Purview)
Understanding and experience of deployment pipelines
Excellent analytical and problem-solving skills, with the ability to think critically and strategically
Strong communication and interpersonal skills, with the ability to engage and influence stakeholders at all levels
Always acts with integrity and embraces the philosophy of treating our customers fairly
Analytical: able to arrive at solutions that fit current and future business processes
Effective written and verbal communication
Organisational skills: able to manage and co-ordinate their own workload effectively
Ownership and self-motivation
Delivery focus
Assertive, resilient and persistent
Team oriented
Works well under pressure; highly effective at multi-tasking and juggling priorities