Principal Engineer - Data and AI
You'll join the Data team to drive innovation and deliver impactful solutions that power our business. As a Principal Data Engineer, you'll move beyond incremental improvements to lead foundational projects. You will provide hands-on leadership to shape the future of Pleo's data architecture.
While embedded in the Data Platform Group, you'll work directly with the VP of Data, contributing as a key member of the data leadership team. You'll collaborate extensively with Product, Engineering and Business teams to ensure our data solutions directly drive product innovation and business outcomes. You'll help shape the future of Pleo's data landscape: supporting multiple teams, driving strategic projects, and staying hands-on with cutting-edge technology.
To put things in context, we've listed below examples of previous or upcoming projects we expect Principal Data Engineers to work on so you can get a better idea of what you might be doing:
- Data catalog and data literacy: Spearhead the implementation of a new data catalog to empower the entire organization with accessible, high-quality data.
- Data platform & warehousing strategy: Lead the revamp of our data infrastructure and our dbt architecture to ensure they can scale for the future.
- AI engineering strategy: Support our Product & Engineering teams in structuring and building new AI platform tools for internal & external usage.
- Community of Practice: Foster and grow our internal data community, establishing best practices and mentoring technical peers across the company. This could include upskilling the team on engineering best practices, coaching colleagues on complex data pipelines or jumping in to debug things where needed.
In addition to this, you can expect to work with the following tech stack at Pleo:
- We run a hybrid infrastructure, working with both AWS and GCP as well as Kubernetes.
- We of course work with Python and SQL, alongside Airflow for orchestration and Kafka for event-driven processing.
- Our data warehouse currently runs on BigQuery, serving dbt and Looker.
We're looking for a strategic thinker who can translate complex business challenges into scalable data solutions. You will thrive in this role if you have:
- High proficiency in Python with a proven track record of delivering high-quality data pipelines and products at scale.
- Experience in designing, building, maintaining and scaling data platforms, across both the AWS and GCP ecosystems.
- Experience in building data products for analytics purposes, with a focus on AI/ML applications.
- Expertise in designing systems and data architecture serving platform, product and analytics purposes.
- The ability to inspire and act as a trusted mentor who builds strong partnerships.
- The ability to innovate with pragmatism, actively contributing to a culture of continuous learning by seeking and sharing knowledge.
This role is for you if:
- You are a passionate technologist who loves balancing high-level strategy with hands-on execution. You enjoy coding and ensure you still spend some of your time debugging, managing or supervising complex data pipelines.
- You are excited about advancements in AI and data engineering best practices.
- You want to own and shape Pleo's data architecture (not just one piece of it), and help deliver it.
- You excel at mentoring and empowering other data professionals, helping them grow their careers.
- You have significant experience working in data platform teams with a proven track record in implementing and fostering DataOps practices.
This role may not be for you if:
- You want to focus on architecture and strategy only. This is part of the role, but not all of it - we expect you to be hands-on too when needed.
- You prefer working on a single project for a long time rather than dipping in and out of various strategic initiatives.
- You lack experience in building complex data pipelines or designing data models at the organisation level.
- You are eager to focus on big data ecosystems (we don't operate with very large volumes of data).
- Your experience is focused around consulting projects instead of long-term platforms.
The annual salary for this position varies based on your location:
- United Kingdom: £136,595 - £142,120
- Spain / Portugal: €156,000 - €172,000
Please note that we are unable to offer visa sponsorship for this role in any of the listed locations, so you will need a valid right to work. You will, however, be able to work remotely, hybrid or in office.
We're happy to share more about our approach to pay and this range during your first call with us!
The package
- Your own Pleo card (no more out-of-pocket spending!)
- Lunch is on us for your work days - enjoy catered meals or receive a lunch allowance based on your local office
- Comprehensive private healthcare - depending on your location, coverage options include Vitality, Alan or Médis
- We offer 25 days of holiday + your public holidays
- For our team, we offer both hybrid and fully remote working options
- Option to purchase 5 additional days of holiday through salary sacrifice
- We use MyndUp to give our employees free access to mental health and well-being support, with great success so far
- Paid parental leave - we want to support families and help you feel that you never have to compromise your family for work
The Interview Process
- Intro call: A 30-minute chat with our Talent Partner to discuss the role and your background.
- Hiring Manager interview: a 60-minute discussion with the hiring manager to dive deeper into your experience and our data vision.
- A code review interview: a 60-minute live session where you'll review real data engineering code alongside our team.
- A system design interview: a 75-minute live session where you'll work on a data architecture case alongside our team.
- A leadership interview: a 45-minute discussion with a senior leader focusing on behavioural skills and values.
Transparency is important to us so we also wanted to share some insights about what we're looking for in applications to ensure you can set yourself up for success!
Last time we hired a Principal Data Engineer, we received a total of 452 applications but only 19 were selected for an intro call. Some of the key reasons why previous candidates didn't make it past the application screening stage include:
- CV writing and content: it was very clear that many of the CVs we saw were generic and AI-generated. There is no issue with leveraging AI to help with CV writing, but there was little indication of the real impact candidates had in their previous experience. You might have heard of the "Achieved X, as measured by Y, by doing Z" formula (credit Laszlo Bock, ~2014); it's a great way to give a clear picture of what you have actually worked on.
- Application care: every single application we receive is reviewed by a human (yes, hundreds of them) because we believe that candidates' efforts should be matched by an equal level of human care. This means that we expect a similar level of attention put into your application. Read and answer the application questions carefully, they make a huge difference in our decision-making process.
- Profile to role fit: there was a misunderstanding about the level of experience / seniority we expect from a principal-level engineer. We received many applications from candidates who had not worked on organisation-wide or equivalent cross-division initiatives, had not been exposed to a data platform environment, or had clearly not touched data pipeline code in years. We've taken great care to write this role description to reflect the reality of the job as accurately as possible, so please read it carefully and highlight on your CV the experience relevant to what we are looking for.