
Mid-Senior Data Engineer - Remote Eligible

Design and implement scalable data pipelines for Latin American financial datasets
São Paulo
Senior
Belvo

A fintech platform providing an API for accessing and interpreting financial data across Latin American markets.

Senior Data Engineer

We are Belvo, an open finance API platform with the bold vision of democratizing access to financial services in Latin America. We enable any financial innovator to access and interpret financial data, as well as initiate payments from their end-users' accounts. We're turning the messy complexities of the Latin American financial ecosystem into a modern set of tools to access and interpret data and move money in a seamless and secure way.

We're a highly-technical, passionate, and driven team. We are more than 90 people and our team currently represents 20 nationalities. We have offices in São Paulo and Mexico City – while a large portion of us work remotely.

We are tackling a very stimulating problem: connecting fintech innovators with legacy financial infrastructure. We strive to go beyond the limits of what is possible today and to do so in an elegant and developer-first way.

Since starting our adventure in May 2019, we have raised $71m from the leading VC investors globally.

We're looking for a seasoned Senior Data Engineer to join our Data Platform team. The team's goal is to support data understanding at scale by architecting and developing the infrastructure for building data pipelines, moving and transforming complex datasets from different sources, and improving data discoverability and data literacy. The ideal candidate is a team player who is sought out for technical guidance and can act as an owner for projects across the company, and ideally has experience building data infrastructure and familiarity with Data Mesh concepts.

As part of the team, you will be in contact with our stakeholders, from data insights analysts to deeply technical backend product teams, to better define and develop the platform, and you will play a central part in defining the team's roadmap. You will have full ownership of some projects and the opportunity to define new data platform products. The current platform uses modern technologies such as EMR Studio and Apache Iceberg, and as part of the team you will also be responsible for maintaining and evolving it.

Our platform infrastructure is fully defined with Terraform, and we process over a thousand events per second. We run daily processes that read over 40 terabytes of data using dbt over Athena and Spark on EMR clusters, all orchestrated with Dagster. We are moving some of our processes to stream processing using Kinesis and Flink.
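To give a concrete picture of this setup, here is a minimal, illustrative sketch of a Dagster asset that wraps a daily dbt run; the asset name, dbt target, and cron schedule are hypothetical and only stand in for the kind of pipelines the team maintains.

import subprocess

from dagster import Definitions, ScheduleDefinition, asset, define_asset_job


@asset
def daily_dbt_models(context):
    """Run the dbt project as a single Dagster asset (illustrative only)."""
    # "athena" is a hypothetical dbt target name, not one taken from the posting.
    result = subprocess.run(
        ["dbt", "build", "--target", "athena"],
        capture_output=True,
        text=True,
        check=True,
    )
    context.log.info(result.stdout)


# Package the asset as a job and trigger it once a day (cron schedule is illustrative).
daily_dbt_job = define_asset_job("daily_dbt_job", selection="daily_dbt_models")

defs = Definitions(
    assets=[daily_dbt_models],
    jobs=[daily_dbt_job],
    schedules=[ScheduleDefinition(job=daily_dbt_job, cron_schedule="0 6 * * *")],
)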

This position may be for you if:

  • You have at least 2 years of experience with data engineering platforms on the AWS cloud
  • You're fluent in English
  • You're familiar with orchestrators like Dagster or Airflow
  • You have previous experience with dbt
  • You enjoy a good challenge dealing with billions of events
  • You have experience integrating third-party APIs
  • You love to focus on getting things done

Amazing if:

  • You have experience building infrastructure as code for a data platform in the cloud with Terraform
  • You have experience with dbt and Great Expectations
  • You have experience with Spark, either with Scala or Python
  • You have experience with some of these AWS tools: EMR, DMS, Glue, Kinesis, Redshift
  • You have experience with data catalogs and data lineage

Our tech stack:

  • We're building our platform using modern technologies, with a focus on reliability and long-term maintainability
  • We primarily use Python on the backend. We use battle-tested technologies such as Django, and we are heavy users of Python's asyncio for some parts of our stack
  • We use JavaScript, Vue.js, and Sass on the frontend. We are developing and leveraging our own design system and component library
  • We run our infrastructure on top of Amazon Web Services, favoring managed services wherever possible so we can focus on our business problems
  • We observe and monitor our services using Datadog
  • We follow Continuous Integration and Continuous Delivery best practices

Our process steps:

  • People team chat
  • Take-home challenge
  • Challenge presentation
  • Meet the founders

Our perks:

  • Stock options
  • Annual bonus linked to company performance
  • Flexible working hours
  • Remote friendly
  • Pet friendly
  • Health Insurance
  • Paid time off on your birthday
  • Renew your laptop every 3 years
  • Training Budget
  • Team building events
  • Bank holiday swaps within the same month
  • Fitness/wellness stipends
  • Fresh fruit every week, all-you-can-drink tea and coffee
  • Extra days off on your company anniversaries
  • Yearly Company offsite
  • Yearly Department offsite