
Founding Data Engineer

Build a complete, real-time academic paper ingestion and deduplication system
Oakland, California, United States
Senior
$185,000 – 305,000 USD / year
Elicit

Elicit is a company that leverages AI to enhance decision-making and business intelligence.

Elicit Data Engineer Role

Elicit is an AI research assistant that uses language models to help professional researchers and high-stakes decision makers break down hard questions, gather evidence from scientific/academic sources, and reason through uncertainty.

Two main reasons for this role:

  1. Currently, Elicit operates over academic papers and clinical trials. One of your key initial responsibilities will be to build a complete corpus of these documents, available as soon as they're published, by combining different data sources and ingestion methods. Once that's done, there is a growing list of other document types and sources we'd love to integrate!
  2. One of our main initiatives is to broaden the sorts of tasks you can complete in Elicit. We need a data engineer to figure out the best way to ingest massive amounts of heterogeneous data in a way that makes it usable by LLMs. We need your help integrating with our customers' custom data providers so that they can create task-specific workflows over them.

In general, we're looking for someone who can architect and implement robust, scalable solutions to handle our growing data needs while maintaining high performance and data quality.

Our tech stack:

  • Data pipeline: Python, Flyte, Spark
  • Backend: Node and Python, event sourcing
  • Frontend: Next.js, TypeScript, and Tailwind
  • We like static type checking in Python and TypeScript!
  • All infrastructure runs in Kubernetes across a couple of clouds
  • We use GitHub for code reviews and CI
  • We deploy using the GitOps pattern (i.e. deploys are defined and tracked by diffs in our k8s manifests)

Consider the questions:

  • How would you optimize a Spark job that's processing a large amount of data but running slowly?
  • What are the differences between RDD, DataFrame, and Dataset in Spark? When would you use each?
  • How does data partitioning work in distributed systems, and why is it important?
  • How would you implement a data pipeline to handle regular updates from multiple academic paper sources, ensuring efficient deduplication?

If you have solid answers to these questions without needing to consult documentation, then we should chat!
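To make the last question concrete: one common approach is to normalize identifying fields and key each record on them, preferring a stable identifier (DOI) and falling back to a fuzzier title-based key when none is present. The sketch below is illustrative only, not Elicit's actual pipeline; the `Paper` shape and normalization rules are assumptions for the example.

```python
import re
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Paper:
    # Illustrative record shape; real sources carry many more fields.
    source: str
    doi: Optional[str]
    title: str
    year: int

def dedup_key(paper: Paper) -> str:
    """Prefer the DOI when present; otherwise fall back to a
    normalized (title, year) key. Normalization strips case,
    punctuation, and extra whitespace so near-identical titles collide."""
    if paper.doi:
        return "doi:" + paper.doi.strip().lower()
    title = re.sub(r"[^a-z0-9]+", " ", paper.title.lower()).strip()
    return f"title:{title}:{paper.year}"

def deduplicate(papers: Iterable[Paper]) -> list[Paper]:
    """Keep the first record seen for each key. In a real pipeline this
    reduce step would run distributed (e.g. keyed aggregation in Spark)
    rather than in one process, and records with a DOI from one source
    but not another would need a second linkage pass to merge."""
    seen: dict[str, Paper] = {}
    for p in papers:
        seen.setdefault(dedup_key(p), p)
    return list(seen.values())

papers = [
    Paper("arxiv", None, "Attention Is All You Need", 2017),
    Paper("semantic_scholar", "10.5555/3295222", "Attention is all you need!", 2017),
    Paper("crossref", "10.5555/3295222", "Attention Is All You Need", 2017),
]
print(len(deduplicate(papers)))  # the two DOI records collapse into one
```

Note the deliberate gap: the DOI-less arXiv record does not merge with its DOI-bearing twins, which is exactly the cross-source linkage problem a production deduplication pipeline has to solve.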

Location and travel:

We have a lovely office in Oakland, CA; there are people there every day but we don't all work from there all the time. It's important to us to spend time with our teammates, however, so we ask that all Elicians spend about 1 week out of every 6 with teammates.

We wrote up more details on this page.

What you'll bring to the role:

  • 5+ years of experience as a data engineer: owning make-or-break decisions about how to ingest, manage, and use data
  • Strong proficiency in Python (5+ years experience)
  • You have created and owned a data platform at rapidly-growing startups—gathering needs from colleagues, planning an architecture, deploying the infrastructure, and implementing the tooling
  • Experience with architecting and optimizing large data pipelines, ideally with particular experience with Spark; ideally these are pipelines which directly support user-facing features (rather than internal BI, for example)
  • Strong SQL skills, including understanding of aggregation functions, window functions, UDFs, self-joins, partitioning, and clustering approaches
  • Experience with columnar data storage formats like Parquet
  • Strong opinions, weakly held, about approaches to data quality management
  • Creative and user-centric problem-solving
  • You should be excited to play a key role in shipping new features to users—not just building out a data platform!

Nice to have:

  • Experience in developing deduplication processes for large datasets
  • Hands-on experience with full-text extraction and processing from various document formats (PDF, HTML, XML, etc.)
  • Familiarity with machine learning concepts and their application in search technologies
  • Experience with distributed computing frameworks beyond Spark (e.g., Dask, Ray)
  • Experience in science and academia: familiarity with academic publications, and the ability to accurately model the needs of our users
  • Hands-on experience with industry standard tools like Airflow, DBT, or Hadoop
  • Hands-on experience with standard paradigms like data lake, data warehouse, or lakehouse

What you'll do:

  • Building and optimizing our academic research paper pipeline
  • Expanding the datasets Elicit works over
  • Providing data for our ML systems

Your first week:

  • Start building foundational context
  • Make your first contribution to Elicit

Your first month:

  • You'll complete your first multi-issue project
  • You're actively improving the team

Your first quarter:

  • You're flying solo
  • You've developed an area of expertise
  • You actively research and improve the product

Compensation, benefits, and perks:

  • Flexible work environment: work from our office in Oakland or remotely with time zone overlap (between GMT and GMT-8), as long as you can travel for in-person retreats and coworking events
  • Fully covered health, dental, vision, and life insurance for you, generous coverage for the rest of your family
  • Flexible vacation policy, with a minimum recommendation of 20 days/year + company holidays
  • 401K with a 6% employer match
  • A new Mac + $1,000 budget to set up your workstation or home office in your first year, then $500 every year thereafter
  • $1,000 quarterly AI Experimentation & Learning budget, so you can freely experiment with new AI tools to incorporate into your workflow, take courses, purchase educational resources, or attend AI-focused conferences and events
  • A team administrative assistant who can help you with personal and work tasks

For all roles at Elicit, we use a data-backed compensation framework to keep salaries market-competitive, equitable, and simple to understand. For this role, we target starting ranges of:

  • Senior (L4): $185-270k + equity
  • Expert (L5): $215-305k + equity
  • Principal (L6): >$260k + significant equity

We're optimizing for a hire who can contribute at an L4/senior level or above.

We also offer above-market equity for all roles at Elicit, as well as employee-friendly equity terms (10-year exercise periods).
