✨ About The Role
- The primary responsibility is to develop a high-quality, low-latency retrieval-augmented generation (RAG) system.
- The role involves leading the implementation of custom embeddings and rerankers to improve search quality (a minimal sketch of this retrieve-then-rerank flow follows this list).
- The candidate will work with millions of complex documents, requiring strong analytical skills.
- The role involves close collaboration with a small, high-performing team, in keeping with the company's in-person work philosophy.
- The position is based in San Francisco and requires candidates who can work on-site.
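To give a sense of the architecture the role centers on, here is a minimal sketch of a retrieve-then-rerank RAG flow. The `embed` and `rerank_score` functions are hypothetical placeholders standing in for learned models; they are not part of any stated stack.

```python
# Minimal sketch of a retrieve-then-rerank flow, assuming precomputed
# document embeddings. embed() and rerank_score() are hypothetical
# stand-ins for a learned embedding model and a cross-encoder reranker.
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: pretend each document already has a dense embedding.
documents = ["doc A ...", "doc B ...", "doc C ..."]
doc_vectors = rng.normal(size=(len(documents), 384))
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def embed(text: str) -> np.ndarray:
    """Hypothetical query embedder (random vector for illustration only)."""
    vec = rng.normal(size=384)
    return vec / np.linalg.norm(vec)

def rerank_score(query: str, doc: str) -> float:
    """Hypothetical reranker: crude token overlap instead of a real model."""
    return float(len(set(query.split()) & set(doc.split())))

def retrieve(query: str, k: int = 2) -> list[str]:
    # Stage 1: fast candidate retrieval by cosine similarity.
    q = embed(query)
    sims = doc_vectors @ q
    candidates = [documents[i] for i in np.argsort(sims)[::-1][:k]]
    # Stage 2: rerank the small candidate set with a more expensive scorer.
    return sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)

print(retrieve("example query"))
```

In a production low-latency system, stage 1 would typically run against an approximate nearest-neighbor index and stage 2 against a learned ranking model, but the two-stage shape is the same.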
⚡ Requirements
- The ideal candidate will have a strong background in Python and machine learning, particularly in areas such as embeddings, ranking, and recommendations.
- A minimum of 3 years of relevant experience is required.
- Familiarity with large language models (LLMs) is essential, as the role involves developing advanced AI solutions.
- Experience with big data technologies such as Spark and Databricks is a plus.
- The candidate should be comfortable working in a fast-paced environment with a focus on delivering high-quality results.