In this decade, the world will create artificial intelligence that reaches human-level intelligence (and beyond) by combining learning and search. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will determine who survives and wins. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research and engineering at scale. They will create powerful economic engines. They will continue to scale their training to larger and more capable models. They will be given the right to raise large amounts of capital along their journey to enable this.
Poolside exists to be one of these companies - to build a world where AI will drive the majority of economically valuable work and scientific progress.
We believe that software development will be the first major capability of neural networks to reach human-level intelligence, because it's the domain where we can best combine Search and Learning approaches.
At poolside we believe our applied research needs to culminate in products that are put in the hands of people. Today we focus on building for a developer-led, increasingly AI-assisted world. We believe that the current capabilities of AI enable incredible tooling that can assist developers in their day-to-day work. We also believe that as we increase the capabilities of our models, we increasingly empower anyone in the world to build software. We envision a future where not 100 million but 2 billion people can build software.
You would be working in our pre-training team, focused on building out our distributed training and inference of Large Language Models (LLMs). This is a hands-on role centered on software development best practices, maintenance, and code architecture. You will have access to thousands of GPUs to verify your changes.
Strong engineering skills are a prerequisite. We assume deep knowledge of CI/CD, reliability concepts, software architecture, and code quality. A basic understanding of LLM training and inference principles is required. We look for fast learners who are prepared for a steep learning curve and are not afraid to step out of their comfort zone.
To help train the world's best foundation models for source code generation