Poolside is seeking a Pretraining Data Engineer to build and scale our Model Factory, responsible for architecting high-performance pipelines and delivering diverse datasets for pre-training foundation models. The ideal candidate will have a strong background in building production-grade, distributed data systems for machine learning. This is a hands-on role that requires expertise in data modeling, algorithmic sorting, and distributed pipeline optimization.
About Poolside
In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger and more capable models. They will earn the right to raise large amounts of capital along the way. They will create powerful economic engines. They will obsess over the success of their users and customers.
Poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.
About Our Team
We are a remote-first team that sits across Europe and North America. We come together once a month in-person for 3 days, always Monday-Wednesday, with an open invitation to stay the whole week. We also do longer off-sites once a year.
Our team is a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which lets us compound our efforts.
About The Role
You will be a core member of our Pretraining Data team, responsible for building and scaling our Model Factory: our system for quickly training, scaling, and experimenting with our foundation models. This is a hands-on role where your #1 mission is to architect and maintain the high-performance pipelines that transform trillions of raw tokens into the high-quality dataset "fuel" our models require.
To enable us to conduct and implement the latest research, you’ll be engineering the ingestion, deduplication, and streaming systems that handle petabyte-scale data. You will bridge the gap between raw web crawls and our GPU clusters, directly influencing model performance through superior data modeling, algorithmic sorting, and distributed pipeline optimization. You will collaborate closely with other teams such as Pretraining, Posttraining, Evals, and Product to generate high-quality datasets that map to missing model capabilities and downstream use cases.
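As a rough illustration of the kind of work involved (this is a minimal single-machine sketch, not poolside's actual stack; the function names are hypothetical), exact-match deduplication usually starts by collapsing each document to a content hash and keeping the first occurrence of each key:

```python
import hashlib


def content_key(text: str) -> str:
    # Lightly normalize whitespace and case, then hash:
    # documents that differ only in formatting collapse to one key.
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def dedup_exact(docs):
    """Keep the first occurrence of each distinct document."""
    seen = set()
    kept = []
    for doc in docs:
        key = content_key(doc)
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```

At petabyte scale the same idea becomes a distributed shuffle keyed by the hash, and near-duplicate detection (e.g., MinHash) is a separate pass on top.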
YOUR MISSION
To deliver large, high-quality, and diverse datasets of natural language and source code for training poolside models and coding agents.
Responsibilities
- Build and maintain high-performance pipelines for trillions of tokens.
- Deliver diverse, high-quality datasets for pre-training foundation models.
- Work closely with other teams such as Pretraining, Posttraining, Evals, and Product to ensure alignment on the quality of the models delivered.
Technical Skills Required
- Strong background in building production-grade, distributed data systems for machine learning, with experience in:
  - Orchestration: Slurm, Airflow, or Dagster
  - Observability & Reliability: CI/CD, Grafana, Prometheus, etc.
  - Infra: Git, Docker, k8s, cloud managed services
  - Batched inference (e.g., vLLM)
- Performance obsession, especially with large-scale GPU clusters and distributed pipelines
- Expert-level Python knowledge and the ability to write clean, maintainable code
- Strong algorithmic foundations
- Proficiency with libraries like Polars, Dask, or PySpark
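For a flavor of the day-to-day work these skills support (an illustrative sketch only; the function and thresholds are hypothetical, not poolside's actual filters), large-corpus quality filtering is typically written as a streaming generator so that nothing is held in memory:

```python
def iter_quality_docs(docs, min_chars=200, max_nonalpha_ratio=0.3):
    """Yield documents passing two simple heuristic quality filters:
    a minimum length and a cap on the share of non-alphanumeric,
    non-whitespace characters (a crude noise signal)."""
    for doc in docs:
        if len(doc) < min_chars:
            continue  # too short to be useful training signal
        nonalpha = sum(1 for c in doc if not (c.isalnum() or c.isspace()))
        if nonalpha / len(doc) > max_nonalpha_ratio:
            continue  # likely markup debris or binary junk
        yield doc
```

In production the same per-document logic would run inside a distributed engine such as Polars, Dask, or PySpark rather than a plain Python loop.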
Nice to Have
- Experience in building trillion-scale SOTA pretraining datasets
- Experience translating research to production at scale
- Experience with OCR, web crawling, or evals
- Prior experience pre-training LLMs
Interview Process
- Intro call with Eiso, our CTO & Co-Founder
- Technical Interview(s) with one of our Founding Engineers
- Team fit call with the People team
- Final interview with one of our Founding Engineers
Benefits & Perks
- Fully remote work & flexible hours
- 37 days/year of vacation & holidays
- Health insurance allowance for you and dependents
- Company-provided equipment
- Wellbeing, always-be-learning and home office allowances
- Frequent team get-togethers
- A diverse, inclusive, people-first culture