Senior GPU Supercomputing Engineer
Visa Sponsorship
Relocation
AI Summary
Design, build, and operate GPU supercomputing environments for large-scale training and inference. Develop software for cluster management and present a unified interface for training and inference. Partner with researchers to unblock scale runs and advise on parallelism and performance trade-offs.
Key Highlights
Operate and automate large GPU clusters
Write software for cluster management and unified interface
Extend scheduling/orchestration for topology-aware placement
Monitor and improve operational metrics
Partner with researchers for scale runs and performance trade-offs
Technical Skills Required
Benefits & Perks
Annual salary range: $350,000 - $475,000
Visa sponsorship
Generous health, dental, and vision benefits
Unlimited PTO
Paid parental leave
Relocation support
Job Description
Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who've created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
About The Role
We're looking for an engineer to design, build, and operate the GPU supercomputing environment that powers large-scale training and inference. You will deliver high-performance, reliable, and cost-efficient compute so our users and researchers can move fast at scale.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for separate project- or team-specific needs; in those cases, you're welcome to apply directly in addition to the evergreen role.
What You'll Do
- Operate and automate large GPU clusters including provisioning, imaging, and capacity planning.
- Write software that abstracts cluster management and presents a unified interface for training and inference.
- Extend scheduling/orchestration (Kubernetes, Slurm, or similar) for topology-aware placement, preemption, quotas, and fair-share multi-tenancy.
- Monitor and improve operational metrics of speed, reliability, and error recovery.
- Build reliable storage and artifact paths for datasets, checkpoints, and logs with clear retention and lineage.
- Partner with researchers to unblock scale runs and advise on parallelism and performance trade-offs.
Minimum qualifications:
- Bachelor's degree or equivalent experience in computer science, engineering, or similar.
- Proficiency in at least one backend language (we use Python or Rust).
- Experience operating large-scale clusters and container orchestration systems (e.g., Kubernetes or Slurm).
- Comfort operating across the stack and owning projects end-to-end.
- Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.
- A bias for action: you take initiative across different stacks and teams wherever you spot an opportunity to make sure something ships.
- Strong systems background: Linux, networking, and infrastructure-as-code.
- Familiarity with CUDA/NCCL and performance profiling for distributed training/inference.
- Prior work supporting large-scale model training or inference environments.
- Understanding of deep learning frameworks (e.g., PyTorch, TensorFlow, JAX) and their underlying system architectures.
- Track record of working in fast-paced environments balancing care with urgency.
- Location: This role is based in San Francisco, California.
- Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
- Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
- Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.