Distributed Systems Engineer - Secure Sandboxes & Infrastructure

Magic | United States
Visa sponsorship | Relocation support
AI Summary

Magic is advancing safe AGI by building scalable, high-performance distributed systems for AI research and deployment. The role involves designing isolated execution environments, managing large-scale compute resources, and ensuring system performance and reliability. The engineer collaborates across teams to optimize infrastructure supporting cutting-edge AI models.

Key Highlights
Focus on building scalable, high-performance distributed systems with strong isolation guarantees
Proficiency in low-level systems programming and infrastructure automation
Experience with cloud infrastructure, storage, networking, and container sandboxing
Technical Skills Required
C, C++, Go, Rust, Kubernetes, Terraform, cloud infrastructure (GCP, AWS, Azure)
Benefits & Perks
Salary range: $225,000 - $550,000 USD
Equity component
Comprehensive health, dental, and vision insurance
Unlimited paid time off
Visa sponsorship & relocation support

Job Description


Distributed Systems Engineer: Secure Sandboxes

Location: San Francisco, New York City, or Seattle (or remote within the US)

Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About The Role

As a Software Engineer on the Supercomputing Platforms and Infrastructure team, you will build the next generation of systems that power large-scale AI research and deployment. You will focus on sandboxed execution environments, distributed systems orchestration, and performance-optimized compute workflows. You will work closely with ML, research, and infrastructure teams to deliver high throughput, scale, and strong isolation guarantees in a cluster environment.

What You Might Work On

  • Build highly scalable, highly performant software that facilitates arbitrary code execution with strong isolation guarantees (a minimal isolation sketch follows this list).
  • Design and build systems that allow our AI models to interface with machines in various modes (interactive terminal, GUI applications, etc.).
  • Provision and operate high-density compute and storage nodes (NVMe, high-IOPS SSDs, high-bandwidth networks), and build software that load-balances and utilizes resources efficiently across them.
  • Instrument and optimize end-to-end performance, including storage I/O, network bandwidth, CPU, memory, and endurance constraints.
  • Develop APIs, self-service platforms, automation, and tools so researchers and engineers can deploy and monitor workloads at scale.
  • Troubleshoot complex infrastructure issues across the OS, drivers, hardware, storage systems (local NVMe, block storage, NFS), networking, namespace isolation, and cloud or hybrid environments.
  • Produce clean, documented code and developer workflows, and collaborate with SRE and security teams to ensure safe, reliable, and self-serviceable compute offerings.
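
To make the isolation requirement concrete, here is a minimal, hedged Go sketch of the kind of primitive this work builds on: launching an untrusted command inside fresh Linux namespaces. It is illustrative only, not Magic's implementation; a production sandbox would also need user namespaces, cgroup resource limits, seccomp filters, and a pivoted root filesystem, and the 30-second timeout and /bin/sh payload are arbitrary placeholders.

    // Minimal sketch (Linux-only): run an untrusted command in fresh namespaces.
    package main

    import (
        "context"
        "log"
        "os"
        "os/exec"
        "syscall"
        "time"
    )

    func runIsolated(ctx context.Context, bin string, args ...string) error {
        cmd := exec.CommandContext(ctx, bin, args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        // New PID, mount, network, UTS, and IPC namespaces so the child cannot
        // see host processes, mounts, or network interfaces.
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS |
                syscall.CLONE_NEWNET |
                syscall.CLONE_NEWUTS |
                syscall.CLONE_NEWIPC,
        }
        return cmd.Run()
    }

    func main() {
        // Hard wall-clock limit so runaway workloads are reaped.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        if err := runIsolated(ctx, "/bin/sh", "-c", "echo hello from the sandbox"); err != nil {
            log.Fatal(err)
        }
    }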

What We Are Looking For

  • Strong software engineering background (C, C++, Go, Rust, or similar systems languages).
  • Experience designing or operating sandboxed or isolated execution environments (namespaces, cgroups, container runtime internals), or strong interest in this area.
  • Experience building or operating distributed systems or parallel processing frameworks (scatter/gather processing, worker pools, multi-thread and multi-process coordination, shared memory, atomics, merging strategies); a worker-pool sketch follows this list.
  • Solid understanding of storage and I/O subsystems (NVMe, SSD endurance, write amplification), network performance, and CPU and memory resource constraints in high-performance compute clusters.
  • Comfortable working on low-level systems (OS, threading, memory management, synchronization) as well as higher-level orchestration and automation.
  • Experience with cloud infrastructure (GCP, AWS, Azure, etc.), including IaC tools such as OpenTofu, Terraform, Pulumi, or CDK, is a plus.
  • Intellectual curiosity, strong ownership, and the ability to make tradeoffs in ambiguous environments, such as latency versus throughput and isolation versus performance.
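
As an illustration of the scatter/gather and worker-pool patterns mentioned above, here is a minimal, hedged Go sketch. The integer shard type, the process function, and summation as the merge step are placeholder assumptions; real workloads would carry richer payloads and merge strategies.

    // Minimal sketch: scatter work across a bounded worker pool and gather results.
    package main

    import (
        "fmt"
        "sync"
    )

    // process stands in for per-shard work (e.g. scanning one NVMe-resident chunk).
    func process(shard int) int {
        return shard * shard
    }

    func scatterGather(shards []int, workers int) int {
        jobs := make(chan int)
        results := make(chan int)

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for s := range jobs {
                    results <- process(s)
                }
            }()
        }

        // Close results once every worker has drained the job queue.
        go func() {
            wg.Wait()
            close(results)
        }()

        // Scatter: feed shards to the pool without blocking the gather loop.
        go func() {
            for _, s := range shards {
                jobs <- s
            }
            close(jobs)
        }()

        // Gather: merge partial results into a single aggregate.
        total := 0
        for r := range results {
            total += r
        }
        return total
    }

    func main() {
        fmt.Println("aggregate:", scatterGather([]int{1, 2, 3, 4, 5, 6, 7, 8}, 4))
    }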

Nice-to-haves

  • Prior experience with GPU scheduling, RDMA networking, or bare metal HPC clusters
  • Contributions to open source container runtimes or sandboxing frameworks
  • Experience with kernel internals, device drivers, or SSD and NVMe endurance modeling
  • Familiarity with Rust for systems programming or Go for infrastructure orchestration

Why join us

  • You will work at the cutting edge of AI infrastructure, including large compute clusters, advanced metrics engines, and next-generation sandboxing systems for untrusted workloads.
  • The problems you solve will be foundational: for example, how to securely and efficiently run arbitrary research code across thousands of GPUs or high-end SSDs.
  • You will join a collaborative and hands-on team where you are building rather than only modeling.
  • Excellent compensation and equity, generous benefits, and high impact.

Our culture:

  • Integrity. Words and actions should be aligned
  • Hands-on. At Magic, everyone is building
  • Teamwork. We move as one team, not N individuals
  • Focus. Safely deploy AGI. Everything else is noise
  • Quality. Magic should feel like magic

Compensation And Benefits (US)

  • Annual salary range: 225,000 USD to 550,000 USD depending on seniority
  • Significant equity component
  • 401(k) with matching; comprehensive health, dental, and vision insurance; unlimited paid time off; visa sponsorship and relocation support
  • Fast paced, mission driven environment focused on safely advancing AGI for humanity

