Senior MLOps Engineer


Job Description


Multiverse Computing

Multiverse is a well-funded, fast-growing deep-tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world.

With 180+ employees and growing, our team is fully multicultural and international. We deliver hyper-efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.

Our flagship products, CompactifAI and Singularity, address critical needs across various industries:


  • CompactifAI is a groundbreaking compression tool for foundational AI models based on Tensor Networks. It enables the compression of large AI systems—such as language models—to make them significantly more efficient and portable.
  • Singularity is a quantum- and quantum-inspired optimization platform used by blue-chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.


You’ll be working alongside world-leading experts to develop solutions that tackle real-world challenges. We’re looking for passionate individuals eager to grow in an ethics-driven environment that values sustainability and diversity.

We’re committed to building a truly inclusive culture—come and join us.

About The Role

We are seeking a Senior MLOps Engineer to steer the technical vision of our Training and Inference Optimization team. In this high-impact role, you will architect the infrastructure that powers our next-generation AI models. You will bridge the gap between systems programming and machine learning, optimizing large-scale LLM training via NVIDIA NeMo and building ultra-high-throughput serving systems using vLLM, TensorRT-LLM, and SGLang.

Your mission is to ensure our models are not only state-of-the-art but also production-hardened, cost-efficient, and performant at scale.

Key Responsibilities


  • Training Infrastructure: Architect and maintain scalable distributed training pipelines using NVIDIA NeMo/Nemotron/Megatron-Bridge. You will optimize GPU utilization, manage complex checkpointing strategies, and implement automated fault tolerance for long-running jobs.
  • Inference Orchestration: Lead the deployment of LLMs using vLLM, TensorRT-LLM, or SGLang. You will implement and tune cutting-edge techniques, including PagedAttention, continuous batching, and advanced quantization (AWQ/FP8), to maximize throughput and minimize TPOT (Time Per Output Token).
  • Workload Orchestration: Utilize SLURM/Flyte/Ray/SkyPilot to manage and scale ML workloads across diverse cloud providers and on-prem clusters, ensuring seamless resource shifting and cost-effective execution.
  • Lifecycle Management: Standardize model tracking, versioning, and transition workflows using MLflow (or similar tool), ensuring reproducible training runs and a clear path from research to production.
  • Performance Engineering: Conduct deep-dive profiling and bottleneck analysis across the full stack, from CUDA kernels and NCCL collective communications to Python-level orchestration.
  • Efficiency & Cost Governance: Monitor and optimize cloud and on-prem GPU expenditures through intelligent scaling policies and high-density resource packing.
  • Technical Leadership: Set the bar for engineering excellence. You will drive the roadmap, perform rigorous code reviews, and mentor junior and mid-level engineers.
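The continuous-batching idea referenced under Inference Orchestration can be sketched with a toy, CPU-only scheduler. This illustrates only the scheduling policy, not vLLM's actual implementation; the `Request` class and all parameters are invented for the example:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    tokens_left: int   # decode steps this request still needs
    generated: int = 0

def continuous_batching(requests, max_batch):
    """Toy continuous-batching loop: new requests join the running
    batch as soon as a slot frees up, instead of waiting for the
    whole batch to finish (as static batching would)."""
    queue = deque(requests)
    active = []
    steps = 0
    completed = []
    while queue or active:
        # Admit waiting requests into any free slots.
        while queue and len(active) < max_batch:
            active.append(queue.popleft())
        # One decode step: every active request emits one token.
        for r in active:
            r.generated += 1
            r.tokens_left -= 1
        steps += 1
        # Retire finished requests, freeing their slots immediately.
        for r in [r for r in active if r.tokens_left == 0]:
            completed.append(r.rid)
            active.remove(r)
    return steps, completed

reqs = [Request(0, 5), Request(1, 2), Request(2, 4), Request(3, 3)]
# With max_batch=2 this finishes in 8 decode steps; static batching
# (max(5,2) + max(4,3)) would need 9.
steps, order = continuous_batching(reqs, max_batch=2)
```

The point of the technique is exactly this slot reuse: short requests leave the batch early, so GPU slots are never held idle waiting for the longest sequence in the batch.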


Required Qualifications


  • Experience: 5+ years in MLOps, DevOps, or Software Engineering, with a minimum of 2 years dedicated to LLM infrastructure.
  • Deep Learning Ecosystem: Expert-level proficiency with PyTorch and the NVIDIA stack (CUDA, NCCL, Triton).
  • Specialized Tooling: Hands-on experience with NVIDIA NeMo (or Megatron-Bridge) for distributed training and at least two of the following for serving: vLLM, TensorRT-LLM, or SGLang.
  • Orchestration & Lifecycle: Proven experience with SLURM/Flyte/Ray/SkyPilot for cluster management and MLflow (or similar tool) for experiment and model management.
  • Infrastructure: Deep expertise in Kubernetes and K8s operators (e.g., KubeRay, MPI Operator, or Run:ai).
  • Systems Programming: Mastery of Python and a functional understanding of C++ or Rust for performance-critical components.
  • Next-Gen Hardware: Familiarity with high-performance networking (InfiniBand/RoCE) and NVIDIA H200/B200 (Blackwell) architectures.
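For context, multi-node training jobs of the kind described above are commonly launched through a SLURM batch script along these lines (a minimal sketch only; the job name, resource counts, script, and config path are all placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=llm-pretrain
#SBATCH --nodes=4
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=8
#SBATCH --time=72:00:00

# Keep NCCL on the InfiniBand fabric and surface warnings early.
export NCCL_IB_DISABLE=0
export NCCL_DEBUG=WARN

# srun starts one rank per GPU across all nodes.
srun python pretrain.py --config configs/llm.yaml
```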


Preferred Skills


  • Active contributions to relevant open-source projects (vLLM, SGLang, SkyPilot, or NeMo).
  • Proven track record with model compression (Sparsity, Distillation, or Quantization).
  • Experience writing or optimizing custom Triton kernels.
  • Expertise in ML observability stacks (Prometheus, Grafana, Jaeger).

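As a toy illustration of the quantization idea listed above, here is symmetric per-tensor int8 rounding; real schemes like AWQ or FP8 are considerably more sophisticated, and this sketch assumes a nonzero tensor:

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto
    [-127, 127] with a single scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight now occupies one byte instead of four (or two), at the cost of rounding error bounded by half the scale; that storage/accuracy trade is the core of the compression work described in this role.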

Perks & Benefits


  • Indefinite contract.
  • Equal pay guaranteed.
  • Variable performance bonus.
  • Signing bonus.
  • Work visa sponsorship (if applicable).
  • Relocation package (if applicable).
  • Private health insurance.
  • Eligibility for an educational budget according to internal policy.
  • Hybrid work opportunity.
  • Flexible working hours.
  • Language classes and discounted lunch options.
  • A fast-paced environment working on cutting-edge technologies.
  • A career plan with opportunities to learn and teach.
  • A progressive company with a happy-people culture.


As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. The company welcomes applicants of all backgrounds, regardless of age, citizenship, ethnic or racial origin, gender identity, disability, marital status, religion or ideology, or sexual orientation.

Come and join our multicultural team!
