Senior MLOps Engineer (Databricks)

Remote
AI Summary

Develop and manage ML pipelines on Databricks, ensuring scalability and reliability. Collaborate with data science and engineering teams. Work in large-scale production environments.

Key Highlights
Develop ML pipelines using Databricks Jobs, Repos, and Workflows
Manage and automate ML lifecycle with MLflow
Implement CI/CD pipelines for Databricks environments
Monitor production ML models and ensure reliability & performance
Optimize Databricks compute usage and cluster costs
Technical Skills Required
Databricks, MLflow, Spark, Delta Lake, Python, Scala, GitHub Actions, Azure DevOps, Jenkins, AWS, Azure
Benefits & Perks
100% remote work
6-month contract with possibility of extension
Full-time / Long-term contract

Job Description


MLOps Engineer — Databricks


Client: A large global enterprise (name not disclosed)

Location: India

Work Model: 100% Remote

Contract: 6 months (initial) with possibility of extension

Start Date: ASAP

Engagement: Full-time / Long-term contract


Role Overview

You will support end-to-end ML lifecycle management on Databricks, including developing pipelines, orchestrating ML workflows, and operationalizing data science models in an enterprise environment.


Key Responsibilities

  • Develop ML pipelines using Databricks Jobs, Repos, and Workflows
  • Manage and automate ML lifecycle with MLflow (tracking, registry, deployments)
  • Build scalable feature pipelines using Spark and Delta Lake
  • Implement CI/CD pipelines for Databricks environments
  • Monitor production ML models and ensure reliability & performance
  • Optimize Databricks compute usage and cluster costs
  • Collaborate with data science, data engineering, and cloud teams


Required Experience

  • 4–7 years in MLOps, Data Engineering, or ML Engineering
  • Strong experience with Databricks in large-scale production settings
  • Proficiency with MLflow
  • Hands-on with Spark (Python or Scala)
  • CI/CD experience (GitHub Actions, Azure DevOps, Jenkins, etc.)
  • Familiarity with cloud platforms (AWS/Azure)


Nice to Have

  • Experience with Databricks Feature Store
  • Experience working in highly regulated and large global enterprises
  • Knowledge of Airflow or other orchestration tools

