Job Description
MLOps Engineer — Databricks
Client: A large global enterprise (name not disclosed)
Location: India
Work Model: 100% Remote
Contract: 6 months (initial) with possibility of extension
Start Date: ASAP
Engagement: Full-time / Long-term contract
Role Overview
You will support end-to-end ML lifecycle management on Databricks, including developing pipelines, orchestrating ML workflows, and operationalizing data science models in an enterprise environment.
Key Responsibilities
- Develop ML pipelines using Databricks Jobs, Repos, and Workflows
- Manage and automate the ML lifecycle with MLflow (tracking, registry, deployments); see the MLflow sketch after this list
- Build scalable feature pipelines using Spark and Delta Lake; see the feature-pipeline sketch after this list
- Implement CI/CD pipelines for Databricks environments
- Monitor production ML models and ensure reliability & performance
- Optimize Databricks compute usage and cluster costs
- Collaborate with data science, data engineering, and cloud teams
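For context on the MLflow responsibility above, here is a minimal sketch of the tracking-plus-registry flow it refers to. The experiment path, metric, and registered model name `churn_model` are hypothetical placeholders, not details from this posting.

```python
# Minimal MLflow tracking + registry sketch (placeholder names throughout).
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Point runs at a (hypothetical) Databricks workspace experiment path.
mlflow.set_experiment("/Shared/churn-model")

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X, y)

    # Track parameters and metrics for this run.
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))

    # Log the model artifact and register it, creating a new version
    # in the Model Registry that downstream deployment jobs can promote.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn_model",  # hypothetical registry name
    )
```

On Databricks the tracking URI and registry are preconfigured for the workspace; run elsewhere, this sketch would log to a local `mlruns` directory unless a tracking server is set.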
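And a small PySpark sketch of the kind of Delta Lake feature pipeline the responsibilities describe. The table names `raw.transactions` and `features.customer_spend` and the column names are assumptions for illustration only.

```python
# Sketch of a batch feature pipeline on Delta Lake (table and column names are assumed).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks jobs/notebooks

# Read raw events from an (assumed) Delta table.
raw = spark.table("raw.transactions")

# Aggregate simple per-customer features.
features = (
    raw.groupBy("customer_id")
       .agg(
           F.count("*").alias("txn_count"),
           F.sum("amount").alias("total_spend"),
           F.max("event_ts").alias("last_txn_ts"),
       )
)

# Write the feature set back as a managed Delta table for training and serving jobs.
(
    features.write
            .format("delta")
            .mode("overwrite")
            .saveAsTable("features.customer_spend")
)
```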
Required Experience
- 4–7 years in MLOps, Data Engineering, or ML Engineering
- Strong experience with Databricks in large-scale production settings
- Proficiency with MLflow
- Hands-on with Spark (Python or Scala)
- CI/CD experience (GitHub Actions, Azure DevOps, Jenkins, etc.)
- Familiarity with cloud platforms (AWS/Azure)
Nice to Have
- Experience with Databricks Feature Store
- Experience working in large, highly regulated global enterprises
- Knowledge of Airflow or other orchestration tools