MLOps Engineer - Unified Retail Media Platform

Remote
AI Summary

Design, implement, and maintain scalable ML model deployment pipelines. Build infrastructure to monitor model performance and data drift. Collaborate with data scientists to ensure models are production-ready.

Key Highlights
  • Support the Unified Retail Media Platform
  • Design and implement ML model deployment pipelines
  • Collaborate with data scientists

Technical Skills Required
  • Python, PySpark, SQL, Airflow, AWS EMR, Docker, Git, GitHub Actions

Benefits & Perks
  • Fully remote work
  • UK working hours (9 AM to 5 PM UK time)

Job Description


MLOps Engineer


📍 Location: Fully Remote (Turkey-based, aligned with UK working hours)

📄 Contract: Until end of February 2026 (initial, with high likelihood of extension)

🚀 Start Date: ASAP


🌍 About Us

Cressoft is a trusted IT consultancy firm in the UK, operating since 2011. We deliver impactful projects and provide top-tier tech talent to our clients. We’re proud to be the exclusive offshore talent supplier for a leading UK retail digital transformation programme — an exciting opportunity to join a team that values expertise and innovation.


💼 About the Client

You’ll be working with one of the UK’s largest and most respected retail organisations, known for its diverse portfolio of brands across grocery, e-commerce, furniture, loyalty, and banking. Their technology division employs over 3,000 engineers and operates like a startup — fast-paced, innovative, and focused on in-house development.


🎯 The Role

We are seeking an MLOps Engineer to play a key role in a groundbreaking Unified Retail Media Platform built for one of the UK’s largest retail enterprises. You’ll be part of MLOps engineering squads working on ad-tech, interpreting and following architectural and engineering principles, operating frameworks, and new and improved technologies and solutions. With your technical craft, curiosity, and experimentation, you’ll use judgement to apply specific techniques and deliver focused outcomes that support our customers.


What you'll do

As an MLOps Engineer, you will support the platform’s products from inception. This means working across the full data ecosystem: developing application-specific data pipelines (features), building CI/CD pipelines that automate the training and deployment of machine learning models, publishing model results for downstream consumption, and building out the APIs that serve model outputs to downstream systems on demand.
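
For illustration only, here is a minimal sketch of what one such automated workflow could look like as an Airflow DAG (assuming Airflow 2.4+): a feature-build task feeding a training task, followed by a deployment task. The DAG name, task names, and callables are hypothetical placeholders, not part of the client’s actual codebase.

# Illustrative sketch only (assumes Airflow 2.4+); all names and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_features():
    # Placeholder: in practice this might submit a PySpark job to EMR
    # that writes feature tables for the ad-tech models.
    print("building features")


def train_model():
    # Placeholder: train a model and register a new version.
    print("training model")


def deploy_model():
    # Placeholder: promote the registered model version to a serving endpoint.
    print("deploying model")


with DAG(
    dag_id="retail_media_model_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
):
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    features >> train >> deploy

A common split, though not confirmed by this posting, is for GitHub Actions (also in the stack) to test and ship the pipeline code itself, while Airflow orchestrates the scheduled runs.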


Key Responsibilities

  • Design, implement, and maintain scalable ML model deployment pipelines (CI/CD for ML).
  • Build infrastructure to monitor model performance, data drift, and other key metrics in production (a small illustrative sketch follows this list).
  • Develop and maintain tools for model versioning, reproducibility, and experiment tracking.
  • Optimize model serving infrastructure for latency, scalability, and cost.
  • Automate the end-to-end ML workflow, from data ingestion to model training, testing, deployment, and monitoring.
  • Collaborate with data scientists to ensure that models are production-ready.
  • Implement security, compliance, and governance practices for machine learning systems.
  • Support troubleshooting and incident response for deployed ML systems.
  • Support model training, model versioning, and feature store management.
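
As a purely illustrative example of the drift monitoring mentioned above, the sketch below computes a population stability index (PSI), one common way to compare a training-time baseline against recent production feature values. The function, sample data, and alerting threshold are hypothetical rules of thumb, not taken from the client’s stack.

# Illustrative only: a minimal PSI check for quantifying data drift between a
# training baseline and live feature values. Names and thresholds are hypothetical.
import numpy as np


def population_stability_index(baseline, current, buckets=10):
    """Compare two samples of a numeric feature; higher PSI means more drift."""
    # Bucket edges come from the baseline distribution (quantiles).
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_freq = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_freq = np.histogram(current, bins=edges)[0] / len(current)

    # Floor the frequencies to avoid division by zero / log(0).
    base_freq = np.clip(base_freq, 1e-6, None)
    curr_freq = np.clip(curr_freq, 1e-6, None)

    return float(np.sum((curr_freq - base_freq) * np.log(curr_freq / base_freq)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # stand-in for training-time feature values
    current = rng.normal(0.3, 1.2, 10_000)   # stand-in for recent production values
    psi = population_stability_index(baseline, current)
    # A common rule of thumb: PSI above roughly 0.2 suggests drift worth alerting on.
    print(f"PSI = {psi:.3f}")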


Qualifications


Required:

Tech Stack Requirement Summary: Python, PySpark, SQL, Airflow, AWS, and EMR.

  • Strong programming skills in Python, with hands-on experience in PySpark and SQL.
  • Experience with containerisation tools such as Docker and workflow orchestration tools such as Airflow.
  • Experience running Spark workloads on AWS EMR.
  • Familiarity with cloud platforms (AWS) and ML services (e.g., SageMaker, Vertex AI).
  • Strong software development experience (unit testing, code readability and modularisation, Git).
  • Experience with CI/CD pipelines and automation tools like GitHub Actions.


Preferred:

  • Experience with monitoring and observability tools such as Grafana and New Relic.
  • Prior experience deploying ML models in production environments.
  • Knowledge of infrastructure-as-code tools such as Terraform or CloudFormation.
  • Familiarity with model interpretability and responsible AI practices.
  • Experience with feature stores and model registries.


💬 Work Arrangement

  • Fully Remote – Work hours: 9 AM to 5 PM UK time for seamless collaboration.
  • Communication Tools: Microsoft Teams and Slack.

