Full-Stack ML Engineer

eryk • Nigeria
Relocation

Job Description


About The Company

Eryk Group is an international technical service provider specialising in the delivery of electrical and mechanical installations, engineering, commissioning, and IT services across a wide range of industries. With a strong presence throughout Europe and beyond, Eryk is recognised for its expertise, quality, and flexibility in adapting to diverse project requirements worldwide. We operate a service hub in Lagos where we provide remote IT services to our customers in Europe. You can learn more about our company by visiting: www.eryk.it


Role Overview

This is a full-stack ML engineering role in the truest sense. You will build AI systems end-to-end — from raw data to deployed, monitored product — and you will do it across two distinct contexts: customer-owned platforms, and end-client engagements where we deliver bespoke AI solutions as a consulting partner.


That dual context matters. On customer projects, you are building for longevity — systems that scale, infrastructure that compounds, and decisions that you will live with. On end-client projects, you are building for speed and clarity — shipping working prototypes fast, iterating on real feedback, and handing over systems that end-clients can actually maintain. You need to be equally good at both modes and know when each is appropriate.


This is not a role where you specialise in one layer of the stack. You will move across data engineering, model development, backend APIs, frontend interfaces, and MLOps. Depth in ML is essential. Breadth across the rest of the stack is what makes this role work.


WHAT YOU WILL OWN

Data Pipelines & Preparation

Good models start with good data. You will own the full data preparation layer — designing pipelines that are robust enough for production and flexible enough for experimentation.

  • Pipeline design & engineering: Build and maintain data ingestion, transformation, and feature pipelines for both batch and real-time use cases. Design for reliability and reproducibility, not just immediate functionality
  • Data cleaning & quality: Implement systematic approaches to detecting, flagging, and handling data quality issues. Build cleaning logic that is auditable and reusable across engagements rather than one-off scripts
  • Labeling strategy: Define and implement labeling approaches for supervised and fine-tuning tasks — including prompt-based annotation, weak supervision, and human-in-the-loop workflows where appropriate. Work with end-clients to establish labeling processes that they can sustain independently
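The data-quality bullet above calls for cleaning logic that is auditable and reusable rather than one-off scripts. A minimal sketch of that pattern — a check function that flags rather than silently drops, returning a report alongside the cleaned data (the record fields and function names here are illustrative, not from this posting):

```python
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    """Auditable record of what a cleaning pass saw and flagged."""
    total: int = 0
    flagged: dict = field(default_factory=dict)

def check_records(records, required_fields):
    """Separate clean records from those with missing/empty required
    fields, counting each failure so the pass is auditable."""
    report = QualityReport(total=len(records))
    clean = []
    for rec in records:
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            for f in missing:
                report.flagged[f] = report.flagged.get(f, 0) + 1
        else:
            clean.append(rec)
    return clean, report

rows = [{"id": 1, "text": "ok"}, {"id": 2, "text": ""}]
clean, report = check_records(rows, ["id", "text"])
# clean keeps only the first record; report.flagged == {"text": 1}
```

The same check function can be reused across engagements with a different `required_fields` list, which is the reusability the bullet asks for.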


Model Development

The core of the role. You will build, evaluate, and iterate on models ranging from lightweight classifiers to large language model systems, choosing the right approach for the problem rather than defaulting to the most complex one.

  • LLM systems: Design and implement LLM-based pipelines including prompt engineering, retrieval-augmented generation (RAG), structured output extraction, and chaining. Understand the trade-offs between hosted APIs and open-weight models and make recommendations accordingly
  • Fine-tuning: Run supervised and instruction fine-tuning workflows on open-weight models where task-specific adaptation is justified. Manage training data curation, evaluation, and iteration through the fine-tuning cycle
  • Model evaluation: Build rigorous evaluation frameworks — offline benchmarks, human eval, and production monitoring — so that model quality is measurable at every stage, not just assumed
  • Rapid prototyping: Move fast on early-stage work. Get a working proof-of-concept in front of stakeholders quickly, gather feedback, and use iteration to converge on the right solution rather than trying to design it perfectly upfront
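The evaluation bullet above asks for offline benchmarks where quality is measurable, not assumed. One minimal shape for such a harness — scoring any model callable against labelled examples and keeping the failing cases for inspection (the trivial rule-based `predict` stands in for a real LLM call):

```python
def evaluate(predict, benchmark):
    """Score a model callable against labelled examples; return accuracy
    plus the failing cases so regressions are inspectable, not just counted."""
    failures = []
    for example in benchmark:
        got = predict(example["input"])
        if got != example["expected"]:
            failures.append({"input": example["input"],
                             "got": got,
                             "expected": example["expected"]})
    accuracy = 1 - len(failures) / len(benchmark)
    return accuracy, failures

# Stand-in "model": a trivial keyword rule, in place of an LLM call.
predict = lambda text: "positive" if "good" in text else "negative"
bench = [
    {"input": "good service", "expected": "positive"},
    {"input": "slow and buggy", "expected": "negative"},
    {"input": "good idea, bad execution", "expected": "negative"},
]
accuracy, failures = evaluate(predict, bench)
# accuracy == 2/3; the third case fails because the rule keys on "good"
```

Returning the failure list, not just the score, is what makes iteration possible — you fix named cases rather than chasing an aggregate number.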


Backend, API & Frontend Development

You will build the systems that wrap your models — the APIs that expose them, the services that orchestrate them, and the interfaces that let users interact with them. This is not a peripheral concern; it is how the work gets used.

  • Backend & API development: Design and build production-grade APIs and backend services using Python-based frameworks. Handle authentication, rate limiting, error handling, and versioning to the standard that an end-client's engineering team would expect to integrate against
  • Frontend development: Build functional, clean user interfaces for customer tools and end-client facing applications. The bar is not pixel-perfect design — it is interfaces that work reliably, communicate model behaviour clearly, and allow users to do their job
  • End-Client system integrations: Connect AI systems to existing end-client infrastructure — CRMs, databases, document stores, third-party APIs, and enterprise software. Understand data contracts, handle edge cases, and build integrations that are maintainable after handover
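The backend bullet above names rate limiting as a production-grade API concern. One common implementation is a token bucket; a framework-free sketch (rate and capacity values are illustrative, and a fake clock is injected so the demo is deterministic):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic demo using a fake clock instead of real time.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(3)]  # two allowed, third refused
t[0] = 1.0                                  # one second later: one token back
later = bucket.allow()
```

In a real service this would sit in API middleware keyed per client; the core accounting logic stays the same.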


MLOps & Production Reliability

Shipping a model is the start of the work, not the end. You are accountable for what happens after deployment — whether systems stay accurate, stay fast, and stay within cost.

  • Deployment pipelines: Build CI/CD pipelines for model and service deployment. Automate testing, versioning, and rollback so that releasing a new model version is a low-risk, repeatable operation rather than a manual intervention
  • Monitoring — accuracy & drift: Implement monitoring for model performance in production, including accuracy degradation, data distribution shift, and output quality signals. Define alerting thresholds and own the response process when something degrades
  • Latency monitoring: Track and optimise end-to-end latency across inference pipelines. Identify bottlenecks — whether in model serving, data retrieval, or API orchestration — and address them before they affect user experience or SLA compliance
  • Logging & debugging: Build structured logging that makes production issues diagnosable. When something breaks or a model behaves unexpectedly, you should be able to trace the issue from output back to input without reconstructing context from scratch
  • Cost control: Own inference and infrastructure costs, particularly for LLM-heavy systems where token usage can scale unexpectedly. Implement caching strategies, model routing, batching, and right-sizing to keep costs predictable and within budget — both for customer products and end-client engagements with fixed-cost structures
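Caching is the first cost lever the bullet above names. A minimal sketch of an exact-match prompt cache wrapped around a model call, with a call counter so the cost saving is visible (`call_model` here is a hypothetical stand-in for a hosted-API client, not a real library call):

```python
import hashlib

class CachedModel:
    """Wrap a model callable with an exact-match prompt cache so repeated
    prompts do not incur repeated inference cost."""
    def __init__(self, call_model):
        self.call_model = call_model
        self.cache = {}
        self.calls = 0  # how many times the underlying model was actually billed

    def generate(self, prompt):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.call_model(prompt)
        return self.cache[key]

# Hypothetical stand-in for a hosted-API call.
model = CachedModel(lambda p: p.upper())
a = model.generate("summarise the report")
b = model.generate("summarise the report")  # served from cache, not billed
```

Exact-match caching only helps with repeated prompts; semantic caching, model routing, and batching address the rest, but the wrap-and-count structure generalises.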


WORKING ACROSS CUSTOMER & END-CLIENT CONTEXTS

A significant part of this role is adapting your engineering approach to context. On customer projects, you are building infrastructure that will outlast any single feature — prioritise extensibility, documentation, and long-term maintainability. On end-client engagements, you are often working under tighter timelines with less control over the surrounding environment — prioritise working software, clear interfaces, and systems that an end-client team can take ownership of after handover.


You will be expected to contribute to scoping conversations alongside the Product Lead — translating an end-client's problem into a realistic technical approach, flagging feasibility risks early, and helping ensure we only commit to things we can deliver well. Post-engagement, you are expected to contribute reusable components and documentation back to customer tooling so that the next similar project starts faster.


WHAT WE ARE LOOKING FOR

Must-Haves

  • 3–5 years of hands-on ML engineering experience: with a demonstrable track record of shipping models into production, not just running experiments
  • Strong Python engineering skills: including experience building production-grade APIs and services, not just notebooks and scripts
  • Practical LLM experience: including RAG architectures, prompt engineering, evaluation, and working with both hosted APIs and open-weight models
  • Experience building and maintaining data pipelines: that handle real-world data quality issues at scale
  • MLOps fundamentals: deployment pipelines, model monitoring, logging, and cost-aware infrastructure decisions
  • Comfort working across the full stack: you do not need to be a specialist frontend engineer, but you can build a functional interface and wire it to a backend without needing someone to do it for you
  • Clear technical communication: you can explain what a model does, why it behaves as it does, and what its limitations are — to both technical teammates and non-technical stakeholders


Strong Differentiators

  • Experience delivering AI systems in a consulting or agency context: where you have had to build, hand over, and support systems you did not have ongoing ownership of
  • Fine-tuning experience: including data curation, training runs, and evaluation on open-weight models such as Llama, Mistral, or similar
  • Familiarity with vector databases and semantic retrieval: in production RAG systems — not just toy implementations
  • Experience with LLM cost optimisation: caching layers, model routing, token budgeting, and batch inference strategies
  • Contributions to customer tooling or reusable ML infrastructure: that accelerated subsequent projects


WHAT WE OFFER


  • A full-stack role with real scope across data engineering, model development, APIs, frontend, and MLOps
  • Exposure to international clients and cross-border AI consulting projects
  • Competitive salary plus benefits (PAYE, pension, private health insurance)
  • Ongoing training and professional development
  • A performance-based relocation pathway to Eryk's operations in Poland


A Team That Takes the Work Seriously

At Eryk, we build things that last and ship things that work. You will be part of a team that holds itself to a high standard and gives you the environment to do the same.


We are looking for engineers who take pride in systems that keep working long after the first demo. If you care as much about the monitoring dashboard as the model architecture, you will fit in here.

