Contract Research Collaboration: Fine-Tuning Large Language Models

tensorops Portugal
Remote

Job Description


Location: Remote

Duration: 2–4 months (project-based)

Type: Contract / Research Collaboration (Paid)

About the Project

We are looking for a Master’s or PhD student to work on fine-tuning large language models (LLMs) for domain-specific tasks. The goal is to take an existing pretrained model (e.g., Meta AI’s LLaMA-class models or similar) and specialize it for a narrow, high-value use case using efficient fine-tuning techniques.

This is a hands-on applied project designed for someone who wants real-world experience deploying and optimising LLM systems.

Help drive the next wave of applied AI by demonstrating how fine-tuned LLMs can unlock advanced, real-world use cases beyond general-purpose foundation models. Organizations that require domain-specific accuracy, self-hosted deployments, customisable workflows, or performance beyond out-of-the-box capabilities increasingly rely on fine-tuned models.

Through this project, you will contribute to building specialised AI systems that deliver improved accuracy, efficiency, and control compared to out-of-the-box models. You will also help bridge the gap between academic knowledge and real-world application by applying fine-tuning techniques to solve concrete business problems.

What You’ll Work On

  • Fine-tuning pre-trained LLMs on small to medium datasets (500–20k examples)
  • Implementing parameter-efficient fine-tuning (e.g., LoRA-style methods)
  • Optimising training for cost and performance
  • Running experiments on GPU cloud infrastructure
  • Evaluating model performance and tradeoffs (specialisation vs generalisation)
  • Deploying fine-tuned models for inference
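To give a flavour of the parameter-efficient fine-tuning work listed above, here is a minimal NumPy sketch of the LoRA idea (an illustration only; it is not tied to any specific library, model, or codebase used in this project). Instead of updating a full weight matrix W, LoRA freezes W and trains a low-rank correction B @ A, which shrinks the trainable parameter count dramatically:

```python
import numpy as np

# Minimal sketch of the LoRA idea: a frozen pretrained weight W (d_out x d_in)
# plus a trainable low-rank update B @ A with rank r << min(d_out, d_in).
# The dimensions below are illustrative, not taken from the project.
d_in, d_out, r = 4096, 4096, 8
alpha = 16  # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init 0
                                           # (so training starts at W exactly)

def lora_forward(x):
    # Base output plus the scaled low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size                # what a full fine-tune would update
lora_params = A.size + B.size       # what LoRA updates instead
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full)")
```

With these illustrative dimensions, LoRA trains well under 1% of the parameters a full fine-tune would touch, which is the cost/performance lever the responsibilities above refer to.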

Experience

  • Strong Python skills
  • Experience with deep learning frameworks: PyTorch (preferred) or TensorFlow
  • Experience with Hugging Face Transformers or similar ecosystems
  • Hands-on experience training or fine-tuning transformer models on GPUs (local or cloud-based)
  • Previous experience using cloud platforms for model training or deployment (e.g., AWS, GCP, Azure, RunPod or similar GPU providers)
  • Experience working with or fine-tuning open-weight LLM families (Gemma-3, Qwen-3.5, Llama 4, GPT-OSS, Mistral...)
  • Hands-on experience with LoRA

Understanding of:

  • Fine-tuning vs pretraining
  • Overfitting and generalisation
  • Model evaluation
  • The business context of a fine-tuning task, and how to translate domain requirements into clear modelling objectives

What you bring

  • MSc or PhD student in Computer Science, Machine Learning, AI, or a related field
  • Alternatively, at least 6 months of hands-on experience training and fine-tuning deep learning models
  • Experience working on LLMs in research or industry
  • Experience fine-tuning at least one transformer model
  • Comfortable working independently
  • Interested in applied AI and real-world constraints (cost, latency, memory)

What You’ll Gain

  • Real-world experience fine-tuning large models (30B–100B parameter class)
  • Exposure to production constraints and deployment
  • Opportunity to co-author technical writeups if applicable
  • Strong applied portfolio project

What We Offer

  • 100% Remote Work: Work from anywhere with flexibility and autonomy
  • Dynamic, High-Impact Projects: Work on cutting-edge ML and GenAI solutions across diverse industries
  • International Clients: Collaborate with global organizations and solve real-world challenges at scale
  • Urban Sports Club Membership: Supporting your physical and mental wellbeing
  • Monthly Bolt Credits: For rides
  • Company Events & Offsites: Regular team gatherings to connect, collaborate, and celebrate
