Join Bluebird's small, highly technical team as an LLM Engineer to take ownership of language-model systems end-to-end. Design, build, and scale LLM-powered applications in Python, including RAG pipelines, orchestration logic, and multi-model workflows. Collaborate with senior researchers and experienced engineers to solve meaningful problems with production-grade AI.
Job Description
Location: London
Our client is a small, highly technical team building real-world LLM-powered systems and agentic applications - tools that are already in the hands of users, not just demos or experiments.
They focus on solving meaningful problems with production-grade AI, combining strong engineering fundamentals with thoughtful experimentation. The team is pragmatic, product-oriented, and deeply cares about clean code, fast iteration, and systems that scale.
They’re now looking for an LLM Engineer to take ownership of language-model systems end to end - from training and fine-tuning through to evaluation and deployment.
🔥 What You’ll Get
- Competitive salary - up to €110,000, depending on experience
- Ownership of production LLM systems at scale, with the freedom to work across the full ML lifecycle on top of strong, production-ready infrastructure
- Close collaboration with senior researchers and experienced engineers
🧠 Your Impact
- Own the full ML lifecycle - training, fine-tuning, evaluation, and deployment of LLM systems into production
- Design, build, and scale LLM-powered and agentic applications in Python, including RAG pipelines, orchestration logic, and multi-model workflows using modern LLM APIs (OpenAI, Anthropic, Gemini)
- Deploy and operate models on production-grade infrastructure, including Kubernetes, cloud platforms, and multi-GPU environments
✅ What We’re Looking For
- Strong Python engineering fundamentals with hands-on experience building and deploying LLM-based systems in production
- Solid understanding of RAG architectures, embeddings, evaluation techniques, and modern LLM tooling (e.g. PyTorch, Hugging Face, FastAPI)
- Experience working with production ML infrastructure, including model deployment, monitoring, and iteration
🚀 What Will Make You Stand Out
- Experience with agentic workflows, tool calling, or multi-step LLM systems
- Familiarity with ML observability and experimentation tools (MLflow, W&B)
- Experience working with computer vision or multi-modal AI systems
- Comfort deploying models on cloud infrastructure (AWS/GCP, Kubernetes)
Ready to build LLM systems that power real products used at scale - not hype-driven experiments?
Apply now to work on pragmatic, production-first AI with real users and real impact.