We're seeking a senior Python developer to build LLM-powered internal tools and MCP servers that integrate with our production systems. The role involves building Python services that expose capabilities to LLMs, with ownership of workflows, data modeling, and API design. The ideal candidate has 3+ years of experience shipping production services and has built real LLM integrations.
Remote · Full-time · EU or North American time zone overlap
We've got production systems that actually matter — real traffic, real Kubernetes clusters, real consequences when things break. We're building out a set of LLM-powered internal tools and MCP servers that sit on top of those systems: ticket analysis, support tooling, workflow automation, and integrations with our CRM and operational data. This role is about building that layer.
We're not looking for an LLM researcher or a prompt engineer. We're looking for a solid Python developer who's comfortable being pointed at "build an MCP server that does X" or "wire this workflow into our internal LLM stack" and can execute without needing to be walked through every decision.
What you'll actually work on
The bulk of the work is building Python services that integrate with LLMs and expose capabilities to them — MCP servers, FastAPI backends for LLM-driven workflows, retrieval and context-assembly pipelines, and the glue between our operational systems (WHMCS, Postgres, internal APIs) and the models that need to reason over them.
You'll own services end-to-end: schema, API surface, tests, CI/CD, deployment. When a tool returns garbage to the model and the model confidently hands garbage to a user, you'll be part of figuring out whether the bug is in the data layer, the tool contract, the prompt, or the workflow — and fixing it at the right level.
Some of this is greenfield. Some of it is extending things we've already built. All of it runs in production against real users, so "it worked on my machine with a toy prompt" is not where we stop.
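As a flavor of the tool-contract work described above, here is a minimal sketch of how a capability might be exposed to a model: a typed contract plus validation at the boundary, so bad data gets caught before the model sees it. All names and field shapes here are hypothetical, not our actual schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical contract for a ticket-lookup tool.
# The point: the model sees a fixed, validated shape — never raw DB rows.
@dataclass
class TicketSummary:
    ticket_id: int
    status: str   # one of "open", "pending", "closed"
    subject: str

VALID_STATUSES = {"open", "pending", "closed"}

def get_ticket_summary(row: dict) -> str:
    """Map a raw record (e.g. from WHMCS/Postgres) onto the contract,
    failing loudly instead of handing malformed data to the model."""
    summary = TicketSummary(
        ticket_id=int(row["id"]),
        status=row["status"],
        subject=row["subject"].strip(),
    )
    if summary.status not in VALID_STATUSES:
        raise ValueError(f"unexpected status: {summary.status!r}")
    return json.dumps(asdict(summary))
```

Failing at this layer is what makes the "which layer is the bug in" question above answerable at all.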
The stack
Backend: Python, FastAPI, Django, PostgreSQL, Redis
LLM/MCP: Open WebUI, Ollama, Anthropic and OpenAI APIs, MCP (Model Context Protocol), retrieval-augmented workflows
Infrastructure: Kubernetes, Docker, GitHub Actions, Gitea Actions
You'll also need solid footing in API design, auth, and data modeling. You don't need to be a networking or platform expert, but you need to be comfortable reading a Kubernetes manifest and understanding what your service is running on.
Who we're looking for
You're a competent Python developer with 3+ years of professional experience shipping production services. You've built real APIs, you've debugged production incidents, and you understand the difference between "the tests pass" and "this is actually ready."
You've shipped at least one real LLM integration — an API-backed feature, a RAG pipeline, an agent, an MCP server, something. You understand that LLM outputs are non-deterministic, that tool contracts matter more than clever prompts, and that observability on these systems is harder than on normal backends. You don't need to be an expert in model selection, fine-tuning, or eval frameworks — but you know the vocabulary and you're not going to be surprised when a model hallucinates a field that doesn't exist.
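To make the "hallucinated field" point concrete: before trusting structured output from a model, check it against the schema you actually asked for. A stdlib-only sketch, with hypothetical field names:

```python
import json

# Fields we asked the model for, with expected types.
EXPECTED_FIELDS = {"ticket_id": int, "priority": str, "needs_escalation": bool}

def parse_model_output(raw: str) -> dict:
    """Parse and validate a model's JSON output. Rejects missing fields,
    wrong types, and — crucially — fields the model invented."""
    data = json.loads(raw)
    unknown = set(data) - set(EXPECTED_FIELDS)
    if unknown:
        raise ValueError(f"model hallucinated fields: {sorted(unknown)}")
    for field, typ in EXPECTED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"bad type for {field}: {type(data[field]).__name__}")
    return data
```

In practice you'd likely reach for Pydantic for this, but the discipline is the same: validate at the boundary, fail loudly.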
Familiarity with MCP is a plus. If you haven't built an MCP server yet, you should be able to read the spec and ship one within your first couple of weeks.
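For a sense of scale: MCP is a JSON-RPC 2.0 protocol, and while real servers should use the official Python SDK (which handles initialization, tool schemas, and transport framing), the core request shapes are simple. A hand-rolled illustrative dispatcher, with a hypothetical tool registry:

```python
def handle_echo(arguments: dict) -> str:
    # Hypothetical tool: uppercase the given text.
    return arguments["text"].upper()

TOOLS = {"echo_upper": handle_echo}  # hypothetical tool registry

def handle_request(req: dict) -> dict:
    """Dispatch a simplified MCP-style JSON-RPC request.
    Illustrative only — use the official `mcp` SDK in production."""
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        name = req["params"]["name"]
        text = TOOLS[name](req["params"].get("arguments", {}))
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}
```

If that sketch reads as obvious to you, you'll be fine reading the actual spec.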
You take direction well. The LLM/product direction comes from elsewhere on the team — your job is to build what's specified, push back when the spec is wrong, and ship something durable. You're not looking for a role where you get to decide which model we use this quarter.
English B1+ is required. We're fully distributed and async communication is a real skill here.
Applying
We ask all candidates to answer a few questions in writing. A strong answer here matters more than a polished CV.
- Describe a backend system you've owned in production — what it did, what broke, and what you changed.
- Walk us through an LLM-integrated feature or service you've shipped. What did the tool/API surface look like, how did you handle failure modes (hallucinations, timeouts, malformed outputs), and what would you do differently next time?
- You're asked to build an MCP server that exposes ticket data from an internal CRM to an LLM-powered support assistant. Sketch the approach: what tools you'd expose, how you'd shape the data, what you'd be careful about, and what you'd want clarified before starting.
- Walk us through how you'd structure and deploy a Python service from scratch: testing, CI/CD, environments, secrets.