Senior AI Engineer (GenAI) - Next-Generation AI Capabilities

InteractiveAI • France
Relocation

Job Description


[Relocation to Madrid/Lisbon]


About InteractiveAI

InteractiveAI is a fast-growing startup on a mission to empower enterprises with fully managed AI agent lifecycles.

We are building the next generation of enterprise-AI solutions, delivering an end-to-end Agentic IDE alongside an extensible ecosystem of agentic resources and solutions. Our platform allows companies to orchestrate, monitor, evaluate, deploy and improve AI agents—and soon fine-tune and own their own models.

We value autonomy, speed, and innovation, and we’re assembling a world-class team to match. Our squads are lean, highly skilled, and execution-driven.

If you thrive in high-performance environments and want to join a company that rewards transformational outcomes, this is for you.


What You’ll Do

As a Senior AI Engineer (GenAI) at InteractiveAI, you’ll play a key role in developing our next-generation AI capabilities, advancing our LLM, SLM, and fine-tuning workflows while contributing to the core model development that powers our platform.

You’ll work closely with the Chief of AI and a cross-functional squad to design, experiment with, and deploy cutting-edge foundation models, agentic architectures, and evaluation frameworks. You will own hands-on experimentation, model training, optimization, and productionization—helping to push the boundaries of GenAI performance inside enterprise environments.

You’ll contribute to org-wide AI standards, model development best practices, and high-quality engineering execution.

  • Build and maintain scalable pipelines for structured/unstructured data ingestion, transformation, and feature engineering
  • Deploy ML models, LLMs, and SLMs into production, ensuring performance, reliability, and traceability
  • Develop fine-tuning pipelines for foundation models, with versioned checkpoints, experiment tracking, and evaluation workflows
  • Implement automated evaluation frameworks (A/B testing, LLM-as-judge, validation suites) and dashboards tracking latency, accuracy, drift, and maintenance triggers
  • Develop feature engineering, imputation, and data transformation strategies for complex, real-world use cases
  • Implement and optimize retrieval-augmented generation (RAG) pipelines, vector search, and grounding strategies
  • Build enterprise-grade agentic workflows, integrate tools, and evaluate agentic system performance
  • Optimize inference speed, memory usage, and cost across LLM/SLM deployments
  • Ensure reliability and performance of models in production, addressing issues around latency, accuracy, drift, and scaling
  • Collaborate with product and delivery teams to ship measurable, client-ready AI capabilities and accelerate new GenAI features


What We’re Looking For

We’re looking for a highly skilled AI engineer with strong foundations, hands-on GenAI experience, and a track record of building production-grade AI systems. You should be capable of contributing to core architecture discussions while also executing end-to-end model development work.


Minimum Requirements:

  • 4+ years in data engineering, ML engineering, applied AI, or related deep technical roles
  • Experience deploying ML models and LLMs/SLMs to production, with strong inference optimization skills
  • Hands-on experience with agent orchestration tools (LangGraph, LlamaIndex, or similar)
  • Experience training deep learning models and fine-tuning LLMs using modern frameworks
  • Fluency in Python and experience with at least one deep learning framework (PyTorch preferred; TensorFlow or JAX also acceptable)
  • Strong experience building production-grade data pipelines (batch or streaming) using tools like Airflow, Spark, or Dagster
  • Solid understanding of ML theory (bias-variance tradeoff, metrics, optimization, evaluation, probability, etc.)
  • Comfortable with cloud platforms (AWS, GCP, Azure) and containerized deployments
  • Strong communication skills and ability to work effectively in cross-functional teams


Additional Requirements:

  • Experience with LLMs/SLMs and RAG pipelines in production
  • Familiarity with vector databases, embeddings, and document retrieval strategies
  • Exposure to MLOps practices (monitoring, reproducibility, CI/CD for ML, automated evaluation)
  • Experience optimizing inference latency, throughput, and cost at scale
  • Experience working in regulated or enterprise environments (e.g., banking, insurance)
  • Bonus: experience with model distillation, quantization, or training smaller models (SLMs)


Who You Are

Proactive & Resourceful: You anticipate challenges, propose solutions, and help push model performance forward.

High-Ownership Engineer: You move with accountability, take responsibility for outcomes, and consistently raise the bar.

Entrepreneurial & Adaptive: You thrive in ambiguity, operate with speed, and deliver in a high-paced startup setting.

Collaborative Teammate: You work well across disciplines and help foster a culture of high performance.


What You’ll Get

  • Competitive base salary (€90,000/yr to €110,000/yr) + performance bonuses
  • Access to equity/share plan as it rolls out
  • Health & wellness allowances
  • Private health insurance
  • Flexible work setup + travel when needed (ideally hybrid in Lisbon or Madrid)
  • 25 days of holidays/paid time off (excluding local public holidays)


Interview Process

We keep our process focused and respectful of your time. Most candidates complete it in 2–3 weeks. Here’s what to expect:

  1. Intro Call – 30 minutes to align on fit and expectations
  2. Take-Home Challenge – A practical task based on real-world problems
  3. Technical Interview – Deep dive into the challenge, technical experience, and AI engineering
  4. Culture and Values Interview – Discussion of motivation and alignment with our culture and values
  5. Offer – Final conversation and offer

We’re building a team of builders — people who care about impact, quality, and growth. If that’s you, let’s talk — careers@interactive.ai

