Job Description
Senior AI Quality Engineer (LLM / Agentic Systems) - 100% Remote
Optomi, in partnership with a major leader in the airline and travel technology industry, is seeking a Senior AI Quality Engineer to help ensure the reliability, accuracy, and scalability of next-generation AI systems powering modern software delivery and operational workflows.
This role will focus on testing and validating generative AI and agent-based systems, including complex multi-agent architectures responsible for automation, decision-making, and workflow orchestration. The ideal candidate will design and implement end-to-end testing strategies, build reusable test frameworks, and validate the performance and resilience of AI-driven systems operating in production environments.
You will collaborate closely with AI engineers, platform engineers, MLOps teams, and operations leaders to ensure agentic systems operate reliably at scale while meeting strict performance, safety, and compliance requirements.
What the Right Professional Will Enjoy!
- The opportunity to work on cutting-edge AI and multi-agent systems that automate and enhance complex enterprise workflows
- Partnering with AI engineers, data scientists, and platform teams to bring generative AI systems from development into production
- Designing testing frameworks for next-generation autonomous systems, including planner-executor models and multi-agent orchestration
- Building evaluation pipelines that measure accuracy, reliability, safety, and cost performance of AI-driven applications
- Working within a highly collaborative engineering environment focused on innovation, scalability, and operational excellence
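To make the "evaluation pipelines" bullet above concrete, here is a minimal sketch of a harness that runs test cases through an agent and aggregates accuracy, latency, and cost. The `fake_agent` stub, the containment-based correctness check, and the per-call cost figure are hypothetical illustrations, not details from this role; a production pipeline would call a real LLM or agent endpoint and use richer scoring.

```python
import time
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float
    avg_latency_s: float
    total_cost_usd: float

def run_eval(agent, cases, cost_per_call=0.002):
    """Run each (prompt, expected) case through `agent` and aggregate metrics."""
    correct = 0
    latencies = []
    for prompt, expected in cases:
        start = time.perf_counter()
        answer = agent(prompt)
        latencies.append(time.perf_counter() - start)
        # Naive correctness check: expected answer appears in the output.
        if expected.lower() in answer.lower():
            correct += 1
    return EvalResult(
        accuracy=correct / len(cases),
        avg_latency_s=sum(latencies) / len(latencies),
        total_cost_usd=cost_per_call * len(cases),
    )

# Stub standing in for a real LLM/agent call.
def fake_agent(prompt: str) -> str:
    return "Paris" if "capital of France" in prompt else "unknown"

cases = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Atlantis?", "Mu"),
]
result = run_eval(fake_agent, cases)
```

The same harness shape extends to the safety and cost dimensions the bullet mentions: add per-case guardrail checks alongside the correctness check, and feed real token counts into the cost calculation.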
Apply Today If Your Background Includes
- 7+ years of experience in software quality engineering or test automation, including experience designing testing frameworks
- 2+ years of experience working with AI/ML systems, generative AI applications, or LLM-based platforms
- Strong programming experience with Python, TypeScript, or JavaScript for building test harnesses and automation frameworks
- Experience evaluating LLM outputs using techniques such as semantic similarity, embeddings, or traditional NLP evaluation metrics
- Background testing distributed systems, including resiliency, latency profiling, and fault tolerance
- Familiarity with agent orchestration frameworks such as LangChain, LangGraph, LlamaIndex, DSPy, or similar tooling
- Experience working with CI/CD pipelines and modern observability platforms (Datadog, Prometheus, OpenTelemetry, Grafana)
- Understanding of security, safety, and compliance considerations for AI systems, including PII handling and model guardrails
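The similarity-based LLM evaluation mentioned in the list above can be sketched in a few lines. This toy version uses bag-of-words vectors and cosine similarity so it stays dependency-free; a real pipeline would swap `embed` for a sentence-embedding model, and the `judge` threshold of 0.5 is an arbitrary illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real evaluator would
    # use a learned sentence-embedding model; this only shows the scoring shape.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def judge(candidate: str, reference: str, threshold: float = 0.5) -> bool:
    """Pass/fail an LLM output by its similarity to a reference answer."""
    return cosine_similarity(embed(candidate), embed(reference)) >= threshold

score = cosine_similarity(
    embed("the flight was delayed two hours"),
    embed("the flight was delayed by two hours"),
)
```

Near-paraphrases score high even though they are not string-equal, which is exactly why embedding-based metrics are preferred over exact match when validating generative output.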