Conversational AI Evaluator

YO IT CONSULTING • Netherlands
Remote

Job Description


  • Role: Permanent Remote
  • Contract: Independent Contractor (Short-term Project)
  • This is an Independent Contractor role only; no C2C or C2H options are available
  • Both Part-Time and Full-Time engagement options are possible

Role Overview

YO IT Consulting partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions.

In coding and software engineering contexts, conversational AI systems must demonstrate correct reasoning, strong problem-solving ability, and adherence to real-world engineering best practices. This project focuses on evaluating and improving how models reason about code, generate solutions, and explain technical concepts across a variety of programming tasks and complexity levels.

What You’ll Do

  • Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
  • Conduct fact-checking using trusted public sources and authoritative references
  • Conduct accuracy testing by executing code and validating outputs using appropriate tools (a minimal sketch of this follows the list below)
  • Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies
  • Assess code quality, readability, algorithmic soundness, and explanation quality
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
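
As a rough illustration of the accuracy-testing work above, the sketch below runs a hypothetical model-generated Python snippet in a separate interpreter and compares its output against an expected value. The snippet, test case, and expected output are invented for this example; the project supplies its own tooling, taxonomies, and guidelines.

import subprocess
import sys
import tempfile

# Hypothetical model-generated solution copied from a model response.
MODEL_SNIPPET = """
def two_sum(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

print(two_sum([2, 7, 11, 15], 9))
"""

# Expected output for the hypothetical test case above.
EXPECTED = "[0, 1]"

def run_snippet(code: str) -> str:
    """Write the snippet to a temp file and execute it in a fresh interpreter."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=10
    )
    return result.stdout.strip()

if __name__ == "__main__":
    actual = run_snippet(MODEL_SNIPPET)
    verdict = "PASS" if actual == EXPECTED else "FAIL"
    print(f"expected={EXPECTED!r} actual={actual!r} -> {verdict}")

In practice, a check like this would be paired with a written annotation explaining why the output passes or fails.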

Who You Are

  • You hold a BS, MS, or PhD in Computer Science or a closely related field
  • You have significant (5+ years) real-world experience in software engineering or related technical roles
  • You are an expert in at least two relevant programming languages (e.g., Python, Java, C++, C, JavaScript, Go, Rust, Ruby, SQL, PowerShell, Bash, Swift, Kotlin, R, TypeScript, HTML/CSS)
  • You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
  • You have experience contributing to well-known open-source projects, including merged pull requests
  • You have significant experience using LLMs while coding and understand their strengths and failure modes
  • You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws

Nice-to-Have Specialties

  • Prior experience with RLHF, model evaluation, or data annotation work
  • Track record in competitive programming
  • Experience reviewing code in production environments
  • Familiarity with multiple programming paradigms or ecosystems
  • Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

  • You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions (see the example after this list)
  • Your feedback improves the correctness, robustness, and clarity of AI coding outputs
  • You deliver reproducible evaluation artifacts that strengthen model performance
  • Your work helps users trust AI systems to assist reliably with real-world coding tasks
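
As a hedged illustration of the kind of subtle flaw called out in the first point above, the short example below shows a hypothetical model-generated helper together with evaluator-style annotations; the function, test values, and annotation text are invented for this example.

# Hypothetical model-generated helper an evaluator might review.
def moving_average(values, window):
    """Return the average of each sliding window over the input."""
    averages = []
    for i in range(len(values) - window):  # off-by-one: the last full window is skipped
        averages.append(sum(values[i:i + window]) / window)
    return averages

# Evaluator-style annotations (illustrative):
# - Incorrect logic: the loop should use range(len(values) - window + 1),
#   otherwise the final window is never averaged.
# - Missing edge cases: window <= 0 or window > len(values) silently
#   returns [] instead of signalling an error.
print(moving_average([1, 2, 3, 4], 2))  # prints [1.5, 2.5]; the 3.5 window is missing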

Contract and Payment Terms

  • You will be engaged as an independent contractor.
  • This is a fully remote role that can be completed on your own schedule.
  • Projects can be extended, shortened, or concluded early depending on needs and performance.
  • Payments are made weekly via Stripe or Wise, based on services rendered.
