AI Red-Teaming Security Specialist

Alignerr • Australia
Remote
Apply
AI Summary

Conduct red-teaming exercises to uncover security weaknesses in AI systems. Evaluate AI outputs for safety risks, bias, and policy compliance. Collaborate with engineering teams to recommend practical mitigations and improvements.

Key Highlights
Red-team AI models
Evaluate AI outputs for safety risks
Collaborate with engineering teams
Key Responsibilities
Conduct red-teaming exercises
Design and execute adversarial prompts
Evaluate AI outputs
Document vulnerabilities
Collaborate with engineering teams
Technical Skills Required
Cybersecurity concepts Threat modeling Penetration testing AI/ML systems LLMs Prompt engineering
Benefits & Perks
Fully remote work
Flexible contract role
10–40 hours/week
Nice to Have
Familiarity with open-source AI platforms
Background in infosec, ethical hacking, or AI safety research

Job Description


About The Role

What if your job was to find every way an AI system could be fooled, manipulated, or exploited? That's exactly what we're hiring for. We're looking for security-minded professionals to red-team AI models, probe safety guardrails, and help make the next generation of AI systems more robust and trustworthy.

This is a fully remote, flexible contract role where your work directly shapes the safety of AI products used by millions of people.

  • Organization: Alignerr
  • Type: Hourly Contract
  • Location: Remote
  • Commitment: 10–40 hours/week

What You'll Do

  • Conduct red-teaming exercises to uncover security weaknesses in AI systems
  • Design and execute adversarial prompts and edge-case scenarios to stress-test model guardrails
  • Evaluate AI outputs for safety risks, bias, and policy compliance
  • Document vulnerabilities, unexpected behaviors, and exploits in clear, structured reports
  • Collaborate with engineering teams to recommend practical mitigations and improvements
  • Stay current on emerging AI security threats, jailbreak techniques, and evolving best practices
  • Help define and refine security evaluation rubrics and testing protocols

Who You Are

  • You have a solid understanding of cybersecurity concepts, threat modeling, or penetration testing
  • You've worked hands-on with AI/ML systems, LLMs, or prompt engineering
  • You're a creative, analytical thinker who enjoys breaking things to make them better
  • You write clearly and document your findings with precision
  • You're comfortable working independently on asynchronous, task-based assignments
  • Familiarity with open-source AI platforms is a plus
  • A background in infosec, ethical hacking, or AI safety research is a bonus — but not required

Why Join Us

  • Work at the frontier — contribute to one of the most critical and fast-moving areas in tech: AI safety and security
  • Real impact — your findings directly improve AI systems relied on by millions of users worldwide
  • Full flexibility — set your own schedule and work from anywhere, fully remote
  • Build rare expertise — deepen your skills in AI red-teaming, a field with enormous and growing demand
  • Ongoing opportunity — strong performers are considered for expanded scope and contract extensions
  • Collaborate globally — work alongside researchers and engineers from top AI labs around the world


Similar Jobs

Explore other opportunities that match your interests

Security Operations Analyst for AI Training

Cyber Security
•
17h ago
Visa Sponsorship Relocation Remote
Job Type Contract
Experience Level Entry level

Alignerr

Australia

Cybersecurity AI Model Validator

Cyber Security
•
1d ago
Visa Sponsorship Relocation Remote
Job Type Full-time
Experience Level Entry level

DataAnnotation

Australia
Visa Sponsorship Relocation Remote
Job Type Part-time
Experience Level Entry level

two words away

Australia

Subscribe our newsletter

New Things Will Always Update Regularly