High-growth consumer technology company seeks Senior AI Risk and Compliance Analyst to own complex AI-related risk assessments and develop governance frameworks.
AI Risk & Compliance Analyst
This is a senior individual contributor role within GRC at a high-growth consumer technology company that has gone all-in on AI. We're talking LLM-powered product features, coding assistants across engineering, enterprise AI tooling, and proprietary internal agents. The security function is new but scaling fast, and the CISO needs someone who can own the complex AI-related risk assessments that currently land on her desk. This person will be the subject matter expert on AI governance, not a generalist doing a bit of AI work on the side.
Compensation: $100,000 to $135,000 base salary plus equity, with total compensation above $200,000. Level can flex up for exceptional candidates.
Logistics: On-site, East Coast. Relocation support available for the right candidate.
Here's what you'll be doing:
- Own complex third-party risk assessments for AI vendors, LLM platforms, AI APIs, and enterprise AI tools. You will evaluate risks that go beyond checkbox compliance, thinking through integration dependencies, data flows, identity implications, and maturity gaps.
- Develop and maintain AI-specific governance frameworks, policies, and controls aligned with ISO 27001, NIST CSF, NIST AI RMF, EU AI Act, and other applicable standards.
- Partner with engineering and product teams to translate technical AI risks (data poisoning, prompt injection, model misuse, data leakage, explainability gaps) into documented control requirements.
- Track emerging AI regulations and guidance, translating them into actionable program updates and compliance recommendations.
- Support audit activities and coordinate cross-functional stakeholders for compliance reviews involving AI systems.
- Apply your AI knowledge internally to drive operational efficiencies within GRC and InfoSec.
And what you need to have:
- 2+ years of hands-on experience performing governance or risk assessments for AI/ML systems, including LLM integrations, model pipelines, AI agents, or retrieval-augmented workflows.
- Technical fluency with AI architectures. You can talk through building a retrieval-augmented generation (RAG) implementation, explain how RAG differs from Model Context Protocol (MCP) workflows, and identify where the security and compliance pitfalls live.
- Experience conducting third-party risk assessments for AI vendors, LLM platforms, or ML service providers.
- Familiarity with relevant frameworks: ISO 27001, NIST CSF, NIST AI RMF, ISO 42001, GDPR, or similar.
- Strong understanding of data governance concepts relevant to AI: training data lineage, data retention, model output handling, and human oversight requirements.
- Proven ability to manage a high volume of work and context-switch effectively. A consulting background or similar high-intensity environment is strongly preferred.
- Bachelor's degree in Information Security, Computer Science, Business Risk, Compliance, or related field. Relevant certifications (CISA, CISM, CRISC, CISSP, AIGP) a plus but not required.
No C2C or sponsorship at this time.