About The Role
What if your job was to find every way an AI system could be fooled, manipulated, or exploited? That's exactly what we're hiring for. We're looking for security-minded professionals to red-team AI models, probe safety guardrails, and help make the next generation of AI systems more robust and trustworthy. This is a fully remote, flexible contract role where your work directly shapes the safety of AI products used by millions of people.
* Organization: Alignerr
* Type: Hourly Contract
* Location: Remote
* Commitment: 10–40 hours/week
What You'll Do
* Conduct red-teaming exercises to uncover security weaknesses in AI systems
* Design and execute adversarial prompts and edge-case scenarios to stress-test model guardrails
* Evaluate AI outputs for safety risks, bias, and policy compliance
* Document vulnerabilities, unexpected behaviors, and exploits in clear, structured reports
* Collaborate with engineering teams to recommend practical mitigations and improvements
* Stay current on emerging AI security threats, jailbreak techniques, and evolving best practices
* Help define and refine security evaluation rubrics and testing protocols
Who You Are
* You have a solid understanding of cybersecurity concepts, threat modeling, or penetration testing
* You've worked hands-on with AI/ML systems, LLMs, or prompt engineering
* You're a creative, analytical thinker who enjoys breaking things to make them better
* You write clearly and document your findings with precision
* You're comfortable working independently on asynchronous, task-based assignments
* Familiarity with open-source AI platforms is a plus
* A background in infosec, ethical hacking, or AI safety research is a bonus — but not required
Why Join Us
* Work at the frontier — contribute to one of the most critical and fast-moving areas in tech: AI safety and security
* Real impact — your findings directly improve AI systems relied on by millions of users worldwide
* Full flexibility — set your own schedule and work from anywhere, fully remote
* Build rare expertise — deepen your skills in AI red-teaming, a field with enormous and growing demand
* Ongoing opportunity — strong performers are considered for expanded scope and contract extensions
* Collaborate globally — work alongside researchers and engineers from top AI labs around the world