AI Governance Lead (SDLC & Agentic AI Focus)
Reports To: Managing Director – AI Governance / Responsible AI
Role Overview As organisations accelerate adoption of Agentic AI and autonomous systems, this role ensures every AI-enabled solution is safe, compliant, and production‐ready before deployment.
This is a hands‐on AI Governance Lead position embedded directly into the Software Development Lifecycle (SDLC). You will operationalise Responsible AI principles across design, development, testing, deployment, and monitoring — implementing Shift‐Left Governance to identify risk before code reaches production.
You will work closely with AI engineering squads, model risk teams, security, and senior stakeholders to integrate Governance‐as‐Code frameworks, automate compliance controls, and oversee adversarial testing of AI systems.
Core Responsibilities
1. Design & Ideation (Guardrails)
* Lead AI Impact Assessments during the requirements and architecture phase
* Define allowable AI use cases and ethical boundaries
* Ensure models comply with regulatory expectations and Responsible AI principles
* Assess risk in agentic / autonomous AI workflows
2. Development & Coding (Shift‐Left Governance)
* Embed automated governance checks into IDEs (e.g., VS Code, Cursor) and CI/CD pipelines
* Implement Governance‐as‐Code frameworks to reduce manual review
* Scan for hallucinated libraries, insecure AI‐generated code, and prohibited data usage
* Integrate automated policy enforcement into MLOps / LLMOps workflows
3. Testing & Validation (Red‐Teaming & Adversarial Controls)
* Oversee adversarial testing and red‐teaming workstreams
* Conduct jailbreak testing to assess data leakage and prompt exploitation risks
* Validate models for bias, hallucinations, and security vulnerabilities
* Ensure robust Human‐in‐the‐Loop (HITL) controls
* Govern agent‐to‐agent communication and multi‐agent behaviour
4. Deployment & Lifecycle Monitoring
* Establish model drift, bias, and performance monitoring dashboards
* Define kill switches and rollback protocols for production AI
* Ensure audit‐ready model lineage, traceability, and observability
* Provide oversight across the full AI lifecycle, from pilot to production
Required Skills & Experience
* Experience in AI governance, model risk, data governance, or technology risk
* Strong technical capability embedding governance into SDLC pipelines
* Hands‐on experience with MLOps / LLMOps platforms such as Microsoft Azure Machine Learning or Amazon SageMaker
* Familiarity with adversarial AI testing tools such as Giskard, Deepchecks, or Microsoft Counterfit
* Practical knowledge of the NIST AI Risk Management Framework and ISO/IEC 42001
* Experience governing agentic AI / multi‐agent systems
* Ability to translate regulatory and ethical principles into technical enforcement controls
Key Performance Indicators
* Time-to-Trust: Reduce time for AI pilots to pass ethical and risk review
* Governance Automation %: Increase percentage of automated compliance checks within CI/CD
* Risk Mitigation: Zero High or Critical findings in post‐deployment AI audits
* Production Resilience: Effective drift detection and safe rollback in live environments
#J-18808-Ljbffr