We are working with a market-leading financial services client that is aggressively deploying Generative AI and LLMs to redefine its competitive edge. However, innovation without infrastructure is a liability. They are looking for an experienced AI Governance & Risk Specialist to design, implement, and lead the guardrails that ensure their AI strategy is ethical, compliant, and secure.
You aren't here to say "No"—you are here to show us how to say "Yes" safely.
Key Responsibilities
* Framework Design: Build the end-to-end AI Governance Framework from scratch, covering model transparency, bias mitigation, and data lineage.
* Guardrail Implementation: Establish clear policies for the use of third-party LLMs (e.g., ChatGPT, Claude) and internal proprietary models to prevent data leakage and "Shadow AI."
* Regulatory Navigation: Act as the internal SME on global AI regulations (EU AI Act, NIST AI RMF, local privacy laws) and translate them into technical requirements.
* Algorithmic Auditing: Partner with Data Science teams to conduct "pre-flight" risk assessments on AI use cases, focusing on explainability and "human-in-the-loop" protocols.
* Stakeholder Diplomacy: Bridge the gap between the Board (Risk), Legal (Compliance), and Engineering (Speed) to ensure a unified AI safety culture.
What You Bring
* The Hybrid Profile: 7+ years in Data Governance, Privacy, or Tech Risk, with at least 2 years focused on Machine Learning or AI ethics.
* Technical Literacy: You don't need to code the models, but you must understand how model weights, tokens, and training data affect risk.
* Regulatory Instinct: Deep knowledge of GDPR/APPs and emerging AI-specific legal frameworks.
* Influence: The ability to walk into a room of developers and explain why "black box" models won't pass a risk audit — without losing their respect.