Job Description
Roles & Responsibilities
Role Overview
The Data & AI Governance Lead will design, implement, and operationalize enterprise-grade data and model governance frameworks to support AI, ML, and GenAI initiatives in regulated environments. The role ensures that AI solutions are secure, compliant, auditable, and scalable while enabling innovation for business stakeholders. The position partners closely with the Data Management Office (DMO), Risk, Compliance, Finance, IT, and business teams to translate regulatory expectations into practical, enforceable controls across Azure, Databricks, and AI platforms.
Key Responsibilities
1. Data Governance for AI
- Define and operationalize enterprise data classification, metadata standards, and attribute tagging for AI usage, including PII/PCI/PHI, sensitivity tiers, usage restrictions, and consent.
- Design access and entitlement models for AI and agent-based workloads, including:
  - Azure Entra ID integration
  - RBAC/ABAC entitlement models
  - Privileged access management
  - Break-glass procedures
  - End-to-end audit trails and traceability
- Establish metadata, catalog, and lineage operating models using Collibra and/or Microsoft Purview, covering:
  - Lineage for AI pipelines
  - Governance of API-based data acquisition
  - Governance of vector databases and RAG stores
- Define cross-border data usage rules aligned with MAS, HKMA, JFSA, GDPR, PDPA, and other regulatory frameworks.
- Set governance boundaries for platforms such as Databricks and Azure, including:
  - Unity Catalog governance
  - Lakehouse permissions and Delta Sharing
  - Data residency and jurisdictional controls
2. Model Governance (LLM / ML)
- Implement lifecycle governance across ML and LLM models, including:
  - Model inventory and registration
  - Risk classification and tiering
  - Approval workflows and sign-offs
  - Model documentation and validation standards
  - Human-in-the-loop controls and checkpoints
- Establish observability and monitoring for ML/LLM models, including:
  - Drift, bias, and performance monitoring
  - Prompt and output logging
  - Toxicity detection and filtering
  - Rollback mechanisms and release governance
  - Integration with MLflow and Model Registry platforms
- Operationalize AI safety, including:
  - Red-teaming and adversarial testing
  - Secure prompt design principles
  - Sensitive data minimization and retention
  - AI incident response and escalation procedures
Are you interested in this position?
Apply by clicking on the “Apply Now” button below!
Apply Now