Duration: 2 days (14 hours)
Overview
This training helps banks and fintechs build a practical governance and control system for AI initiatives (including GenAI), aligned to banking realities: model risk management, third-party dependency, fraud/scam threats, customer impact, regulatory scrutiny, and auditability.
It blends recognized global frameworks—NIST AI RMF (Govern–Map–Measure–Manage), ISO/IEC 42001 (AI management system), and ISO/IEC 23894 (AI risk guidance)—with PH regulatory touchpoints relevant to financial services: BSP’s thematic review on AI/ML controls and human oversight, BSP’s direction on model risk management (exposure draft), BSP operational resilience guidelines, BSP IT risk management amendments tied to AFASA, and NPC guidance when AI processes personal data.
Objectives
- Set up an AI governance operating model for banking/fintech: decision rights, committees, three lines of defense, and accountability.
- Create a risk-based AI use-case intake, classification, and approval workflow (fast-track vs enhanced review).
- Implement AI controls across the lifecycle (SDLC/MLOps): validation, backtesting, stress testing, bias testing, and human oversight.
- Embed privacy governance for AI (transparency, lawful basis, data minimization, human intervention for high-impact automated decisions).
- Strengthen security and fraud/scam resilience, aligning with banking IT risk expectations (e.g., monitoring/detection, suspicious activity blocking, controls for electronic products/services).
- Define monitoring KRIs/KPIs, incident response, and assurance/audit readiness—addressing vulnerabilities like third-party concentration and model governance challenges.
Target Audience
- Board/ExCom sponsors, Digital/Innovation heads, Product Owners
- Data/AI/ML teams, Architecture, IT Security
- Risk Management (Operational/Model/Enterprise Risk), Compliance/Legal, DPO/Privacy
- Fraud/Financial Crime teams (AI in detection/monitoring)
- Procurement/Vendor Management & Third-Party Risk
- Internal Audit / Model Validation / Assurance
Prerequisites
- No advanced AI background required
- Helpful: basic familiarity with your SDLC/change management, risk assessment process, and vendor onboarding
Course Outline
Day 1 — Governance foundations (bank-grade)
Module 1: AI in banking/fintech—use cases, risks, and “why governance”
- Typical use cases: credit decisioning, fraud/AML analytics, customer service/chatbots, collections, personalization
- Common failure modes: unmanaged models, poor explainability, privacy gaps, vendor black boxes, drift, and accountability gaps
Module 2: AI governance operating model & decision rights
- AI Steering/Model Risk Committee, product ownership, independent validation, audit
- AI inventory + model inventory + materiality tiers (what needs enhanced review)
Module 3: Framework mapping to bank controls
- NIST AI RMF: GOVERN–MAP–MEASURE–MANAGE (how it translates to bank policy and control evidence)
- ISO/IEC 42001 and ISO/IEC 23894: building an AI management system and risk process
- Optional: MAS FEAT principles for financial-sector responsible AI mindset (Fairness, Ethics, Accountability, Transparency)
Workshop A: AI use-case intake + risk tiering (bank scenarios)
- Teams classify 3 use cases (e.g., credit scoring, GenAI chatbot, fraud model) and decide required governance gates + evidence
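To make the tiering exercise concrete, the workshop logic could be sketched as below. The tier names, criteria, and scoring are illustrative assumptions for discussion only, not BSP requirements or a complete policy:

```python
from dataclasses import dataclass

# Illustrative criteria only -- real tiering must follow your institution's
# model risk policy and applicable regulatory expectations.
@dataclass
class AIUseCase:
    name: str
    customer_facing: bool        # directly affects customer outcomes
    automated_decision: bool     # no human in the loop before action
    uses_personal_data: bool     # triggers privacy (NPC) considerations
    third_party_model: bool      # vendor/GenAI supply-chain dependency

def risk_tier(uc: AIUseCase) -> str:
    """Assign a governance tier; higher tiers require enhanced review."""
    score = sum([uc.customer_facing, uc.automated_decision,
                 uc.uses_personal_data, uc.third_party_model])
    if uc.automated_decision and uc.customer_facing:
        return "Tier 1 - enhanced review"   # e.g., credit decisioning
    if score >= 2:
        return "Tier 2 - standard review"   # e.g., assisted chatbot
    return "Tier 3 - fast-track"            # e.g., internal tooling

chatbot = AIUseCase("GenAI chatbot", customer_facing=True,
                    automated_decision=False, uses_personal_data=True,
                    third_party_model=True)
print(risk_tier(chatbot))  # Tier 2 - standard review
```

In the workshop, teams debate exactly these boundary conditions: which attributes force enhanced review regardless of score, and what evidence each tier must produce.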
Module 4: Banking/fintech policy set and minimum control standards
- Required policies: acceptable GenAI use, data governance, model documentation, human oversight, vendor controls, incident reporting
- “Minimum evidence pack” per tier: model card, data sheet, validation report, monitoring plan, approvals
Day 2 — Lifecycle controls, resilience, privacy, and assurance
Module 5: Model Risk Management controls (bank-specific)
- Model lifecycle governance: development, validation, deployment, monitoring, change control, retirement
- BSP observations and expectations in practice: validation, backtesting/stress testing, bias testing, and stronger human oversight mechanisms
Module 6: Privacy governance for AI (Philippines context)
- NPC guidance when AI involves personal data: transparency, accountability, fairness, accuracy, data minimization, lawful basis
- High-impact decisions: meaningful human intervention + contestability mechanisms (what “good” looks like)
Module 7: Security, scam/fraud threats, and IT risk management alignment
- Threat model for AI in fintech: prompt injection, data leakage, account takeover amplification, synthetic fraud
- Aligning AI controls with BSP IT risk management expectations and AFASA-linked controls for electronic products/services and monitoring
Module 8: Operational resilience for AI-enabled services
- Align AI services to operational resilience governance and disruption handling (critical operations, dependencies, recovery)
- Practical playbooks: model rollback, feature flags, kill switch, vendor outage drills
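The kill-switch playbook above can be illustrated with a minimal sketch: a feature flag gates the model, and operations can flip it to a conservative fallback without a code deployment. All names here (the flag, functions, and decisions) are hypothetical, not a vendor API:

```python
# Minimal kill-switch sketch with a rule-based fallback path.
# In production the flag would live in a feature-flag service with
# audit logging, not a module-level dict.

FLAGS = {"credit_model_v3_enabled": True}

def fallback_decision(application: dict) -> str:
    """Conservative path used while the model is disabled."""
    return "refer_to_manual_review"

def model_decision(application: dict) -> str:
    """Placeholder for the real model call."""
    return "approve"

def decide(application: dict) -> str:
    # Kill switch: flipping the flag contains an incident immediately,
    # supporting rollback and vendor-outage drills.
    if not FLAGS["credit_model_v3_enabled"]:
        return fallback_decision(application)
    return model_decision(application)

print(decide({"id": 1}))                   # approve
FLAGS["credit_model_v3_enabled"] = False   # incident: disable the model
print(decide({"id": 1}))                   # refer_to_manual_review
```

The design point for the drill is that the fallback must itself be a governed, pre-approved control, since it changes customer outcomes.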
Workshop B: Incident + escalation simulation
- Scenario: AI model drift causes bad approvals / GenAI chatbot gives harmful guidance / vendor outage
- Decide: containment, customer impact actions, governance escalation, evidence capture
Module 9: Monitoring, KRIs/KPIs, third-party concentration, and audit readiness
- KRIs: drift, error rates, bias metrics, privacy/security incidents, override rates, complaint signals
- Managing third-party and concentration vulnerabilities highlighted by financial authorities (GenAI supply chain risk)
- Audit-ready documentation and testing cadence
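One drift KRI commonly wired into such dashboards is the Population Stability Index (PSI). The sketch below is a generic illustration; the thresholds are an industry rule of thumb, not a regulatory requirement, and the bin proportions are made-up data:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned proportions.
    Rule of thumb (convention, not regulation):
    < 0.10 stable, 0.10-0.25 watch, > 0.25 significant drift."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.10, 0.20, 0.40, 0.20, 0.10]  # score mix at validation time
current  = [0.05, 0.15, 0.35, 0.25, 0.20]  # score mix in production

value = psi(baseline, current)
status = ("breach" if value > 0.25
          else "watch" if value > 0.10 else "stable")
print(round(value, 3), status)
```

A "watch" or "breach" status would feed the escalation and evidence-capture steps rehearsed in Workshop B, alongside the other KRIs listed above.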
Wrap-up: Build your “AI Governance Starter Pack”
- Draft outputs: AI use-case intake form, risk tiering matrix, RACI, minimum evidence checklist, monitoring dashboard outline


