Status: Active
Version: 1.0
Date: 2025-12-25
Related ADRs: ADR-028
Related PRDs: PRD-010
This reference document captures the integration of multiple AI governance frameworks into the Integrated AI Governance & Project-Management System for GenAI (IAGPM-GenAI). It provides mappings, templates, and quick-lookup tables for practitioners implementing governed AI systems.
IAGPM-GenAI synthesizes the following frameworks:

| Framework | Core Mechanism | Unique Capabilities | Integration Role |
|---|---|---|---|
| CPMAI+E | Six iterative phases | AI-specific methodology with go/no-go gates | Backbone; phases structure the lifecycle |
| NIST AI RMF 1.0 | Govern, Map, Measure, Manage | Structured risk categories and metrics | Cross-cutting risk management functions |
| ISO/IEC 42001 | Plan-Do-Check-Act | Certifiable management system | Continuous improvement wrapper |
| EU AI Act | Risk-based regulation | High-risk classification, penalties | Regulatory compliance triggers |
| Singapore Model AI Gov | Nine governance dimensions | GenAI-specific (hallucination, alignment) | Generative AI risk dimensions |
| PMI AI Governance Plan | Templates and checklists | Roles, readiness, tool inventory | Operational templates |
| The Fifth Discipline | Learning organization | Systems thinking, mental models | Cultural change and learning |
Each CPMAI phase maps to the four NIST AI RMF functions as follows:

| CPMAI Phase | Govern | Map | Measure | Manage |
|---|---|---|---|---|
| Business Understanding | Establish policies, roles, risk appetite | Identify stakeholders and context | Define risk tolerance and success metrics | N/A |
| Data Understanding | Ensure data-governance policies apply | Map data sources, provenance, stakeholders | Assess data quality risks and biases | Document remediation plans |
| Data Preparation | Approve privacy techniques and labeling policies | N/A | Evaluate data risks after cleaning | Approve augmentation; document data-cards |
| Model Development | Approve model experimentation; ensure accountability | Validate context alignment | Measure model risks and performance | Oversee model management and documentation |
| Model Evaluation | Conduct Go/No-Go review | Check if context assumptions still hold | Compute final risk scores | Plan risk responses and incident protocols |
| Operationalization | Maintain policies for deployment, monitoring, decommissioning | Update context maps as environments change | Monitor risk metrics and model performance | Manage incidents and retraining cycles |
The NIST AI RMF Govern function comprises six categories:

| Category | Description |
|---|---|
| Govern 1 | Policies & processes — Maintain risk management policies; inventory AI systems; plan decommissioning safely |
| Govern 2 | Accountability structures — Document roles & responsibilities; provide training; ensure leadership accountability |
| Govern 3 | Workforce diversity, equity & inclusion — Engage diverse teams for decision-making; define human-AI roles |
| Govern 4 | Culture & values — Foster safety-first mindset and document risks and impacts |
| Govern 5 | Engagement with AI actors — Collect external stakeholder feedback; integrate into design |
| Govern 6 | Third-party & supply chain — Address risks from third-party data/models; plan contingency for failures |
Evaluation gate metrics, thresholds, and blocking behavior (an enforcement sketch follows the table):

| Category | Metric | Threshold | Evidence | Block Behavior |
|---|---|---|---|---|
| Quality | pass@5 | ≥ 0.82 | Evaluation suite log | Block if below |
| Fairness | subgroup_delta | ≤ 0.05 | Bias report | Block if above |
| Safety | harmful_rate | ≤ 0.005 | Red-team report | Block if above |
| Privacy | re-ID risk | ≤ 0.001 | DPIA summary | Block if above |
| Drift | PSI | ≤ 0.2 | Monitoring logs | Trigger retraining alert |
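For illustration, the sketch below shows how these thresholds could be enforced as an automated release gate. The metric keys (`pass_at_5`, `subgroup_delta`, `harmful_rate`, `reid_risk`, `psi`) and the `evaluate_gate` function are assumptions for this example, not part of any of the source frameworks.

```python
# Hedged sketch: enforce the evaluation-gate thresholds from the table above.
# Metric key names and the input format are assumptions for this example.

GATE_RULES = {
    "pass_at_5":      {"threshold": 0.82,  "direction": "min"},  # Quality: block if below
    "subgroup_delta": {"threshold": 0.05,  "direction": "max"},  # Fairness: block if above
    "harmful_rate":   {"threshold": 0.005, "direction": "max"},  # Safety: block if above
    "reid_risk":      {"threshold": 0.001, "direction": "max"},  # Privacy: block if above
}

DRIFT_PSI_ALERT = 0.2  # Drift does not block release; it raises a retraining alert


def evaluate_gate(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (release_allowed, findings) for one evaluation run."""
    findings = []
    for name, rule in GATE_RULES.items():
        value = metrics.get(name)
        if value is None:
            findings.append(f"BLOCK: missing evidence for {name}")
        elif rule["direction"] == "min" and value < rule["threshold"]:
            findings.append(f"BLOCK: {name}={value} is below {rule['threshold']}")
        elif rule["direction"] == "max" and value > rule["threshold"]:
            findings.append(f"BLOCK: {name}={value} is above {rule['threshold']}")
    if metrics.get("psi", 0.0) > DRIFT_PSI_ALERT:
        findings.append("ALERT: PSI above 0.2; raise retraining alert (non-blocking)")
    return (not any(f.startswith("BLOCK") for f in findings), findings)


# Example run with passing evaluation metrics but drifting inputs.
print(evaluate_gate({"pass_at_5": 0.85, "subgroup_delta": 0.03,
                     "harmful_rate": 0.004, "reid_risk": 0.0005, "psi": 0.25}))
```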
Risk Score = Severity (0–10) × Likelihood (0–10)
Scores map to risk levels and required actions:

| Risk Level | Score Range | Action |
|---|---|---|
| Low | 0–20 | Acceptable; monitor routinely |
| Moderate | 21–50 | Implement mitigation; monitor |
| High | 51–79 | Immediate mitigation; restrict use |
| Critical | 80–100 | Halt deployment; perform redesign |
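A minimal sketch of the scoring rule and banding above; the function names are illustrative.

```python
# Minimal sketch: Risk Score = Severity (0-10) x Likelihood (0-10),
# banded into the four action levels from the table above.

def risk_score(severity: int, likelihood: int) -> int:
    """Both inputs are rated on a 0-10 scale."""
    if not (0 <= severity <= 10 and 0 <= likelihood <= 10):
        raise ValueError("severity and likelihood must be between 0 and 10")
    return severity * likelihood


def risk_level(score: int) -> str:
    if score <= 20:
        return "Low"        # Acceptable; monitor routinely
    if score <= 50:
        return "Moderate"   # Implement mitigation; monitor
    if score <= 79:
        return "High"       # Immediate mitigation; restrict use
    return "Critical"       # Halt deployment; perform redesign


# Example: severity 8 x likelihood 7 = 56 -> High
print(risk_level(risk_score(8, 7)))
```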
| Template | Purpose | Source |
|---|---|---|
| Readiness Assessment Table | Measures organizational preparedness (0–3 scale) | PMI AI Governance Plan |
| Maturity Assessment | Defines AI integration maturity levels | PMI |
| Use Cases & Restrictions Table | Lists tasks AI can/cannot perform | PMI |
| Tool Inventory | Catalogs approved AI tools with training resources | PMI |
| Intake Process Description | Specifies submission, evaluation, pilot testing process | PMI |
| Monitoring Plan | Outlines performance and usage monitoring requirements | PMI |
| Risk Management Table | Records risks, severity/likelihood, mitigation | PMI |
| Data-card | Documents dataset details (source, fields, quality, privacy) | CPMAI + ISO 42001 |
| Model Card | Summarizes model purpose, metrics, limitations, bias assessments | NIST + AI community |
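As one possible representation, the sketch below models the Data-card and Model Card templates as structured records; the field names are illustrative assumptions, not a mandated schema from any of the source frameworks.

```python
# Hedged sketch of the Data-card and Model Card templates as structured records.
# Field names are assumptions for this example, not a mandated schema.
from dataclasses import dataclass, field


@dataclass
class DataCard:
    name: str
    source: str                 # provenance of the dataset
    fields: list[str]           # column/field names
    quality_notes: str          # known quality issues and how they were handled
    privacy_measures: str       # e.g. anonymization applied, access controls


@dataclass
class ModelCard:
    name: str
    purpose: str
    metrics: dict[str, float]   # e.g. {"pass_at_5": 0.85}
    limitations: list[str] = field(default_factory=list)
    bias_assessments: list[str] = field(default_factory=list)
```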
Readiness Assessment questionnaire (PMI AI Governance Plan):

| # | Question | Rating (0–3) | Notes/Actions |
|---|---|---|---|
| 1 | How advanced are current project management processes? | | |
| 2 | Is there significant scope for AI to enhance these processes? | | |
| 3 | How would you rate the quality of data from past projects? | | |
| 4 | How open is the culture to embracing AI and its changes? | | |
| 5 | Do you possess the necessary in-house AI skills? | | |
| 6 | Are resources sufficient to upskill teams if needed? | | |
| 7 | Have you identified potential barriers to AI adoption? | | |
| 8 | Do you have strategies to address identified barriers? | | |
| 9 | Do you clearly understand where and how AI can deliver value? | | |
| 10 | Do you have the necessary data infrastructure? | | |
| 11 | Do you have a sufficient budget for AI integration? | | |
| 12 | Have you assessed potential risks and planned mitigation? | | |
Scoring: 0 = Not ready at all, 3 = Fully prepared
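A small aggregation sketch for the questionnaire above: twelve items, each rated 0–3, for a maximum of 36. How the total maps to readiness bands is left to the organization; none is prescribed here, and the function name is illustrative.

```python
# Hedged sketch: total a completed readiness assessment (12 items, each rated 0-3).
# No banding of the total into readiness levels is prescribed by the source templates.

def readiness_total(ratings: list[int]) -> tuple[int, int]:
    """Return (total, maximum_possible) for one completed assessment."""
    if len(ratings) != 12 or any(r not in (0, 1, 2, 3) for r in ratings):
        raise ValueError("expected 12 ratings, each between 0 and 3")
    return sum(ratings), 3 * len(ratings)


print(readiness_total([2, 1, 3, 2, 1, 2, 2, 1, 3, 2, 1, 2]))  # (22, 36)
```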
Operational targets:

| Dimension | Target | Status |
|---|---|---|
| Latency | p95 < 2s | ✅ |
| Availability | 99.5% | ✅ |
| Quality | ≥ 0.82 pass@5 | ✅ |
| Harmful Output Rate | ≤ 0.5% | ✅ |
| Privacy Incidents | 0 | ✅ |
Go/No-Go release gates:

| Gate | Status | Evidence |
|---|---|---|
| Documentation Complete (Model & System Cards) | ☐ | |
| Evaluation Pass (Quality/Fairness/Safety thresholds) | ☐ | |
| Compliance Check (DPIA/TRA approved) | ☐ | |
| Rollback Plan (Last-known-good snapshot verified) | ☐ | |
| Stakeholder Sign-off (Product Owner + Governance Lead) | ☐ | |
Post-deployment monitoring triggers and responses (a drift-check sketch follows the table):

| Trigger | Threshold | Action |
|---|---|---|
| Data drift (PSI) | > 0.2 | Retrain model |
| Quality drop | > 10% quarter-over-quarter | Review dataset + features |
| Incident count | > 3 per month | Launch RCA review |
| Policy update | New law/standard | Re-assess controls |
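The data-drift trigger uses the Population Stability Index. As an illustration, the sketch below computes PSI between a baseline and a production feature distribution and checks it against the 0.2 threshold; the binning strategy and epsilon smoothing are implementation choices, not requirements of the trigger.

```python
# Hedged sketch: Population Stability Index (PSI) check against the > 0.2
# retraining trigger. Bin count and epsilon smoothing are illustrative choices.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10, eps: float = 1e-6) -> float:
    """PSI = sum over bins of (a - e) * ln(a / e), with bin proportions e and a."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_prop = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_prop = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_prop - e_prop) * np.log(a_prop / e_prop)))


rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature distribution
current = rng.normal(0.3, 1.1, 10_000)   # production feature distribution

if psi(baseline, current) > 0.2:
    print("Data drift trigger met: schedule retraining")
```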
Incident response workflow (sketched in code after the list):

1. Detect → Contain → Communicate
2. Classify: Data breach | Safety failure | Infra error
3. Open ticket + notify Risk Manager
4. Execute rollback if required
5. Complete RCA within 24h + update playbooks
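The sketch below encodes this workflow as a single auditable record per incident; the classification labels follow the list, while the function name, step names, and ticketing/rollback hooks are placeholders for whatever tooling the organization uses.

```python
# Hedged sketch of the incident workflow above as one auditable record per incident.
# Step names and hooks are placeholders; classification labels follow the list.
from datetime import datetime, timedelta, timezone

INCIDENT_CLASSES = {"data_breach", "safety_failure", "infra_error"}


def handle_incident(classification: str, rollback_required: bool) -> dict:
    if classification not in INCIDENT_CLASSES:
        raise ValueError(f"unknown classification: {classification}")
    detected_at = datetime.now(timezone.utc)
    steps = ["detect", "contain", "communicate",
             "open_ticket", "notify_risk_manager"]
    if rollback_required:
        steps.append("rollback_to_last_known_good")
    steps.append("root_cause_analysis_and_playbook_update")
    return {
        "classification": classification,
        "detected_at": detected_at.isoformat(),
        "steps": steps,
        "rca_due": (detected_at + timedelta(hours=24)).isoformat(),  # RCA within 24h
    }


print(handle_incident("safety_failure", rollback_required=True))
```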
Industry-specific regulations and considerations:

| Industry | Key Regulations | Primary Considerations |
|---|---|---|
| Finance | EU AI Act, GLBA, Basel, SR 11-7 | Fair lending, anti-bias audits, explainability, model risk management |
| Life Sciences / Healthcare | EU AI Act, HIPAA/HITECH, FDA SaMD | Patient safety, data privacy, clinical validation, human oversight |
| Federal Contracting | FAR/DFARS, NIST SP 800-53, agency AI policies | Supply-chain security, auditability, fairness, procurement compliance |
| Technology | EU AI Act, IP law, open-source licenses | Model transparency, responsible use of OSS, rapid iteration with governance |
Governance API endpoints (an example call follows the table):

| Endpoint | Purpose | Inputs |
|---|---|---|
| /projects/initiate | Creates project with governance metadata | Objectives, risk appetite, ethical principles, stakeholder list |
| /data/inventory | Registers data sources with metadata | Provenance, quality, bias scores |
| /models/register | Logs models with algorithm type and model card | Training data links, risk score |
| /risks/report | Submits risk entry | Description, severity, likelihood, mitigation plan |
| /incidents/report | Reports incidents | Classification, impact, response |
| /monitoring/metrics | Ingests performance and risk metrics | Metric values, timestamps |
| /governance/review | Records review outcomes | Policy updates, decisions |
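As a usage illustration, the call below submits a risk entry to /risks/report. The base URL, bearer-token authentication, and JSON field names are assumptions for this sketch; only the endpoint path and the input categories come from the table above.

```python
# Hedged sketch: submit a risk entry to the /risks/report endpoint.
# Base URL, auth scheme, and field names are assumptions; only the path and
# input categories come from the endpoint table above.
import requests

BASE_URL = "https://iagpm.example.com/api/v1"  # placeholder host

payload = {
    "description": "Possible re-identification risk in support-chat training data",
    "severity": 8,          # rated 0-10, per the risk scoring rule
    "likelihood": 7,        # rated 0-10
    "mitigation_plan": "Apply k-anonymization and re-run the DPIA before release",
}

response = requests.post(
    f"{BASE_URL}/risks/report",
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()
print(response.json())
```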