The AIGP Is the First Real AI Governance Credential — Here Is How to Pass It in 2026
The IAPP AIGP (Artificial Intelligence Governance Professional) is the most recognized AI governance certification on the planet. Launched at the IAPP Global Privacy Summit on 4 April 2024, it has become the credential that privacy, compliance, risk, and AI leaders put on their LinkedIn profiles in 2026 — for a simple reason. The EU AI Act high-risk provisions hit enforcement in August 2026, NIST AI RMF adoption is accelerating, ISO/IEC 42001 AI management systems are expanding, and every large organization needs someone who can translate "we are going to use AI responsibly" into actual controls, policies, documentation, and monitoring.
This guide covers the AIGP Body of Knowledge v2.1 (effective 2 February 2026), the new 4-domain structure, verified 2026 pricing, the question breakdown, an 8-week study plan, and a management-mindset coaching approach that separates first-time passers from retakers. Every number in this guide was verified against iapp.org/certify/aigp, the IAPP Store, and the official AIGP BoK v2.1 PDF.
Free AIGP practice questions — practice questions with detailed explanations
AIGP Exam At-a-Glance (2026)
| Detail | Information |
|---|---|
| Certification Body | IAPP (International Association of Privacy Professionals) |
| Current Blueprint | AIGP Body of Knowledge v2.1 — effective 2 February 2026 |
| Exam Delivery | Pearson VUE — in-person test center OR OnVUE online remote proctoring |
| Questions | 100 multiple-choice (85 scored + 15 unscored pilot) |
| Duration | 2 hours 45 minutes + optional 15-minute break |
| Scoring | Scaled 100-500; passing score = 300 |
| Raw-to-pass estimate | ~65-80% of scored questions correct (per IAPP Candidate Handbook) |
| Scenario content | ~30% of questions are case-study/scenario format |
| Exam fee | $649 IAPP member / $799 non-member |
| Retake fee | $475 member / $625 non-member |
| IAPP membership | $295/year (includes CMF) |
| Certification Maintenance Fee (CMF) | $250 per 2-year term for non-members; included in IAPP membership |
| Prerequisites | None |
| Certification term | 2 years |
| Recertification | 20 CPE credits per 2-year cycle + CMF |
| Scheduling window | Within 1 year of exam purchase |
| Retake waiting period | 7 days |
| Languages | English |
All figures verified against the official IAPP AIGP page, the IAPP Store, the IAPP Certification FAQs, and AIGP BoK v2.1 (approved 9 September 2025, effective 2 February 2026).
FREE AIGP Prep: Start Here
Before spending $799 as a non-member on the exam fee — plus another $200-$1,200 on training — prove to yourself you can pass. The AIGP's scenario-based questions are famously tricky: three of four answer options often look reasonable, and only one aligns with the responsible AI governance framework IAPP expects. You cannot memorize your way through that. You have to practice.
Our free AIGP practice question bank is aligned to BoK v2.1, covers all 4 domains, and includes full rationales for every correct answer plus an explanation of why each distractor fails the IAPP frame.
Start AIGP practice questions now — practice questions with detailed explanations
What AIGP Actually Is — And Why It Is Not a Technical AI Certification
The AIGP validates that you can govern AI — not build it. It is the AI equivalent of CISM or CIPM: a management-and-oversight credential for people who run programs, not people who ship models. If you are looking for a credential that proves you can train a transformer, tune a RAG pipeline, or engineer a LoRA adapter, AIGP is not it. Look at Google Cloud's ML Engineer, AWS's Machine Learning — Specialty, or vendor-specific credentials.
AIGP is what you earn when your job is to:
- Build and run an AI governance program
- Interpret the EU AI Act, NIST AI RMF, and ISO/IEC 42001 for your organization
- Perform AI impact assessments and risk assessments
- Write model cards, data sheets, and conformity documentation
- Establish human-oversight and incident-response procedures for deployed AI
- Manage third-party AI vendors and supply-chain risk
- Report AI risk and compliance posture to executives, boards, and regulators
Every question on the AIGP is, at its core, a question about what a responsible AI governance professional would do in a specific scenario. The "most technical answer" is usually wrong. The "most business-aligned, risk-based, role-aware, lifecycle-appropriate answer" is almost always right.
The 2026 AI Governance Market
Three forces have made 2026 the biggest year yet for earning the AIGP:
1. EU AI Act enforcement is here. High-risk AI provisions hit their enforcement deadline in August 2026. Every organization offering or using AI in the EU needs someone who understands provider vs deployer obligations, risk classification, conformity assessment, post-market monitoring, and GPAI rules.
2. NIST AI RMF is operational. NIST AI RMF 1.0 (released January 2023) and the companion Playbook have become the de facto US framework for AI risk management. Federal contractors, regulated industries, and enterprise procurement increasingly require evidence of NIST AI RMF alignment.
3. ISO/IEC 42001 certification is accelerating. The world's first AI management system standard (published December 2023) now has dozens of certified organizations and is rapidly becoming the equivalent of ISO 27001 for AI programs. BoK v2.1 expanded ISO 42001 coverage to match.
On top of that, the US regulatory patchwork has thickened: the Colorado AI Act (SB 24-205), NYC Local Law 144 on AEDTs, Illinois HB 3773, California SB 942 on AI transparency, and dozens of other state-level measures. Organizations need AI governance practitioners who can operate across all of these simultaneously.
Who Should Take AIGP
AIGP is the right credential for people who operate — or will soon operate — at the policy, risk, and program layer of AI. Sweet spot: 3-10 years of experience in privacy, compliance, risk, legal, security, audit, or AI-adjacent product roles.
| Role | Why AIGP Fits |
|---|---|
| AI Governance Manager / Lead | Canonical AIGP role. Literally in the name. |
| Privacy Professional (CIPP/CIPM/CIPT) adding AI | Natural extension — AIGP + CIPP qualifies for IAPP FIP. |
| Chief Privacy Officer / DPO / VP Privacy | Now expected to own AI governance alongside privacy. |
| Compliance / Regulatory Manager | EU AI Act, US state AI laws, GDPR-AI intersection. |
| GRC / Risk Manager | AI risk register, model inventories, third-party AI risk. |
| Information Security / CISO track | AI security is now part of the security portfolio. |
| Internal Audit / External Audit / Big 4 Advisory | AI assurance is the growth service line of 2026. |
| Legal Counsel (Privacy, Technology, Regulatory) | Advising on AI use, contracts, and compliance. |
| Product Manager on AI products | Understanding governance obligations from the inside. |
| Responsible AI Lead / AI Ethics Officer | The core audience IAPP designed AIGP for. |
| Government / Public Sector AI Officer | AI policy, procurement, and oversight roles. |
AIGP is not the right credential for: ML engineers focused on model development (pursue vendor/technical certs), pure academic researchers, or anyone who cannot yet articulate the difference between an AI provider and a deployer.
Eligibility Requirements: There Are None
Unlike CISM (5 years experience) or CISSP (5 years across 2 domains), AIGP has zero formal prerequisites. Anyone — literally anyone — can register, pay the fee, and sit the exam.
That said, IAPP recommends a minimum of 30 hours of study per its Certification Candidate Handbook. Realistically, most candidates who pass invest 60-100 hours. Candidates with no prior background in privacy, compliance, or AI-adjacent work should plan on 100-140 hours over 12-16 weeks.
The 4 AIGP Domains (BoK v2.1, Effective 2 February 2026)
IAPP restructured AIGP from 7 domains down to 4 when it released BoK v2.0.1 in February 2025. The current v2.1 (approved 9 September 2025, effective 2 February 2026) keeps the same 4-domain structure and refines specific performance indicators — it is a recalibration, not a reinvention.
| # | Domain | Scored Question Range | Percent of Scored (of 85) |
|---|---|---|---|
| I | Understanding the Foundations of AI Governance | 16-20 | ~19-24% |
| II | Understanding how Laws, Standards and Frameworks apply to AI | 19-23 | ~22-27% |
| III | Understanding how to Govern AI Development | 21-25 | ~25-29% |
| IV | Understanding how to Govern AI Deployment and Use | 21-25 | ~25-29% |
| | Total (scored only) | 85 | 100% |
Domains III and IV together = 42-50 of 85 scored questions (roughly 50-59% of the scored exam). If you prioritize study time incorrectly, this is where you lose points.
What Changed in BoK v2.1
- Performance Indicator I.C.2 — expanded to include evaluating and updating data governance and intellectual property policies for AI
- Performance Indicator I.C.3 — expanded to include updated third-party risk documents, assessments, and contracts for AI
- III.A.3 and IV.B.2 — removed as redundant with Domain II
- ISO/IEC 42001 — coverage expanded in Domain II
- Agentic AI architectures — new emphasis in Domain IV governance
If your study materials predate February 2026, you are about 90% aligned — but verify the four changes above.
Domain I — Foundations of AI Governance (16-20 Questions)
Domain I is the conceptual bedrock. You cannot answer Domain II-IV questions correctly without fluency in the vocabulary and principles here.
Core Topics
| Topic | What You Must Know |
|---|---|
| AI Types and Definitions | OECD AI system definition; narrow AI vs AGI; supervised/unsupervised/reinforcement learning; generative AI; foundation models; agentic AI |
| AI Lifecycle | Plan → design → data → train → evaluate → deploy → monitor → retire; who owns what at each phase |
| Responsible AI Principles | Fairness, transparency, explainability, accountability, robustness, safety, privacy, human oversight, contestability |
| Ethical vs Responsible vs Trustworthy AI | The distinctions IAPP tests with scenario questions |
| AI Governance Program Structure | Charter, steering committee, roles (AI governance lead, data owner, model owner, deployer, procurement) |
| AI Value & Risk | Benefits (efficiency, personalization, accessibility) vs harms (bias, discrimination, misinformation, privacy loss, safety) |
| Data Governance for AI (updated v2.1) | Data lineage, quality, representativeness, consent, purpose, retention, IP and copyright of training data |
| Third-Party and Vendor AI Risk (updated v2.1) | Vendor due diligence, contracts, SOC/ISO evidence, continuous monitoring |
| AI Policies | AI acceptable-use policy, model development standards, data governance policy, third-party AI policy, incident policy |
| Accountability & RACI | Board, executive, AI governance lead, model owner, deployer, data protection officer (DPO), chief AI officer (CAIO) |
High-Yield: Ethical vs Responsible vs Trustworthy AI
This distinction is tested with scenario questions and catches candidates who think the three terms are synonyms. They are not.
- Ethical AI — aligned with moral principles and societal values; broader, philosophical framing
- Responsible AI — operational implementation of ethical principles (policies, controls, documentation, oversight)
- Trustworthy AI — the outcome: AI systems that are worthy of human trust because they are reliable, safe, transparent, accountable, fair, and privacy-respecting
IAPP uses these terms with precision. When a question asks "which of the following best describes a trustworthy AI outcome," the correct answer is the operational/measurable outcome — not the principle, not the policy, but the demonstrable result.
Governance Roles: Who Decides What
| Decision | Decider |
|---|---|
| Approve AI governance strategy | Board / Executive Committee |
| Set AI risk appetite | Board / Executive Committee |
| Own an individual AI model's risk | Business process owner / Model owner |
| Design the AI governance program | AI governance lead (working with CPO/CISO/CRO) |
| Approve a model for production | AI governance committee per policy (not the technical team alone) |
| Accept third-party AI vendor residual risk | Business process owner |
| Declare an AI incident | Incident commander per plan |
| Notify regulators of an AI incident | Legal + Executive per plan |
Domain II — How Laws, Standards, and Frameworks Apply to AI (19-23 Questions)
Domain II is the regulatory literacy domain. You must be fluent in: existing data-privacy laws applied to AI, new AI-specific laws (EU AI Act above all), and the major frameworks (NIST AI RMF, ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 22989, OECD).
Core Topics
| Topic | What You Must Know |
|---|---|
| Privacy Laws Applied to AI | GDPR (notice, consent, purpose limitation, DPIA, Article 22 ADM, data minimization), CCPA/CPRA (automated decision-making regs), China PIPL, India DPDP, Brazil LGPD |
| Non-Discrimination Laws | US Title VII (employment), ECOA (credit), FHA (housing), EU directives; disparate impact vs disparate treatment |
| Consumer Protection | FTC Act Section 5, UDAP laws, unfair/deceptive AI practices |
| Product Liability | Design defects, manufacturing defects, failure to warn applied to AI; EU Product Liability Directive update |
| Intellectual Property | Training data copyright, generative AI output, AI authorship, trade secrets |
| EU AI Act | Risk classification (prohibited, high-risk, limited-risk, minimal-risk); provider vs deployer vs importer vs distributor obligations; GPAI and systemic risk rules; enforcement (AI Office, national authorities); penalties up to 7% global turnover |
| NIST AI RMF 1.0 | Govern, Map, Measure, Manage (the 4 core functions); NIST AI RMF Playbook; NIST ARIA program; crosswalks |
| ISO/IEC 42001:2023 | AI management system standard; Plan-Do-Check-Act; AI policy, objectives, risk treatment, Annex A controls; certification process |
| ISO/IEC 23894:2023 | AI risk management guidance |
| ISO/IEC 22989:2022 | AI concepts and terminology |
| OECD AI Principles (2019, updated 2024) | 5 values-based principles and 5 recommendations for governments |
| Council of Europe AI Convention | First binding international AI treaty (2024) |
| US State AI Laws | Colorado AI Act (SB 24-205), NYC Local Law 144 (AEDT), Illinois HB 3773, California SB 942 |
EU AI Act: The Most-Tested Law
The EU AI Act is the centerpiece of Domain II. IAPP consistently tests:
Risk Classification (4 tiers):
| Tier | Examples | Obligations |
|---|---|---|
| Prohibited | Social scoring by governments, exploitative manipulation, untargeted facial image scraping, emotion recognition in workplace/school, real-time biometric ID in public (with narrow LE exceptions) | Banned outright |
| High-Risk | AI used in employment, education, credit scoring, critical infrastructure, law enforcement, migration, justice administration; many product-safety AI systems | Risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity, conformity assessment, registration, post-market monitoring, serious incident reporting |
| Limited-Risk (Transparency) | Chatbots, deepfakes, emotion recognition, biometric categorization | Disclosure obligations |
| Minimal-Risk | AI-enabled video games, spam filters, most consumer AI | Voluntary codes of conduct |
Role Obligations (Provider vs Deployer):
- Providers develop or substantially modify AI systems and place them on the market. They bear the bulk of high-risk obligations.
- Deployers use AI systems under their own authority (not for personal, non-professional use). They must follow provider instructions, assign human oversight, monitor operation, and — for certain high-risk systems — conduct a fundamental rights impact assessment (FRIA).
- Importers and distributors have verification and due diligence obligations.
- GPAI providers (general-purpose AI model providers) have separate transparency and documentation obligations; providers of GPAI with systemic risk have additional model-evaluation and cyber obligations.
Memorize the penalty structure: up to €35M or 7% of global turnover for prohibited-AI violations; up to €15M or 3% for high-risk and most other violations; up to €7.5M or 1% for supplying incorrect information.
NIST AI RMF 1.0: The Four Functions
| Function | What It Does |
|---|---|
| Govern | Cultivate culture of AI risk management; policies, roles, accountability |
| Map | Establish context, identify systems and use cases, characterize risk |
| Measure | Analyze, assess, benchmark, and monitor AI risk |
| Manage | Prioritize and act on risks based on impact; document and track |
NIST AI RMF is the US answer to the EU AI Act. Expect scenario questions that require you to map NIST functions to EU AI Act obligations.
ISO/IEC 42001:2023 (Emphasis Expanded in v2.1)
ISO/IEC 42001 is the AI management system standard — the "ISO 27001 of AI." Memorize:
- 10 clauses in total, of which clauses 4-10 are auditable: Context of the Organization (4), Leadership (5), Planning (6), Support (7), Operation (8), Performance Evaluation (9), Improvement (10)
- AI policy, AI objectives, AI risk assessment and treatment, statement of applicability
- Annex A controls covering AI policy, organizational roles, AI impact assessment, AI lifecycle, data for AI systems, information for interested parties, use of AI systems, third-party relationships
Requirement-to-Control Mapping (Build This)
The single highest-leverage Domain II study activity is building a requirement-to-control mapping table that aligns EU AI Act obligations with NIST AI RMF actions and ISO/IEC 42001 controls. When the exam presents a scenario, you can mentally run through the three frameworks at once and pick the best answer.
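One way to make that mapping table concrete while studying is to keep it as a small data structure you can query and extend. The sketch below is illustrative study scaffolding, not official IAPP material; the framework names are real, but the specific alignments shown are rough study approximations, not authoritative crosswalks.

```python
# Hedged sketch: a requirement-to-control crosswalk as a Python dict.
# Alignments are study approximations only, not an authoritative mapping.

CROSSWALK = {
    "risk_management_system": {
        "eu_ai_act": "Art. 9 - Risk management system (high-risk AI)",
        "nist_ai_rmf": "MANAGE - prioritize and act on mapped, measured risks",
        "iso_42001": "Clause 6 / Annex A - AI risk assessment and treatment",
    },
    "human_oversight": {
        "eu_ai_act": "Art. 14 - Human oversight",
        "nist_ai_rmf": "GOVERN - roles, accountability, oversight culture",
        "iso_42001": "Annex A - controls on the use of AI systems",
    },
    "post_market_monitoring": {
        "eu_ai_act": "Art. 72 - Post-market monitoring plan",
        "nist_ai_rmf": "MEASURE - track risk metrics across the lifecycle",
        "iso_42001": "Clause 9 - Performance evaluation",
    },
}

def lookup(requirement: str) -> dict:
    """Return the cross-framework view for one governance requirement."""
    return CROSSWALK[requirement]

# When a scenario raises one obligation, scan all three frameworks at once:
for framework, citation in lookup("human_oversight").items():
    print(f"{framework}: {citation}")
```

Adding a row per BoK performance indicator as you study turns the table into a reusable mental checklist for scenario questions.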
Domain III — Governing AI Development (21-25 Questions)
Domain III moves from theory to the practical work of governing AI systems as they are designed, built, trained, tested, and prepared for release.
Core Topics
| Topic | What You Must Know |
|---|---|
| AI Development Lifecycle | Problem framing → data acquisition → preprocessing → model selection → training → evaluation → validation → release readiness |
| Use-Case Governance | Intake, prioritization, go/no-go review boards, red-team and ethics review |
| AI Impact Assessments (AIIA) | Similar to DPIA; identify stakeholders, purposes, harms, mitigations; map to EU AI Act FRIA |
| Risk Assessment During Development | Bias risk, safety risk, security risk, IP risk, privacy risk |
| Data Governance for Training | Provenance, quality, representativeness, labeling, consent, licensing, synthetic data |
| Bias Detection and Mitigation | Pre-processing, in-processing, post-processing techniques; fairness metrics (demographic parity, equalized odds, equal opportunity) |
| Model Evaluation & Testing | Accuracy, precision, recall, F1, AUC; robustness testing, adversarial testing; red-teaming |
| Explainability & Interpretability | LIME, SHAP, counterfactuals; global vs local explanation; model cards |
| Security During Development | Supply chain (SBOM for ML), model theft, data poisoning, backdoors |
| Documentation Artifacts | Model cards, data sheets (datasheets for datasets), system cards, technical documentation (EU AI Act Annex IV), AIIA reports |
| Release Readiness & Conformity | Conformity assessment (EU AI Act), internal sign-off, production readiness checklists |
| Change Management | Re-validating models after retraining or fine-tuning |
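The fairness metrics named in the table above are simpler than their names suggest, and the exam rewards knowing what each one actually compares. A minimal sketch on toy data (real programs use audited tooling, but the arithmetic is this small):

```python
# Minimal sketch of two fairness metrics from the BoK: demographic parity
# difference and equal opportunity difference. Toy data, pure Python.

def demographic_parity_diff(preds_a, preds_b):
    """Difference in positive-prediction rates between groups A and B."""
    rate = lambda p: sum(p) / len(p)
    return rate(preds_a) - rate(preds_b)

def equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b):
    """Difference in true-positive rates (recall) between groups A and B."""
    def tpr(preds, labels):
        positives = [p for p, y in zip(preds, labels) if y == 1]
        return sum(positives) / len(positives)
    return tpr(preds_a, labels_a) - tpr(preds_b, labels_b)

# Hypothetical hiring-screen example: 1 = "advance candidate"
group_a_preds, group_a_labels = [1, 1, 0, 1], [1, 1, 0, 0]
group_b_preds, group_b_labels = [1, 0, 0, 0], [1, 1, 0, 0]

print(demographic_parity_diff(group_a_preds, group_b_preds))   # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(group_a_preds, group_a_labels,
                             group_b_preds, group_b_labels))   # 1.0 - 0.5 = 0.5
```

The takeaway for exam scenarios: demographic parity compares outcomes regardless of ground truth, while equalized-odds-family metrics compare error rates conditioned on the true label. A question asking which metric fits a given regulatory concern is really asking which of those two comparisons matters.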
The Go/No-Go Decision
AIGP repeatedly tests the pre-deployment go/no-go decision. The correct answer frame is always: Has every required condition been satisfied per the governance policy? Are the right artifacts complete (AIIA, model card, technical documentation, test evidence, conformity assessment where required)? Has the model owner signed off? Has the AI governance committee approved?
If any of those is missing, the correct action is almost always: halt deployment, document the gap, escalate per policy, remediate, re-review. Not "deploy and fix post-market," and not "let the engineering lead overrule the process."
Model Cards and Data Sheets (Heavily Tested)
- Model cards (Mitchell et al., Google) document intended use, evaluation data, metrics, ethical considerations, caveats, and limitations
- Datasheets for Datasets (Gebru et al.) document motivation, composition, collection process, preprocessing, uses, distribution, and maintenance
- System cards (Meta, OpenAI) extend model cards to the deployed system context
- EU AI Act technical documentation (Annex IV) is the formal regulatory equivalent for high-risk AI providers
Memorize: which artifact is expected when. Model cards = model-level. Data sheets = dataset-level. Annex IV = regulatory compliance for high-risk providers. AIIAs = impact-level before or during development.
Domain IV — Governing AI Deployment and Use (21-25 Questions)
Domain IV is the post-deployment operational domain, and it is where under-prepared candidates most commonly lose points. Many study Domains I-III thoroughly and treat Domain IV as an afterthought, a mistake that can cost a quarter or more of the scored exam.
Core Topics
| Topic | What You Must Know |
|---|---|
| Deployment Decision Criteria | Readiness, fit-for-purpose, context-of-use, user training, oversight design |
| Deployment Environment Governance | Production environment controls, logging, versioning, rollback |
| Human Oversight Models | Human-in-the-loop (HITL), human-on-the-loop (HOTL), human-out-of-the-loop |
| Continuous Monitoring | Performance drift, data drift, concept drift, fairness metrics over time |
| Drift Detection & Response | Statistical tests, monitoring dashboards, alert thresholds, re-training triggers |
| Logging | EU AI Act Article 12 logging requirements for high-risk AI |
| Incident Response for AI | Detection, analysis, containment, notification, post-incident review |
| Serious Incident Reporting (EU AI Act) | 15-day reporting for serious incidents to national authorities; shorter windows for widespread/systemic events |
| Post-Market Monitoring (EU AI Act) | Plan, data collection, analysis, corrective action |
| Fundamental Rights Impact Assessment (FRIA) | Required for deployers of certain high-risk AI in the EU |
| Third-Party and Procurement Governance (updated v2.1) | Vendor oversight, contract clauses, audit rights, SOC/ISO evidence |
| User Training & Awareness | Role-based training for AI users, deployers, operators |
| Agentic AI (new in v2.1) | Tool-calling agents, multi-agent systems, oversight at scale |
| Retirement and Decommissioning | When and how to retire deployed AI; data and model preservation |
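To make the drift-detection row above concrete: one of the most common monitoring checks in practice is the Population Stability Index (PSI) on a single model input. The sketch below is a hedged illustration; the 0.10/0.25 alert thresholds are conventional industry rules of thumb, not regulatory requirements.

```python
# Hedged sketch of one common data-drift check: the Population Stability
# Index (PSI) comparing a baseline (training-time) sample to production.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a production sample of one feature."""
    lo, hi = min(expected), max(expected)

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1) if hi > lo else 0
            counts[max(idx, 0)] += 1
        # floor at a tiny value so the log is defined for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                 # training distribution
production = [0.3 + 0.7 * i / 100 for i in range(100)]   # shifted upward

score = psi(baseline, production)
print(f"PSI = {score:.2f}")
print("action:", "retrain/review" if score > 0.25 else
                 "watch" if score > 0.10 else "stable")
```

On the exam, the governance answer is never "the statistic itself" but what it triggers: a drift alert should route to a documented response (human review, re-validation, re-training) per the monitoring plan.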
Human Oversight Design
The EU AI Act's human oversight requirements (Article 14 for high-risk AI) are repeatedly tested. Candidates must design oversight that enables the human overseer to:
- Fully understand the AI system's capacities and limitations
- Remain aware of automation bias
- Correctly interpret the AI system's output
- Decide not to use the AI system or override, reverse, or disregard its output
- Intervene or halt operation via a "stop button" or equivalent
AI Incident Lifecycle
A faithful rendering of the AI incident lifecycle (heavily tested):
- Detection — monitoring, user reports, third-party disclosure
- Analysis — validate, classify severity, scope impact
- Containment — pause, restrict use, route to manual review, fall back to non-AI process
- Eradication & Recovery — root cause, corrective action, re-validation
- Notification — internal escalation, executive, legal, regulators (per obligation), affected individuals (where required)
- Post-Incident Review — lessons learned, control improvements, plan updates
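The lifecycle above can be drilled as a triage exercise. The helper below is a hypothetical study aid, not IAPP guidance: the severity labels and action strings are invented for illustration, but the branching mirrors the exam's expected logic (containment for severe incidents, the EU reporting clock for high-risk systems, individual notification where duties apply).

```python
# Illustrative triage sketch (assumptions, not IAPP guidance): map an AI
# incident's attributes to first actions in the lifecycle. Severity labels
# and action strings are hypothetical.

def triage(severity: str, affects_individuals: bool, eu_high_risk: bool) -> list:
    actions = ["validate and classify (Analysis)"]
    if severity in ("high", "critical"):
        actions.append("pause or route to manual review (Containment)")
    if eu_high_risk:
        actions.append("assess EU AI Act serious-incident reporting clock (Notification)")
    if affects_individuals:
        actions.append("evaluate duty to notify affected individuals (Notification)")
    actions.append("schedule post-incident review")
    return actions

for step in triage("critical", affects_individuals=True, eu_high_risk=True):
    print("-", step)
```

Running hypothetical incidents through a checklist like this builds the reflex the scenario questions test: follow the documented plan in order, and never skip notification analysis because containment succeeded.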
EU AI Act Serious Incident Reporting
For high-risk AI in the EU, providers must report serious incidents (defined in Article 3) to the national market surveillance authority within specified timeframes — generally 15 days from awareness, with shorter windows for widespread infringement or death-related events. Memorize: the reporting obligation belongs to the provider, not the deployer (though deployers have immediate cooperation duties).
Third-Party AI Governance (Updated v2.1)
BoK v2.1 strengthened emphasis on third-party and vendor AI governance:
- Vendor due diligence (SOC 2 Type II, ISO 27001, ISO 42001 evidence)
- Contract clauses (data use, retraining permissions, liability, notification, audit rights)
- Continuous vendor monitoring (periodic reassessment, incident notifications)
- Sub-processor and supply-chain visibility
- Exit and data-return provisions
Cross-Domain High-Yield Concepts
These concepts appear across 20-30% of AIGP questions regardless of labeled domain.
The "Governance Professional Frame"
When in doubt, pick the answer that:
- Respects role-based authority (provider vs deployer, business owner vs security, committee vs individual)
- Follows the documented policy or runbook rather than improvising
- Prioritizes risk-based, proportionate response over maximal or minimal
- Ensures transparency, documentation, and accountability rather than hiding or deferring
- Protects affected individuals (data subjects, users, third parties) as much as the organization
AI Risk Taxonomy
| Category | Examples |
|---|---|
| Bias / Fairness | Disparate impact, disparate treatment, representational harms |
| Privacy | Membership inference, model inversion, training data leakage |
| Safety | Physical harm (robotics, autonomous vehicles), psychological harm |
| Security | Adversarial examples, model theft, data poisoning, prompt injection |
| Accuracy / Robustness | Hallucination, drift, brittleness under distribution shift |
| Explainability | Black-box opacity for high-stakes decisions |
| Accountability | Unclear ownership, untraceable decisions, no appeal path |
| Environmental | Training and inference energy consumption |
| Societal / Democratic | Misinformation, manipulation, erosion of public discourse |
| IP / Copyright | Training data rights, output ownership, infringement risk |
| Labor / Workforce | Automation displacement, de-skilling, surveillance |
Metrics That Matter to the Board
Avoid the rookie mistake of reporting activity metrics to executives. Translate into business and risk outcomes:
| Avoid (Activity) | Prefer (Outcome) |
|---|---|
| "We reviewed 120 AI use cases" | "AI risk exposure reduced from High to Moderate; 14 high-risk use cases governed or rejected" |
| "We completed 30 AI training sessions" | "AI acceptable-use policy compliance rose from 62% to 94% measured quarterly" |
| "We monitored 8 deployed models" | "Drift-triggered human review prevented 3 material errors in Q2 with zero customer impact" |
| "We implemented NIST AI RMF" | "NIST AI RMF Govern function maturity rose from Partial to Repeatable; CMF-eligible evidence reduced audit findings 45%" |
Pass Rate & Difficulty Reality Check
IAPP does not publish official AIGP pass rates. Here is what we know from training providers, candidate surveys, and community data:
| Source | Reported First-Time Pass Rate |
|---|---|
| Training Camp AIGP Boot Camp (as advertised) | ~94% (committed 2-day boot camp students) |
| Prabh Nair / 22Academy course completers | ~80-85% |
| Reddit / LinkedIn AIGP self-reports | 65-75% |
| Industry average across all candidates | ~65-75% |
| Candidates using official IAPP practice exam + BoK v2.1 + scenario-based bank | 80-90% |
| Candidates who read BoK only, no scenario practice | 50-60% |
Plan on 60-100 hours of study. Do not schedule the exam until you are consistently scoring 75%+ on full-length timed practice exams aligned to BoK v2.1.
FREE AIGP Practice, Round 2
Practice is what separates first-time passers from retakers. The AIGP is a scenario-heavy exam and nothing prepares you for scenario questions except scenario questions.
Start practicing now — practice questions with detailed explanations
8-Week AIGP Study Plan
This plan assumes 8-10 hours per week. Scale to 12 weeks if you are new to both AI and governance. Scale to 6 weeks if you are an experienced privacy/compliance professional already using NIST AI RMF or EU AI Act.
Week 0: Setup
- Download AIGP BoK v2.1 (free from iapp.org/certify/aigp) — print it, keep it next to your monitor
- Download the AIGP Candidate Handbook (free) — read end-to-end
- Download the IAPP AIGP Study Guide (free) — covers exam format and sample questions
- Decide: IAPP membership ($295) yes/no; register for exam slot 6-8 weeks out
- Set up free and paid practice materials; build a BoK v2.1 checklist
Week 1: Domain I (Foundations)
- Read BoK Domain I and all linked IAPP Key Terms for AI Governance
- Read NIST AI 100-1 (AI RMF 1.0) Introduction + Govern function
- Draft a one-page AI Governance Charter for a fictional organization
- Practice: 25 Domain I questions; analyze every wrong answer
Week 2: Domain I Completion + Start Domain II
- Finalize Domain I; own the Ethical vs Responsible vs Trustworthy AI distinction
- Begin EU AI Act (Articles 1-15 focus: definitions, prohibited, high-risk classification)
- Practice: 25 questions mixing Domain I and Domain II intro
Week 3: Domain II — EU AI Act Deep Dive
- EU AI Act provider vs deployer obligations; GPAI rules; enforcement and penalties
- US state AI laws survey (Colorado AI Act, NYC LL 144, Illinois HB 3773, CA SB 942)
- Privacy law intersections (GDPR Art. 22, CCPA/CPRA ADM, China PIPL automated decisions)
- Practice: 40 EU AI Act questions with rationale review
Week 4: Domain II — Frameworks + Standards
- NIST AI RMF 1.0 (4 functions), Playbook, ARIA
- ISO/IEC 42001 clauses and Annex A controls (expanded in v2.1)
- ISO/IEC 23894 (AI risk management) and ISO/IEC 22989 (terminology)
- OECD AI Principles; Council of Europe AI Convention
- Build your requirement-to-control mapping table
- Practice: 40 Domain II mixed questions
Week 5: Domain III — Governing AI Development
- AI development lifecycle, AIIA methodology
- Bias detection and mitigation (pre/in/post-processing); fairness metrics
- Model cards, data sheets for datasets, system cards, EU AI Act Annex IV technical doc
- Go/no-go scenarios: run a mock review for 3 fictional models
- Practice: 40 Domain III questions
Week 6: Domain IV — Governing AI Deployment and Use
- Human oversight models (HITL, HOTL, out-of-the-loop)
- Monitoring (drift, fairness, performance); alert thresholds
- EU AI Act Article 14 (oversight), Article 72 (post-market monitoring), serious incident reporting
- Agentic AI governance (new in v2.1)
- Third-party and procurement governance (expanded in v2.1)
- Draft an AI monitoring and incident runbook for a fictional deployed model
- Practice: 40 Domain IV questions
Week 7: Full Mock Exams + Weakness Targeting
- Take 2 full-length timed practice exams in 2h 45min blocks
- After each, spend 6 hours analyzing wrong answers by domain and by "why I got this wrong" (knowledge gap, wrong IAPP frame, misread)
- Re-study weak areas; revisit governance charter, requirement-control map, monitoring runbook
Week 8: Taper + Final Mock + Exam
- Light review only — no new material
- Day 1: final full mock exam at the same time of day you will sit the real exam; target 75%+
- Days 2-5: targeted flashcard review (EU AI Act risk classes, provider vs deployer, NIST AI RMF functions, ISO 42001 clauses, serious incident windows)
- Day 6: rest
- Day 7: exam day
Recommended Resources (Free-First)
Free
| Resource | Why |
|---|---|
| AIGP Body of Knowledge v2.1 (iapp.org) | THE primary source. Every exam question traces back here. |
| AIGP Candidate Handbook (iapp.org) | Testing policies, scoring, retake, proctoring rules |
| IAPP AIGP Study Guide (free PDF from iapp.org) | Format overview + sample questions from IAPP itself |
| IAPP Key Terms for AI Governance (iapp.org) | Official glossary — critical for vocabulary precision |
| NIST AI RMF 1.0 + Playbook + ARIA (nist.gov) | Core US framework — cited extensively on exam |
| EU AI Act full text (EUR-Lex) | Primary source — especially Articles 3, 5-15, 16-29, 50-55, 72-73 |
| ISO/IEC 42001:2023 summary | High-level structure sufficient for exam |
| OECD AI Principles (oecd.org) | 5 values-based principles, 5 recommendations |
| Oliver Patel Unofficial AIGP Resource Guide | 100+ curated readings aligned to BoK |
| OpenExamPrep Free AIGP Practice | BoK v2.1-aligned questions with AI tutor explanations — start here |
| IAPP Webinars & KnowledgeNet | Free community content; counts as CPE post-certification |
Paid (Only After Exhausting Free)
| Resource | What It Is | Who Should Buy |
|---|---|---|
| IAPP Official AIGP Online Training | IAPP-authored course covering the full BoK | Candidates who want the most BoK-aligned course; $995 member / $1,195 non-member |
| IAPP AIGP Practice Exam (Digital) | 100 official IAPP questions with rationales | Every candidate. $50 member / $60 non-member. Non-negotiable. |
| Training Camp AIGP Boot Camp | 2-day accelerated course with exam guarantee | Candidates who need structured, time-boxed prep |
| Privacy Bootcamp AIGP Course | Subscription e-learning with flashcards, practice | Self-paced learners who want breadth |
| 22Academy AIGP Practice Exams | Community-cited as the most BoK v2.1-accurate paid question bank | Candidates who want the highest-quality scenario practice |
| AI Career Pro AIGP Prep | Practitioner-led video course aligned to BoK v2.1 | Candidates who want instructor-led depth |
| LinkedIn Learning AIGP Cert Prep | Video course on LinkedIn | LinkedIn Premium subscribers |
The lean budget stack: Official BoK v2.1 + IAPP AIGP Practice Exam ($50 member) + one community course + free OpenExamPrep practice. Total: under $200 excluding exam and membership.
Exam-Day Strategy: The AIGP Pacing Game
The AIGP is 100 questions in 2h 45min (165 minutes) — plus an optional 15-minute break. That is roughly 1 minute 39 seconds per question. Scenario questions can easily consume 3 minutes; knowledge questions under 30 seconds. Use the flagging feature aggressively.
Pacing
- Minute 0-55: Answer questions 1-35. Flag anything over 90 seconds and move on.
- Minute 55-110: Answer questions 36-70.
- (Optional 15-min break)
- Minute 110-155: Answer questions 71-100.
- Minute 155-165: Revisit flagged questions. First instincts are correct ~75% of the time; change an answer only when you have a concrete reason.
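The pacing arithmetic above is easy to sanity-check. A quick sketch in Python (the checkpoint values simply mirror the plan above):

```python
# Sanity-check the AIGP pacing plan: 100 questions in 165 minutes.
TOTAL_MINUTES = 165
TOTAL_QUESTIONS = 100

seconds_per_question = TOTAL_MINUTES * 60 / TOTAL_QUESTIONS
print(f"Average budget: {seconds_per_question:.0f} seconds per question")  # 99 s = 1 min 39 s

# Checkpoint schedule from the pacing plan above: (end minute, last question done)
checkpoints = [(55, 35), (110, 70), (155, 100)]
for end_minute, last_question in checkpoints:
    print(f"By minute {end_minute}: through question {last_question}")
# Minutes 155-165 are reserved for reviewing flagged questions.
```

Note the front-loaded budget: 55 minutes for the first 35 questions, but only 45 for the last 30, so falling behind early compounds.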
The AIGP Question Archetypes
Every AIGP question falls into one of three archetypes. Identify which before you answer:
| Archetype | Signal | Strategy |
|---|---|---|
| Knowledge Check | "Which of the following is defined as..." | Pick the IAPP-canonical definition. Move fast. |
| Scenario / Best Answer | A 3-8 sentence scenario ending in "What is the BEST action for the AI governance professional?" | Identify the role, apply governance frame, eliminate technical-only and authority-overstepping answers |
| First / Next / Greatest | "What should the governance lead do FIRST?" / "Which presents the GREATEST risk?" | Read all options — all may be plausible. Pick based on the risk + role frame. |
The Elimination Engine
For hard questions, eliminate in this order:
- Eliminate technical-only answers. AIGP tests governance, not ML engineering.
- Eliminate authority-overstepping answers. The governance lead does not accept business risk unilaterally, does not bypass committee review, does not disclose publicly without counsel and executive sign-off.
- Eliminate absolutes. "Always," "never," "all," "immediately and without review" are almost always wrong.
- Eliminate answers that ignore role (provider vs deployer, controller vs processor). If the obligation belongs to a different role, the answer is wrong.
- Choose the answer a responsible AI governance professional would defend to executives, regulators, and affected individuals.
OnVUE (Remote Proctor) Tips
- Test webcam, microphone, and bandwidth 24 hours before
- Clear your desk entirely — empty walls behind you, no papers, no phone, no smartwatch
- Have government photo ID ready; complete check-in 30 minutes before start
- Use a wired connection if possible; close every application except the OnVUE client
- Restroom breaks are allowed, but the exam clock keeps running; the clock pauses only during the 15-minute scheduled break
Cost Breakdown, Retake Policy & Recertification
Total First-Year Cost (Member Path)
| Item | Cost |
|---|---|
| IAPP Annual Membership | $295 |
| AIGP Exam Fee | $649 |
| IAPP Official Practice Exam | $50 |
| (Optional) Official Online Training | $995 |
| Year 1 Minimum (exam + practice + membership) | $994 |
| Year 1 Full Stack (with training) | $1,989 |
Total First-Year Cost (Non-Member Path)
| Item | Cost |
|---|---|
| AIGP Exam Fee | $799 |
| IAPP Practice Exam (non-member price) | $60 |
| (Optional) Official Online Training | $1,195 |
| Year 1 Minimum | $859 |
| Year 1 Full Stack | $2,054 |
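A quick arithmetic check of the two tables above (all figures copied from the tables; nothing here is new pricing):

```python
# Year-one AIGP cost totals, reproducing the member and non-member tables above.
member = {"membership": 295, "exam": 649, "practice_exam": 50, "training": 995}
non_member = {"exam": 799, "practice_exam": 60, "training": 1195}

def year_one_totals(costs):
    """Return (minimum, full_stack); official training is the optional line item."""
    minimum = sum(fee for item, fee in costs.items() if item != "training")
    return minimum, minimum + costs["training"]

print(year_one_totals(member))      # (994, 1989)
print(year_one_totals(non_member))  # (859, 2054)
```

Note the wrinkle this makes visible: the non-member minimum ($859) is cheaper than the member minimum ($994) in year one, because membership dues outweigh the exam discount until the CMF and training discounts kick in.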
Membership Math
Membership costs $295/year and saves $150 on the exam plus $250 on the biennial CMF. Over a two-year term that is $400 in savings against $590 in dues, so membership roughly breaks even once you add a single discounted purchase (official training alone saves $200) and wins outright in year 3+ via discounted resources, lower training prices, conferences, and CPE access. If you plan to stack CIPP, CIPM, or CIPT with AIGP, membership is clearly worth it.
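The break-even math can be sketched as follows (a rough model using only the fees quoted in this guide: the CMF is treated as one $250 charge per two-year term, and the training discount is the member vs non-member price gap):

```python
# Two-year membership break-even sketch, using the fees quoted in this guide.
ANNUAL_DUES = 295
EXAM_DISCOUNT = 799 - 649        # member vs non-member exam fee
CMF_WAIVED = 250                 # biennial CMF, included with membership
TRAINING_DISCOUNT = 1195 - 995   # optional official training

two_year_dues = 2 * ANNUAL_DUES                       # 590
baseline_savings = EXAM_DISCOUNT + CMF_WAIVED         # 400
with_training = baseline_savings + TRAINING_DISCOUNT  # 600

print(two_year_dues, baseline_savings, with_training)
# Exam and CMF savings alone (400) fall short of two years of dues (590);
# adding one discounted training purchase (600) tips membership to break-even.
```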
Retake Policy
- After a failed attempt, wait 7 days before retesting
- Retake fee: $475 member / $625 non-member
- Each purchased exam must be scheduled and taken within one year of purchase
Recertification (2-Year Cycles)
- 20 CPE credits per 2-year cycle aligned to AIGP BoK content
- Certification Maintenance Fee: $250 per term for non-members; included in IAPP membership
- All IAPP certifications share continuing-education rules; term alignment applies if you hold multiple
- CPE activities include IAPP webinars, conferences (Global Privacy Summit, AIGG Europe, AIGG Global), KnowledgeNet chapter events, IAPP training, approved university courses, writing articles, teaching, and committee service
Salary & Career: What an AIGP Actually Earns
IAPP's 2024 Privacy Professionals Salary Survey and 2026 job-market data converge on these US numbers:
| Role | AIGP-Certified Base Salary (US, 2026) |
|---|---|
| AI Governance Analyst | $85,000 - $115,000 |
| AI Risk Manager | $110,000 - $145,000 |
| AI Compliance Manager | $100,000 - $140,000 |
| AI Governance Manager / Lead | $130,000 - $170,000 |
| Responsible AI / AI Ethics Lead | $125,000 - $165,000 |
| Privacy Counsel with AI Governance | $165,000 - $205,000 |
| Director, AI Governance | $175,000 - $235,000 |
| Chief AI Officer / VP AI Governance | $200,000 - $320,000+ |
| vCAIO / Fractional AI Governance (consulting) | $250 - $600/hour |
The AIGP + CIPP Premium
IAPP's 2024 survey reports that professionals whose roles encompass both privacy and AI governance earn a median of $169,700 in base salary — roughly $18,000 more than AI-only roles ($151,800). Holding one IAPP certification correlates with ~13% higher salaries; holding multiple IAPP certifications correlates with ~27% higher salaries. AIGP + CIPP + (optionally) CIPM or CIPT is the most lucrative credential stack IAPP offers.
Career Paths
- Privacy Pro Adds AIGP: Privacy Manager → AI Governance Manager → Director of Privacy & AI. Fastest path to $170K+.
- Compliance Adds AIGP: Compliance Manager → AI Compliance Lead → Chief Compliance Officer with AI scope.
- Risk/Audit Adds AIGP: IT Risk Manager → AI Risk Lead → CRO track with AI emphasis.
- Legal Adds AIGP: Privacy Counsel → AI & Technology Counsel → General Counsel for AI-heavy organizations.
- Product Adds AIGP: Product Manager → Responsible AI Product Lead → Head of AI Product at enterprise.
- Consulting Track: AIGP + CIPP → Manager / Senior Manager at Big 4 or boutique advisory, where the pairing is standard at the Manager level.
Common Mistakes That Tank First-Time Candidates
Mistake #1: Studying Outdated Materials
If your materials reference the 7-domain structure (pre-February 2025) or v2.0.1 (pre-February 2026), you are prepping for a different exam. BoK v2.1 is current as of 2 February 2026. Verify your training provider has updated.
Mistake #2: Reading Without Scenario Practice
Reading the BoK and EU AI Act cover-to-cover does NOT prepare you for scenario questions. The AIGP's scenario items are designed so three of four options look reasonable. Only structured scenario practice with full rationales builds the elimination reflex.
Mistake #3: Under-Studying Domains III and IV
Domains III (Development) and IV (Deployment/Use) total 42-50 of 85 scored questions — 50-59%. Candidates who over-invest in Domain I (Foundations) and Domain II (Laws/Frameworks) and skim the lifecycle domains routinely fail. Start Domain III by Week 5 of an 8-week plan.
Mistake #4: Memorizing Laws Without Understanding Roles
The EU AI Act is role-specific. A question about a company fine-tuning a foundation model and deploying it under its own brand has a different correct answer than a question about a company only using a vendor's AI system. Provider vs deployer vs importer vs distributor vs GPAI provider — memorize the obligation split.
Mistake #5: Treating AIGP Like a Technical Exam
The correct answer is almost never "build the most technically sophisticated AI safety guardrails." It is almost always "identify the risk, document the assessment, present options to the business and governance committee, and implement the governance-approved risk treatment." If you are picking answers like an ML engineer, you will fail.
Mistake #6: Ignoring Documentation Specifics
Model cards, datasheets for datasets, EU AI Act Annex IV technical documentation, AI impact assessments (AIIAs), fundamental rights impact assessments (FRIAs) — each has a specific purpose and lifecycle stage. Questions that ask "which artifact is required at this point" have exact answers. Memorize the artifact-to-stage map.
Mistake #7: Skipping Domain IV Runbook Practice
Incident response for AI is tested heavily, and candidates who have never drafted an AI monitoring and incident runbook miss these questions repeatedly. Spend Week 6 building one for a fictional deployed system. Define KPIs, thresholds, alerts, escalation, notification, and post-incident review.
Mistake #8: Not Taking the IAPP Official Practice Exam
The IAPP Official AIGP Practice Exam ($50 member / $60 non-member) is 100 questions written by IAPP to match real exam depth and phrasing. It is the single most predictive resource of your real-exam readiness. Take it at the end of Week 7.
AIGP vs Adjacent Certifications
| Cert | Body | Focus | Experience | Best For |
|---|---|---|---|---|
| AIGP | IAPP | AI governance across lifecycle | None formal | AI governance leads, privacy + AI roles, compliance |
| CIPP/E | IAPP | EU GDPR and privacy law | None formal | EU privacy professionals |
| CIPP/US | IAPP | US federal and state privacy | None formal | US privacy professionals |
| CIPM | IAPP | Privacy program management | None formal | Privacy program managers |
| CIPT | IAPP | Privacy in technology (engineering) | None formal | Privacy engineers, product/security |
| ISO/IEC 42001 Lead Auditor / Lead Implementer | PECB/BSI/etc. | AI MS audit or implementation | Varies | AI MS program implementers and auditors |
| MIT/Stanford/Wharton AI Governance Programs | University | Executive education | Varies | Senior leaders wanting brand-name cert |
| Responsible AI Institute RAII Certifications | RAI Institute | Responsible AI management | Varies | Practitioners in RAII ecosystem |
AIGP vs CIPP/E or CIPP/US
Not competing. Complementary. CIPP teaches privacy law; AIGP teaches AI governance across the lifecycle. The highest-value stack in 2026 is AIGP + CIPP, which also qualifies holders for the IAPP Fellow of Information Privacy (FIP) designation.
AIGP vs ISO/IEC 42001 Lead Implementer
AIGP is a knowledge credential: it certifies that you understand AI governance across the lifecycle. ISO/IEC 42001 Lead Implementer is a role-based credential tied to a specific deliverable: implementing an AI management system per ISO 42001. They are complementary: AIGP gives you the governance framework literacy; 42001 Lead Implementer certifies you to build a 42001-compliant AI MS in an organization. Many AI governance professionals hold both.
Stacking Strategy
- AIGP + CIPP/E: EU-focused AI governance + privacy. Highly valued in EU-headquartered companies.
- AIGP + CIPP/US + CIPM: US-centric AI + privacy + program management. Top US stack.
- AIGP + CIPT: AI governance + privacy engineering. Great for AI-heavy product orgs.
- AIGP + ISO 42001 Lead Implementer: Governance framework + implementation credential. Consulting stack.
- AIGP + CISM / CRISC: AI governance + security management / IT risk. For security-owned AI programs.
Your Next Steps After AIGP
Natural follow-ups:
- CIPP/E or CIPP/US — if you do not already hold one
- CIPM — for privacy program management depth
- CIPT — for privacy-in-technology engineering depth
- ISO/IEC 42001 Lead Implementer or Lead Auditor — for implementation/audit credentialing
- Responsible AI Institute certifications — for RAII ecosystem alignment
- CISM / CRISC — if you want to own AI security and IT risk
IAPP's Fellow of Information Privacy (FIP) designation requires AIGP (or CIPT, or CIPM) plus any CIPP. FIP is the IAPP's highest credential and signals elite cross-disciplinary expertise.
Final CTA: Start Practicing Today
The AIGP is pass-able with a clear roadmap. The candidates who fail almost always share one trait: they studied concepts but never practiced scenarios. You can fix that right now.
Start practicing now: practice questions with detailed explanations
The 2026 AI governance job market has more openings than qualified candidates. AIGP is the fastest credential path into those openings. The only thing between you and an AI Governance Manager title is the 100-question exam — and a study plan that actually works.
Good luck. You can do this.
Official Sources
- IAPP AIGP program home: https://iapp.org/certify/aigp
- IAPP Store — AIGP Exam: https://store.iapp.org/aigp-exam/
- IAPP AIGP Training: https://iapp.org/train/aigp-training
- IAPP Certification FAQs: https://iapp.org/certify/faqs
- IAPP AIGP Body of Knowledge v2.1 (effective 2 February 2026) — approved 9 September 2025
- IAPP Certification Candidate Handbook — scoring, policies, conduct
- IAPP AIGP Study Guide (free PDF): via iapp.org/l/aigp-study-guide-request
- NIST AI Risk Management Framework 1.0: https://www.nist.gov/itl/ai-risk-management-framework
- EU AI Act (Regulation (EU) 2024/1689): https://eur-lex.europa.eu
- ISO/IEC 42001:2023 AI management system standard: https://www.iso.org/standard/81230.html
- ISO/IEC 23894:2023 AI risk management guidance: https://www.iso.org/standard/77304.html
- ISO/IEC 22989:2022 AI concepts and terminology: https://www.iso.org/standard/74296.html
- OECD AI Principles: https://oecd.ai/en/ai-principles
- Pearson VUE (exam delivery): https://www.pearsonvue.com/iapp
Information current as of April 2026. Always verify specific fees, dates, and eligibility details at iapp.org/certify/aigp before registering.