Use this quick start guide to collect all the information about the IAPP AIGP certification exam. This study guide provides a list of objectives and resources that will help you prepare for items on the IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam. The Sample Questions will help you identify the type and difficulty level of the questions, and the Practice Exams will familiarize you with the format and environment of the exam. You should review this guide carefully before attempting your actual IAPP Certified Artificial Intelligence Governance Professional (AIGP) certification exam.
The IAPP AIGP certification is mainly targeted at candidates who want to build their career in the AI governance domain. The IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam verifies that the candidate possesses the fundamental knowledge and proven skills in the area of AI governance.
IAPP AIGP Exam Summary:
Exam Name | IAPP Certified Artificial Intelligence Governance Professional (AIGP) |
Exam Code | AIGP |
Exam Price | First Time: Member - $649, Non-Member - $799; Retake: Member - $475, Non-Member - $625 (USD) |
Duration | 180 mins |
Number of Questions | 100 |
Passing Score | 300 / 500 |
Books / Training | AIGP Body of Knowledge and Exam Blueprint, AIGP Handbook |
Schedule Exam | Pearson VUE |
Sample Questions | IAPP AIGP Sample Questions |
Practice Exam | IAPP AIGP Certification Practice Exam |
IAPP Artificial Intelligence Governance Professional Exam Syllabus Topics:
Topic | Details |
---|---|
Understanding the Foundations of AI governance | |
Understand what AI is and why it needs governance. | - Know the generally accepted definitions and types of AI. - Identify the types of risks and harms posed by AI to individuals, groups, organizations and society (e.g., misalignment with objectives, ethics and bias risk, and complexity and scalability). - Identify the unique characteristics of AI that require a comprehensive approach to governance (e.g., complexity, opacity, autonomy, speed and scale, potential for harm or misuse, data dependency, and probabilistic versus deterministic outputs). - Identify and apply the common principles of responsible AI (e.g., fairness, safety and reliability, privacy and security, transparency and explainability, accountability and human-centricity). |
Establish and communicate organizational expectations for AI governance. | - Define roles and responsibilities for AI governance stakeholders. - Establish cross-functional collaboration in the AI governance program (e.g., for efficacy and diversity of expertise and perspective). - Create and deliver a training and awareness program to all stakeholders on AI terminology, strategy and governance. - Differentiate approaches to AI governance based upon company size, maturity, industry, products and services, objectives and risk tolerance. - Identify differences among AI developers, deployers and users from a governance perspective (e.g., with respect to responsibilities, opportunities and needs). |
Establish policies and procedures to apply throughout the AI life cycle. | - Create and implement policies to ensure oversight and accountability across all AI life cycle stages (e.g., use case assessment, risk management, ethics by design, data acquisition and use, model development, training and testing, deployment and monitoring, documentation and reporting, and incident management). - Evaluate and update existing data privacy and security policies for AI. - Create and implement policies to manage third-party risk (e.g., procurement, supply chain and human resources). |
Understanding how laws, standards and frameworks apply to AI | |
Understand how existing data privacy laws apply to AI. | - Understand how notice, choice, consent, and purpose limitation requirements apply to AI. - Understand how data minimization and privacy by design requirements apply to AI. - Understand how obligations on data controllers apply to AI (e.g., regarding privacy impact assessments, use of third-party processors, cross-border data transfers, data subject rights, incident management, breach notification and record keeping). - Understand the requirements that apply to sensitive or special categories of data (e.g., biometrics). |
Understand how other types of existing laws apply to AI. | - Understand how intellectual property laws apply to AI (e.g., prohibiting or limiting use of data for AI training). - Understand how non-discrimination laws apply to AI (e.g., in the employment, credit, lending, housing and insurance contexts). - Understand how consumer protection laws apply to AI (e.g., prohibiting unfair and deceptive acts or practices). - Understand how product liability laws apply to AI (e.g., prohibiting design or manufacturing defects). |
Understand the main elements of the EU AI Act. | - Understand the risk classification framework for AI (i.e., prohibited AI, high-risk, limited-risk and minimal-risk) and what systems fall into each category. - Understand the key requirements for high-risk, limited-risk and minimal-risk AI, including risk management, data governance, technical documentation, conformity assessment, record keeping, human oversight, transparency and notification, and quality management (as applicable). - Understand the distinct requirements for general purpose AI models. - Understand the enforcement framework and penalties for non-compliance. - Understand the differences in requirements based on organizational context (e.g., providers, deployers, importers, and distributors). |
Understand the main industry standards and tools that apply to AI. | - Understand the OECD principles, framework, policies and recommended practices for trustworthy AI. - Understand the NIST AI Risk Management Framework and Playbook (e.g., the core functions, categories and subcategories). - Understand the NIST ARIA program for methodologies, tools, metrics and measurements on AI safety. - Understand the core ISO AI standards (i.e., 22989 and 42001). |
Understanding how to govern AI development | |
Govern the designing and building of the AI model. | - Define the business context and use case of the AI model. - Perform or review an impact assessment on the AI model. - Identify laws that apply to the AI model. - Apply the policies, procedures, best practices and ethical considerations to designing and building the AI model (e.g., purpose of AI, requirements gathering, architecture and model selection, human oversight, data analysis, metric and threshold evaluation, stakeholder engagement and feedback, and operational controls). - Identify and manage the internal and external risks and contributing factors related to designing and building the AI model (e.g., using a probability/severity harms matrix, using a risk mitigation hierarchy, stakeholder mapping, use case evaluation, benchmarking, pre-deployment pilots and testing). - Document the designing and building process (e.g., to establish compliance and manage risks). |
Govern the collection and use of data in training and testing the AI model. | - Establish and follow the requirements for data governance (e.g., assess and document lawful rights to collect and use data, and assess data quality, quantity, integrity and fitness for purpose). - Establish and document data lineage and provenance. - Plan and perform training and testing of the AI model (e.g., unit, integration, validation, performance, security, bias and interpretability). - Identify and manage issues and risks during training and testing of the AI model. - Document the training and testing process (e.g., to validate results, establish compliance and manage risks). |
Govern the release, monitoring and maintenance of the AI model. | - Assess readiness and prepare for release into production (e.g., creating the model card and satisfying conformity requirements). - Conduct continuous monitoring of the AI model and establish a regular schedule for maintenance, updates and retraining. - Conduct periodic activities to assess the AI model's performance, reliability and safety (e.g., audits, red teaming, threat modeling and security testing). - Manage and document incidents, issues and risks. - Collaborate with cross-functional stakeholders to understand why incidents arise from AI models (e.g., brittleness, lack of robustness, lack of quality data, insufficient testing, and model or data drift). - Make public disclosures to meet transparency obligations (e.g., technical documentation, instructions for use to deployers, and post-market monitoring plans). |
Understanding how to govern AI deployment and use | |
Evaluate key factors and risks relevant to the decision to deploy the AI model. | - Understand the context of the AI use case (e.g., business objectives, performance requirements, data availability, ethical considerations and workforce readiness). - Understand the differences in AI model types (e.g., classic vs generative, proprietary vs open source, small vs large, and language vs multimodal capabilities). - Understand the differences in AI deployment options (e.g., cloud vs on-premise vs edge, and using the AI model as-is or with fine-tuning, retrieval augmented generation, or other techniques to improve performance and fit). |
Perform key activities to assess the AI model. | - Perform or review an impact assessment on the selected AI model. - Identify laws that apply to the AI model. - Identify and evaluate key terms and risks in the vendor or open source agreement. - Identify and understand issues that are unique to a company deploying its own proprietary AI model (e.g., increased obligations and higher potential liability). |
Govern the deployment and use of the AI model. | - Apply the policies, procedures, best practices and ethical considerations to the deployment of an AI model (e.g., data governance, risk management, issue management, user training). - Conduct continuous monitoring of the AI model and establish a regular schedule for maintenance, updates and retraining. - Conduct periodic activities to assess the AI model's performance, reliability and safety (e.g., audits, red teaming, threat modeling and security testing). - Document incidents, issues, risks and post-market monitoring plans. - Forecast and reduce risks of secondary or unintended uses and downstream harms. - Establish external communication plans. - Create and implement a policy and controls to deactivate or localize an AI model as necessary (e.g., due to regulatory requirements or performance issues). |
To ensure success in the IAPP Artificial Intelligence Governance Professional certification exam, we recommend an authorized training course, practice tests and hands-on experience to prepare for the IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam.