IAPP Certified Artificial Intelligence Governance Professional (AIGP) Exam Syllabus

Use this quick start guide to collect all the information about the IAPP AIGP certification exam. This study guide provides a list of objectives and resources that will help you prepare for items on the IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam. The sample questions will help you identify the type and difficulty level of the questions, and the practice exams will familiarize you with the format and environment of the exam. You should refer to this guide carefully before attempting your actual IAPP Certified Artificial Intelligence Governance Professional (AIGP) certification exam.

The IAPP AIGP certification is mainly targeted at candidates who want to build their career in the AI governance domain. The IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam verifies that the candidate possesses the fundamental knowledge and proven skills in AI governance.

IAPP AIGP Exam Summary:

- Exam Name: IAPP Certified Artificial Intelligence Governance Professional (AIGP)
- Exam Code: AIGP
- Exam Price (first time): Member $649; Non-Member $799 (USD)
- Exam Price (retake): Member $475; Non-Member $625 (USD)
- Duration: 180 minutes
- Number of Questions: 100
- Passing Score: 300 / 500
- Books / Training: AIGP Body of Knowledge and Exam Blueprint; AIGP Handbook
- Schedule Exam: Pearson VUE
- Sample Questions: IAPP AIGP Sample Questions
- Practice Exam: IAPP AIGP Certification Practice Exam

IAPP Artificial Intelligence Governance Professional Exam Syllabus Topics:


Understanding the Foundations of Artificial Intelligence

Understand the basic elements of AI and ML:
- Understand widely accepted definitions of AI and ML, and the basic logical-mathematical principles on which AI/ML models operate.
- Understand common elements of AI/ML definitions under new and emerging law:
  • Technology (engineered or machine-based system; or logic, knowledge, or learning algorithm).
  • Automation (elements of varying levels).
  • Role of humans (define objectives or provide data).
  • Output (content, predictions, recommendations, or decisions).

- Understand what it means that an AI system is a socio-technical system.
- Understand the need for cross-disciplinary collaboration (ensure experts in UX, anthropology, sociology and linguistics are involved and valued).
- Knowledge of the OECD framework for the classification of AI systems.
- Understand the use cases and benefits of AI (recognition, event detection, forecasting, personalization, interaction support, goal-driven optimization, recommendation).

Understand the differences among types of AI systems:
- Understand the differences between strong/broad and weak/narrow AI.
- Understand the basics of machine learning and its training methods (supervised, unsupervised, semi-supervised, reinforcement); a minimal sketch contrasting two of these follows this list.
- Understand deep learning, generative AI, multi-modal models, transformer models, and the major providers.
- Understand natural language processing: text as input and output.
- Understand the difference between robotics and robotic process automation (RPA).
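For readers new to these training methods, here is a minimal sketch contrasting supervised and unsupervised learning. It assumes scikit-learn is installed; the iris dataset and model choices are illustrative, not part of the syllabus.

```python
# Contrast two of the training methods named above with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is trained on labeled examples (X, y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the model sees only X and must find structure itself.
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster sizes:",
      [int((km.labels_ == i).sum()) for i in range(3)])
```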
Understand the AI technology stack:
- Platforms and applications.
- Model types.
- Compute infrastructure: software and hardware (servers and chips).
Understand the history of AI and the evolution of data science:
- The 1956 Dartmouth Summer Research Project on Artificial Intelligence.
- AI summers, winters and key milestones.
- Understand how the current environment is fueled by exponential growth in computing infrastructure and tech megatrends (cloud, mobile, social, IoT, PETs, blockchain, computer vision, AR/VR, metaverse).

Understanding AI Impacts on People and Responsible AI Principles

Understand the core risks and harms posed by AI systems:
- Understand the potential harms to an individual (civil rights, economic opportunity, safety).
- Understand the potential harms to a group (discrimination towards sub-groups).
- Understand the potential harms to society (democratic process, public trust in governmental institutions, educational access, jobs redistribution).
- Understand the potential harms to a company or institution (reputational, cultural, economic, acceleration risks).
- Understand the potential harms to an ecosystem (natural resources, environment, supply chain).
Understand the characteristics of trustworthy AI systems:
- Understand what it means for an AI system to be "human-centric."
- Understand the characteristics of an accountable AI system (safe, secure and resilient, valid and reliable, fair).
- Understand what it means for an AI system to be transparent.
- Understand what it means for an AI system to be explainable.
- Understand what it means for an AI system to be privacy-enhanced.
Understand the similarities and differences among existing and emerging ethical guidance on AI:
- Understand how ethical guidance is rooted in the Fair Information Practices, the European Court of Human Rights and the Organization for Economic Cooperation and Development (OECD) principles.
- Key frameworks: the OECD AI Principles; the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights; the EU High-Level Expert Group on AI guidelines; the UNESCO principles; the Asilomar AI Principles; the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; the CNIL AI Action Plan.

Understanding How Current Laws Apply to AI Systems

Understand the existing laws that interact with AI use:
- Know the laws that address unfair and deceptive practices.
- Know relevant non-discrimination laws (credit, employment, insurance, housing, etc.).
- Know relevant product safety laws.
- Know relevant IP law.
- Understand the basic requirements of the EU Digital Services Act (transparency of recommender systems).
- Know relevant privacy laws concerning the use of data.
Understand key GDPR intersections:
- Understand automated decision making, data protection impact assessments (DPIAs), anonymization, and how they relate to AI systems.
- Understand the intersection between requirements for AI conformity assessments and DPIAs.
- Understand the requirements for human supervision of algorithmic systems.
- Understand an individual’s right to meaningful information about the logic of AI systems.
Understand liability reform:
- Awareness of the reform of EU product liability law.
- Understand the basics of the proposed EU AI Liability Directive.
- Awareness of U.S. federal agency involvement (Executive Order 14091).

Understanding the Existing and Emerging AI Laws and Standards

Understand the requirements of the EU AI Act:
- Understand the classification framework of AI systems (prohibited, high-risk, limited risk, low risk).
- Understand requirements for high-risk systems and foundation models.
- Understand notification requirements (customers and national authorities).
- Understand the enforcement framework and penalties for noncompliance.
- Understand procedures for testing innovative AI and exemptions for research.
- Understand transparency requirements, e.g., the registration database.
Understand other emerging global laws:
- Understand the key components of Canada's Artificial Intelligence and Data Act (Bill C-27).
- Understand the key components of U.S. state laws that govern the use of AI.
- Understand the Cyberspace Administration of China’s draft regulations on generative AI.
Understand the similarities and differences among the major risk management frameworks and standards:
- ISO 31000:2018 Risk Management – Guidelines.
- United States National Institute of Standards and Technology, AI Risk Management Framework (NIST AI RMF).
- European Union proposal for a regulation laying down harmonized rules on AI (EU AIA).
- Council of Europe Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA).
- IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns During System Design.
- ISO/IEC Guide 51 Safety aspects – Guidelines for their inclusion in standards.
- Singapore Model AI Governance Framework.

Understanding the AI Development Life Cycle

Understand the key steps in the AI system planning phase:
- Determine the business objectives and requirements.
- Determine the scope of the project.
- Determine the governance structure and responsibilities.
Understand the key steps in the AI system design phase:
- Implement a data strategy that includes:
  • Data gathering, wrangling, cleansing, labeling.
  • Applying PETs such as anonymization, minimization, differential privacy and federated learning (a differential privacy sketch follows this list).
- Determine AI system architecture and model selection (choose the algorithm according to the desired level of accuracy and interpretability).
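As a concrete illustration of one PET named above, here is a minimal differential privacy sketch using the classic Laplace mechanism. It assumes NumPy is installed; the epsilon value, query and salary figures are illustrative assumptions, not syllabus requirements.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Release a count of values above threshold with epsilon-DP noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

salaries = [48_000, 52_000, 61_000, 75_000, 90_000]  # illustrative data
print("noisy count over 60k:", dp_count(salaries, 60_000, epsilon=0.5))
```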

Understand the key steps in the AI system development phase:
- Build the model.
- Perform feature engineering.
- Perform model training.
- Perform model testing and validation (a minimal train/validate sketch follows this list).
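A minimal sketch of the training and validation steps above, assuming scikit-learn is installed; the dataset and model are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Hold out data the model never sees during training, so validation
# measures generalization rather than memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)  # model training
print("held-out accuracy:", model.score(X_test, y_test))  # testing/validation
```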
Understand the key steps in the AI system implementation phase:
- Perform readiness assessments.
- Deploy the model into production.
- Monitor and validate the model (a simple drift-monitoring sketch follows this list).
- Maintain the model.
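A minimal sketch of post-deployment monitoring, assuming NumPy and SciPy are installed; the drift test, significance threshold and synthetic data are illustrative assumptions.

```python
# Compare the distribution of incoming production data against training
# data with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # drifted

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Data drift detected (KS={stat:.3f}); trigger model revalidation.")
else:
    print("No significant drift; continue routine monitoring.")
```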

Implementing Responsible AI Governance and Risk Management

Ensure interoperability of AI risk management with other operational risk strategies:
- Examples: security risk, privacy risk, business risk.
Integrate AI governance principles into the company:
- Adopt a pro-innovation mindset.
- Ensure governance is risk-centric.
- Ensure planning and design are consensus-driven.
- Ensure the team is outcome-focused.
- Adopt a non-prescriptive approach to allow for intelligent self-management.
- Ensure the framework is law-, industry- and technology-agnostic.
Establish an AI governance infrastructure:
- Determine whether you are a developer, deployer (those that make an AI system available to third parties) or user:
  • Understand how responsibilities differ between companies that develop AI systems and those that use or deploy them.
  • Establish governance processes for all parties.
  • Establish a framework for procuring and assessing AI software solutions.
- Establish and understand the roles and responsibilities of AI governance people and groups including, but not limited to, the chief privacy officer, the chief ethics officer, the office for responsible AI, the AI governance committee, the ethics board, architecture steering groups, AI project managers, etc.
- Advocate for AI governance support from senior leadership and tech teams by:
  • Understanding pressures on tech teams to build AI solutions quickly and efficiently.
  • Understanding how data science and model operations teams work.
  • Being able to influence behavioral and cultural change.

- Establish organizational risk strategy and tolerance.
- Develop central inventory of AI and ML applications and repository of algorithms.
- Develop responsible AI accountability policies and incentive structures.
- Understand AI regulatory requirements.
- Set common AI terms and taxonomy for the organization.
- Provide knowledge resources and training to the enterprise to foster a culture that continuously promotes ethical behavior.
- Determine AI maturity levels of business functions and address insufficiencies.
- Use and adapt existing privacy and data governance practices for AI management.
- Create policies to manage third-party risk and ensure end-to-end accountability.
- Understand differences in norms/expectations across countries.

Map, plan and scope the AI project:
- Define the business case and perform a cost/benefit analysis in which trade-offs are considered in the design of AI systems. Why AI/ML?
- Identify and classify internal/external risks and contributing factors (prohibitive, major, moderate).
- Construct a probability/severity harms matrix and a risk mitigation hierarchy (a small harms-matrix sketch follows this list).
- Perform an algorithmic impact assessment, leveraging PIAs as a starting point and tailoring them to the AI process. Know when to perform one and whom to involve.
- Establish level of human involvement/oversight in AI decision making.
- Conduct a stakeholder engagement process that includes the following steps:
  • Evaluate stakeholder salience.
  • Include diversity of demographics, disciplines, experience, expertise and backgrounds.
  • Perform positionality exercise.
  • Determine level of engagement.
  • Establish engagement methods.
  • Identify AI actors during design, development, and deployment phases.
  • Create communication plans for regulators and consumers that reflect compliance/disclosure obligations for transparency and explainability (UI copy, FAQs, online documentation, model or system cards).

- Determine feasibility of optionality and redress.
- Chart data lineage and provenance, ensuring data is representative, accurate and unbiased. Use statistical sampling to identify data gaps.
- Solicit early and continuous feedback from those who may be most impacted by AI systems.
- Use test, evaluation, verification, validation (TEVV) process.
- Create a preliminary analysis report on risk factors and proportionate management.
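A minimal sketch of a probability/severity harms matrix mapped to the risk classifications named above (prohibitive, major, moderate); the scales, tier mapping and example harms are illustrative assumptions.

```python
PROBABILITY = ["rare", "possible", "likely"]
SEVERITY = ["minor", "moderate", "major"]

# Combined score 0-4 mapped to the risk classes from the syllabus.
TIER = ["moderate", "moderate", "moderate", "major", "prohibitive"]

def risk_tier(probability: str, severity: str) -> str:
    """Map a (probability, severity) pair to a risk classification."""
    return TIER[PROBABILITY.index(probability) + SEVERITY.index(severity)]

harms = [
    ("biased credit decision", "possible", "major"),
    ("chatbot error in product FAQ", "likely", "minor"),
]
for harm, p, s in harms:
    print(f"{harm}: {risk_tier(p, s)} risk; mitigate per the hierarchy")
```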

Test and validate the AI system during development:
- Evaluate the trustworthiness, validity, safety, security, privacy and fairness of the AI system using the following methods:
  • Use edge cases, unseen data, or potential malicious input to test the AI models.
  • Conduct repeatability assessments.
  • Complete model cards/fact sheets.
  • Create counterfactual explanations (CFEs); a small CFE sketch follows this list.
  • Conduct adversarial testing and threat modeling to identify security threats.
  • Refer to OECD catalogue of tools and metrics for trustworthy AI.
  • Establish multiple layers of mitigation to stop system errors or failures at different levels or modules of the AI system.
  • Understand trade-offs among mitigation strategies.

- Apply key concepts of privacy-preserving machine learning and use privacy-enhancing technologies and privacy-preserving machine learning techniques to help with privacy protection in AI/ML systems.
- Understand why AI systems fail. Examples include brittleness, hallucinations, embedded bias, catastrophic forgetting, uncertainty and false positives.
- Determine degree of remediability of adverse impacts.
- Conduct risk tracking to document how risks may change over time.
- Consider and select among different deployment strategies.
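A minimal counterfactual explanation sketch, assuming scikit-learn is installed; the toy loan model, single feature and search grid are illustrative assumptions, not a production CFE method.

```python
# Find the smallest change to one feature that flips a model's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" model trained on income (in $k) alone.
X = np.array([[20], [30], [40], [60], [80], [100]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = deny, 1 = approve
model = LogisticRegression().fit(X, y)

applicant_income = 35
print("decision at 35k:", model.predict([[applicant_income]])[0])  # expect 0

# Scan increasing incomes until the decision flips; the first flip is the CFE.
for income in range(applicant_income, 101):
    if model.predict([[income]])[0] == 1:
        print(f"Counterfactual: approval if income rises from 35k to {income}k")
        break
```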

Manage and monitor AI systems after deployment:
- Perform post-hoc testing to determine if AI system goals were achieved, while being aware of "automation bias."
- Prioritize, triage and respond to internal and external risks. Ensure processes are in place to deactivate or localize AI systems as necessary (e.g., due to regulatory requirements or performance issues).
- Continuously improve and maintain deployed systems by tuning and retraining with new data, human feedback, etc.
- Determine the need for challenger models to supplant the champion model.
- Version each model and connect it to the data sets it was trained with (a versioning sketch follows this list).
- Continuously monitor risks from third parties, including bad actors.
- Maintain and monitor communication plans and inform user when AI system updates its capabilities. Assess potential harms of publishing research derived from AI models.
- Conduct bug bashing and red teaming exercises.
- Forecast and reduce risks of secondary/unintended uses and downstream harm of AI models.
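A minimal sketch of the versioning practice above: each model version is recorded together with a fingerprint of the data it was trained on. The registry layout, hashing choice and model name are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(rows: list) -> str:
    """Stable hash of a training dataset for lineage records."""
    canonical = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

registry = []

def register(model_name: str, version: str, train_rows: list) -> None:
    # Record the model version alongside its training-data fingerprint.
    registry.append({
        "model": model_name,
        "version": version,
        "data_fingerprint": fingerprint(train_rows),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    })

register("credit-scorer", "1.0.0", [{"income": 40, "label": 0}])
register("credit-scorer", "1.1.0", [{"income": 40, "label": 0},
                                    {"income": 80, "label": 1}])
print(json.dumps(registry, indent=2))
```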

Contemplating Ongoing Issues and Concerns

Awareness of legal issues:
- How will a coherent tort liability framework be created to adapt to the unique circumstances of AI and allocate responsibility among developers, deployers and users?
- What are the challenges surrounding AI model and data licensing?
- Can we develop systems that respect IP rights?
Awareness of user concerns:
- How do we properly educate users about the functions and limitations of AI systems?
- How do we upskill and reskill the workforce to take full advantage of AI benefits?
- Can there be an opt-out for a non-AI alternative?
Awareness of AI auditing and accountability issues:
- How can we build a global profession of certified third-party auditors, with consistent frameworks and standards for them?
- What are the markers/indicators that determine when an AI system should be subject to enhanced accountability, such as third-party audits (e.g., automated decision-making, sensitive data, others)?
- How do we enable companies to remain productive using automated checks for AI governance and associated ethical issues, while adapting this automation quickly to the evolving standards and technology?

To ensure success in the IAPP Artificial Intelligence Governance Professional certification exam, we recommend an authorized training course, practice tests and hands-on experience to prepare for the IAPP Certified Artificial Intelligence Governance Professional (AIGP) exam.
