Governance glossary
Authoritative definitions of key terms in AI governance, information security, quality management, and ISO certification. Sourced from international standards and field experience across 25,000+ organizations.
- AIMS (AI Management System) →
- Management system for artificial intelligence as defined by ISO/IEC 42001. Establishes policies, objectives, processes, and controls for responsible AI development, deployment, and use within an organization.
- Algorithmic Risk →
- The potential for negative outcomes arising from the design, deployment, or operation of algorithmic systems. Includes bias, opacity, discrimination, dependency, and impact on fundamental rights. Managed under ISO/IEC 23894.
- Annex SL →
- The high-level structure (HLS) shared by all ISO management system standards. Provides a common framework of clauses enabling integration of multiple standards.
- Audit Trail →
- A chronological record of activities, transactions, or decisions that provides documentary evidence of operations. Essential for traceability and accountability in governance systems.
- COSO ERM →
- Enterprise Risk Management framework by the Committee of Sponsoring Organizations. Integrates risk management with strategy and performance across five components: governance and culture; strategy and objective-setting; performance; review and revision; and information, communication, and reporting.
- EU AI Act (Regulation 2024/1689) →
- The European Union regulation establishing harmonized rules on artificial intelligence. Classifies AI systems by risk level and sets requirements for high-risk AI systems.
- Gap Analysis →
- Systematic assessment of the difference between the current state of an organization's management system and the requirements of a target standard. Produces a prioritized remediation roadmap.
- Governance with Evidence →
- Fernando Arrieta's operational approach: every governance intervention produces auditable records, verified controls, and traceable decisions. Methodology verified across 25,000+ organizations.
- Human Oversight →
- Mechanisms ensuring that humans maintain meaningful control over AI systems. Includes human review, override capabilities, alert systems, and documented escalation procedures.
- ISMS (Information Security Management System) →
- A systematic approach to managing sensitive information, defined by ISO/IEC 27001. Brings together people, processes, and technology under a risk management process to protect information assets.
- ISO/IEC 17021-1 →
- The standard specifying requirements for bodies providing audit and certification of management systems. Requires impartiality, including independence between consulting and certification activities.
- ISO/IEC 23894 →
- Standard providing guidance on risk management for organizations using AI. Extends ISO 31000 with AI-specific risk factors: bias, opacity, autonomy, and evolving behavior.
- ISO/IEC 27001 →
- International standard for information security management systems (ISMS). Specifies requirements for establishing, implementing, maintaining, and continually improving an information security management system.
- ISO/IEC 42001 →
- International standard for AI management systems (AIMS). Specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system.
- ISO 9001 →
- International standard for quality management systems. Focuses on customer satisfaction, process optimization, risk-based thinking, and continuous improvement through the PDCA cycle.
- ISO 37001 →
- International standard for anti-bribery management systems. Specifies requirements for establishing, implementing, and improving an anti-bribery program.
- Non-Conformity →
- Non-fulfillment of a requirement in a management system. Classified as major (systemic failure) or minor (isolated deviation). Each requires root cause analysis and corrective action.
- PDCA (Plan-Do-Check-Act) →
- The continuous improvement cycle that underpins all ISO management systems. Plan: establish objectives. Do: implement processes. Check: monitor and measure. Act: take corrective actions.
- RACI Matrix →
- A responsibility assignment matrix defining who is Responsible, Accountable, Consulted, and Informed for each governance process.
- Shadow AI →
- Unauthorized use of artificial intelligence tools within an organization: tools such as ChatGPT, Copilot, or image generators used without policy, access control, or governance. Research study INV-01 found Shadow AI in 73% of certified LATAM organizations.
- Traceability →
- The ability to trace the history, application, or location of data, decisions, or processes through recorded identifications. Without traceability, there is no accountability, no governance, and no real protection.
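The Audit Trail defined above can be sketched in a few lines of code. This is an illustrative sketch only: the class name, record fields, and hash-chaining scheme are assumptions chosen to show how timestamped, tamper-evident records support traceability and accountability.

```python
import hashlib
import json
import time

class AuditTrail:
    """Minimal append-only audit trail: each record is timestamped and
    chained to the previous record's hash so tampering is detectable.
    Illustrative sketch, not a production implementation."""

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def log(self, actor, action, detail):
        record = {
            "ts": time.time(),        # when it happened
            "actor": actor,           # who did it
            "action": action,         # what was done
            "detail": detail,         # supporting evidence
            "prev": self._last_hash,  # link to the previous record
        }
        payload = json.dumps(record, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        record["hash"] = self._last_hash
        self.records.append(record)
        return record

    def verify(self):
        """Recompute the hash chain; True only if no record was altered."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True

trail = AuditTrail()
trail.log("f.arrieta", "approve_policy", "AI use policy v2 approved")
trail.log("ml.lead", "override_model", "human override of a risk score")
print(trail.verify())  # True while no record has been altered
```

Chaining each record to the previous one means any retroactive edit breaks verification from that point onward, which is what makes the trail usable as documentary evidence.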
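The Gap Analysis entry above describes comparing an organization's current state against a target standard and producing a prioritized remediation roadmap. A minimal sketch, using hypothetical control identifiers (loosely modeled on ISO/IEC 27001 Annex A) and assumed risk weights:

```python
# Hypothetical control set for illustration; a real gap analysis works
# against the actual clauses and controls of the target standard.
required = {
    "A.5.1": "Policies for information security",
    "A.6.3": "Security awareness training",
    "A.8.2": "Privileged access rights",
    "A.8.15": "Logging",
}
implemented = {"A.5.1", "A.8.15"}  # controls already in place

# The gap is every required control with no implemented counterpart.
gaps = {cid: name for cid, name in required.items() if cid not in implemented}

# Assumed risk weights drive the remediation order (higher = remediate first).
risk_weight = {"A.8.2": 3, "A.6.3": 2}
roadmap = sorted(gaps, key=lambda cid: risk_weight.get(cid, 1), reverse=True)

print(roadmap)  # ['A.8.2', 'A.6.3']
```

The output is the prioritized roadmap the entry describes: missing controls ordered by how urgently they need remediation.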
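A RACI Matrix, as defined above, can be represented as a simple mapping and checked programmatically. The process names, roles, and validation rules below are illustrative assumptions; the "exactly one Accountable per process" rule is a common RACI convention.

```python
# Illustrative RACI matrix: processes and roles are hypothetical examples.
raci = {
    "AI risk assessment": {"CISO": "A", "Risk team": "R", "Legal": "C", "Board": "I"},
    "Model deployment":   {"CTO": "A", "ML team": "R", "CISO": "C", "Users": "I"},
}

def validate(matrix):
    """Check a common RACI convention: each process has exactly one
    Accountable (A) and at least one Responsible (R)."""
    problems = []
    for process, roles in matrix.items():
        codes = list(roles.values())
        if codes.count("A") != 1:
            problems.append(f"{process}: needs exactly one 'A'")
        if codes.count("R") < 1:
            problems.append(f"{process}: needs at least one 'R'")
    return problems

print(validate(raci))  # an empty list means the matrix is well-formed
```

Encoding the matrix this way makes the assignment auditable: a governance review can mechanically confirm that no process lacks a single accountable owner.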
Definitions are based on international standards (ISO/IEC 42001, 27001, 9001, 37001, 23894, 17021-1), COSO ERM, EU AI Act, and field experience across 25,000+ organizations. Certification is issued exclusively by accredited independent bodies.
Need help implementing governance in your organization?
Open channel for organizations seeking evidence-based governance frameworks.