ISO/IEC 42001:2023 establishes the requirements for AI management systems (AIMS). Readiness preparation covers policy design, risk assessment, human oversight, and certification audit preparation. Certification is issued by an independent accredited body.

"ISO 42001 is the first standard that turns AI governance into a certifiable management system."Fernando Arrieta — Lead Auditor ISO/IEC 42001
Software providers, SaaS platforms, and AI startups that need to demonstrate governance to stakeholders and regulators.
Banks, insurers, healthcare providers, and government bodies that integrate AI models into operational decisions and need auditable control.
Organizations subject to the EU AI Act or operating with European partners that require compliance.
Survey of all AI systems in use: purpose, data, vendors, owners, and risk level.
Evaluation of the current state vs. ISO 42001 requirements and the EU AI Act.
AI policy, roles (RACI), human oversight, algorithmic risk management, and continuous improvement.
Operational controls, monitoring, incident management, vendor evaluation, and documentation.
Internal audit, management review, and complete preparation: findings closure and documentation ready for the independent certification body.
Registry of all AI systems: purpose, input data, vendors, owners, and risk classification.
Per-system assessment: bias, opacity, vendor dependency, data quality, and regulatory exposure.
AI policy, RACI roles, human oversight procedures, and documented transparency criteria.
ISO 42001 controls implementation tailored to your context: data, development, deployment, and monitoring.
Evolution roadmap: from the current state to full conformity, with measurable milestones and realistic timelines.
Internal audit, management review, and audit simulation. Documentation ready for the independent certification body.
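The registry and per-system assessment deliverables above can be sketched as a simple data model. This is purely illustrative: the field names, risk tiers, and helper function below are assumptions for the sketch, not terminology prescribed by ISO/IEC 42001 or the EU AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical risk tiers loosely inspired by the EU AI Act's risk-based
# approach; the standard itself does not mandate these exact labels.
class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (all field names are illustrative)."""
    name: str
    purpose: str
    input_data: list[str]
    vendor: str
    owner: str                    # accountable role, e.g. from the RACI matrix
    risk_level: RiskLevel
    human_oversight: bool = True  # is a documented human-oversight control in place?
    findings: list[str] = field(default_factory=list)

def high_risk_without_oversight(registry: list[AISystemRecord]) -> list[AISystemRecord]:
    """Flag entries likely to need attention before an audit:
    high-risk systems lacking a documented human-oversight control."""
    return [r for r in registry
            if r.risk_level is RiskLevel.HIGH and not r.human_oversight]
```

In practice this kind of filter supports the gap analysis step: it surfaces undeclared or under-controlled systems (including Shadow AI uses) so they can be brought under the AIMS before the certification audit.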
ISO/IEC 42001:2023 is the first international standard for AI Management Systems (AIMS). It defines the requirements for organizations to use, develop, and provide AI responsibly, with controls, human oversight, and continuous improvement.
Any organization that uses, develops, or contracts AI systems and needs to demonstrate control to regulators, stakeholders, or contracting parties. It is especially relevant for companies subject to the EU AI Act (EU 2024/1689).
It depends on the number of AI systems and the organization's maturity. On average, implementation takes between 3 and 6 months. The preliminary diagnosis is delivered in 72 hours.
It is not mandatory, but it is recommended. ISO 27001 covers the security of the information that AI systems process. Many organizations implement both in an integrated manner to optimize effort.
Regulation (EU) 2024/1689 requires documentation, risk classification, and human oversight for AI systems. ISO 42001 provides the management framework to meet these requirements in a structured and auditable way.
Shadow AI is the unauthorized use of AI tools within the organization (for example, employees using ChatGPT without policies). ISO 42001 includes controls to inventory, assess, and govern all AI uses, including undeclared ones.
The cost depends on the scope, the number of AI systems, and the organization's maturity. As a reference: the preliminary diagnosis has an accessible fixed cost; the full implementation is quoted in stages based on complexity.
If your organization is evaluating ISO 42001 readiness, this is the channel to discuss scope and viability. All inquiries are handled under confidentiality.
The consulting and implementation services described on this site are provided independently. Certification audits and decisions are the exclusive responsibility of accredited certification bodies. In accordance with ISO/IEC 17021-1 §5.2, impartiality restrictions and cooling-off periods apply.