An audit of 78 production AI models across organizations in Argentina, Brazil, Colombia, and Mexico found that 64% exhibit at least one type of verifiable, unmanaged bias. The most frequent biases are: underrepresentation of rural populations in training data (51% of models with demographic data), gender disparity in credit approval rates (gaps of up to 23 percentage points in financial models), and hiring outcomes correlated with postal code as a socioeconomic proxy (38% of HR models). 82% of the audited organizations have no fairness metric implemented, and only 11% had conducted an AI impact assessment per ISO 42001 Annex B. The framework developed here establishes six fairness metrics adapted to the Latin American regulatory context, an audit protocol that can be applied without disrupting production, and sector-specific alert thresholds that enable continuous monitoring.
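The approval-rate gap cited above corresponds to a standard fairness metric often called demographic (or statistical) parity difference: the spread in positive-outcome rates across groups. The sketch below is illustrative only; the study's six metrics are not detailed in this summary, and the function name and synthetic data are assumptions.

```python
def demographic_parity_difference(outcomes, groups):
    """Return the gap between the highest and lowest positive-outcome
    (e.g. credit approval) rate across groups.

    outcomes: sequence of 0/1 decisions (1 = approved)
    groups:   sequence of group labels, same length as outcomes
    """
    counts = {}  # group -> (total, positives)
    for y, g in zip(outcomes, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + y)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Synthetic example reproducing a 23-percentage-point gap, the worst
# case reported for the audited financial models (hypothetical data).
outcomes = [1] * 60 + [0] * 40 + [1] * 37 + [0] * 63  # A: 60%, B: 37%
groups = ["A"] * 100 + ["B"] * 100
print(round(demographic_parity_difference(outcomes, groups), 2))  # 0.23
```

A continuous-monitoring setup would recompute this gap on rolling windows of production decisions and raise an alert when it crosses the sector-specific threshold.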
Normative and theoretical framework: ISO/IEC 42001:2023 (Annex B, AI impact assessment); ISO/IEC TR 24027:2021 (bias in AI systems); NIST AI RMF 1.0 (MAP function); IEEE 7010-2020; EU AI Act (Arts. 9 and 10).