
An audit of 78 AI models in production across organizations in Argentina, Brazil, Colombia, and Mexico found that 64% exhibit at least one verifiable, unmanaged bias. The most frequent biases are underrepresentation of rural populations in training data (51% of models with demographic data), gender disparity in credit approval rates (up to 23 percentage points of difference in financial models), and hiring outcomes correlated with postal code as a socioeconomic proxy (38% of HR models). 82% of the audited organizations have no active fairness metric, and only 11% had conducted an AI impact assessment per ISO/IEC 42001 Annex B. The resulting framework establishes six fairness metrics adapted to the Latin American regulatory context, an audit protocol that can be applied without disrupting production, and sector-specific alert thresholds that enable continuous monitoring.
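As an illustration of the approval-rate disparity reported above, here is a minimal sketch of a demographic parity difference expressed in percentage points. The column names (`gender`, `approved`) and the data are hypothetical, not taken from the audit.

```python
import pandas as pd

def approval_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap in approval rates between the best- and worst-served groups,
    in percentage points. Illustrative helper, not the report's metric code."""
    rates = df.groupby(group_col)[outcome_col].mean()  # approval rate per group
    return float((rates.max() - rates.min()) * 100)    # spread in percentage points

# Synthetic example rows (illustrative only, not audit data)
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [0,    1,   0,   1,   1,   0,   1,   1],
})
print(f"Approval-rate gap: {approval_rate_gap(df, 'gender', 'approved'):.1f} pp")
```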

Normative framework
ISO/IEC 42001:2023 (Annex B, AI impact assessment); ISO/IEC TR 24027:2021 (bias in AI systems); NIST AI RMF 1.0 (MAP function); IEEE 7010-2020; EU AI Act (Arts. 9 and 10).
Research protocol
Technical evaluation of 78 AI models in production, with fairness analysis and differential bias metrics.
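One way such an audit can run without disrupting production is to score exported prediction logs offline, never touching the serving path. A minimal sketch, assuming a hypothetical log schema with `group`, `prediction`, and `label` columns, computing an equal-opportunity gap (difference in true-positive rates between groups):

```python
import pandas as pd

def tpr_by_group(log: pd.DataFrame, group: str, pred: str, label: str) -> pd.Series:
    """True-positive rate per group, computed from logged predictions only."""
    positives = log[log[label] == 1]              # ground-truth positives
    return positives.groupby(group)[pred].mean()  # share correctly flagged

# In practice `log` would be an exported prediction log; synthetic rows here.
log = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,    0,   1,   1,   1,   0],
    "label":      [1,    1,   0,   1,   1,   1],
})
tprs = tpr_by_group(log, "group", "prediction", "label")
print(f"Equal-opportunity gap: {(tprs.max() - tprs.min()) * 100:.1f} pp")
```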
This material is shared upon request. Email us and we'll reply with the report and its annexes.
LATAM fairness framework — 6 metrics
Bias audit protocol for production systems
Sector alert threshold matrix (sketched below)
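To make the threshold matrix concrete, here is a minimal sketch of how continuous monitoring could consume it. The sectors, metric names, and threshold values are placeholders, not the report's actual matrix.

```python
# Placeholder threshold matrix: sector -> metric -> alert threshold in pp.
# Values are illustrative, not the figures from the report.
THRESHOLDS = {
    "finance": {"approval_gap_pp": 5.0, "tpr_gap_pp": 5.0},
    "hr":      {"selection_gap_pp": 4.0},
}

def check_alerts(sector: str, measured: dict) -> list:
    """Return alert messages for metrics that exceed the sector's thresholds."""
    limits = THRESHOLDS[sector]
    return [
        f"{name} = {value:.1f} pp exceeds threshold {limits[name]:.1f} pp"
        for name, value in measured.items()
        if name in limits and value > limits[name]
    ]

# A 23 pp approval gap (the worst case found in the audit) triggers an alert.
print(check_alerts("finance", {"approval_gap_pp": 23.0, "tpr_gap_pp": 3.2}))
```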
Schedule an assessment and we'll turn data into concrete action.