Algorithmic bias is the systematic tendency of an AI system to produce unfair or discriminatory results due to flawed data, design, or implementation.
Algorithmic bias occurs when an AI system reproduces, amplifies, or introduces discriminatory patterns. It can originate from unrepresentative training data, from proxy variables that correlate with protected attributes (for example, using zip code as a stand-in for ethnicity), or from design decisions that favor certain groups. ISO/IEC 42001 and ISO/IEC 23894 address its management as a critical AI risk.
Algorithmic bias is rarely intentional. Most of it emerges from historical data that reflects existing inequalities, which is why evaluating training data before deployment is critical.
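A pre-deployment data evaluation can start with something as simple as comparing each group's share of the dataset and its base rate of positive labels. The sketch below uses a toy dataset with a hypothetical "group" field; the records and field names are illustrative assumptions, not a real audit procedure.

```python
from collections import Counter

# Toy training records; "group" and "label" are hypothetical fields.
records = [
    {"group": "a", "label": 1}, {"group": "a", "label": 1},
    {"group": "a", "label": 0}, {"group": "b", "label": 0},
    {"group": "b", "label": 0}, {"group": "b", "label": 1},
]

sizes = Counter(r["group"] for r in records)
positives = Counter(r["group"] for r in records if r["label"] == 1)

# Report each group's share of the data and its positive-label rate.
# Large gaps here are an early warning that a model trained on this
# data may inherit historical disparities.
for g in sorted(sizes):
    share = sizes[g] / len(records)
    pos_rate = positives[g] / sizes[g]
    print(f"group {g}: {share:.0%} of data, positive-label rate {pos_rate:.2f}")
```

Equal group sizes with very different positive-label rates, or a group that is a tiny fraction of the data, are both signals worth investigating before training.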
Bias is detected through fairness metrics (demographic parity, equal opportunity, calibration), independent third-party model audits, and monitoring of outcomes in production.
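Two of the metrics named above can be sketched in a few lines: demographic parity compares positive-prediction rates across groups, and equal opportunity compares true-positive rates. The data below is synthetic and the two-group setup is a simplifying assumption; this is an illustration of the metrics, not a production fairness audit.

```python
def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Absolute difference in true-positive rates (recall) between two groups."""
    rates = {}
    for g in set(groups):
        tp = sum(1 for t, p, gg in zip(y_true, y_pred, groups)
                 if gg == g and t == 1 and p == 1)
        pos = sum(1 for t, gg in zip(y_true, groups) if gg == g and t == 1)
        rates[g] = tp / pos
    a, b = rates.values()
    return abs(a - b)

# Synthetic example: group "a" receives positive predictions more often.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(y_pred, groups))        # 0.5 gap in positive rates
print(round(equal_opportunity_gap(y_true, y_pred, groups), 2))  # 0.17 gap in recall
```

A gap of zero on a metric means the groups are treated identically by that criterion; in practice the metrics conflict, so audits report several of them rather than optimizing one.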
Need an assessment in this area?