Systematic and repeatable errors in an AI system's outputs that create unfair outcomes for particular groups, based on race, gender, age, disability, or other characteristics. Algorithmic bias typically originates in training data that underrepresents or misrepresents certain groups, but it can also emerge from problem framing, feature selection, or optimization objectives. Regulators in the US (EEOC, FTC), the EU (AI Act), and several US states (Colorado, Illinois) have flagged algorithmic bias as a primary AI risk. Testing for disparate impact across demographic groups before deployment is the standard mitigation.
Why this matters for your team
Algorithmic bias has resulted in regulatory enforcement actions in hiring, credit, and housing. Before deploying any AI system that makes or influences decisions about people, run disparate impact tests across the demographic groups relevant to your use case.
An AI loan approval system approves applications from Black applicants at a 40% lower rate than applications from white applicants with the same financial profile. This is algorithmic bias, most likely caused by training on historical loan data that reflects past discriminatory lending practices.
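To make the pre-deployment check concrete, here is a minimal sketch of a disparate impact test using the four-fifths (80%) rule. The pandas DataFrame, column names, group labels, and approval rates below are illustrative assumptions that mirror the loan example, not details from any specific system or regulation.

```python
# Minimal disparate impact check using the four-fifths (80%) rule.
# Column names and group labels are illustrative, not from the source.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str,
                           outcome_col: str, reference_group: str) -> pd.Series:
    """Selection rate of each group divided by the reference group's rate.

    A ratio below 0.8 is the conventional four-fifths-rule flag for
    potential disparate impact. It is a screening heuristic, not a
    legal determination.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Toy data mirroring the loan example: group B is approved far less often
# than the reference group A despite being scored by the same model.
df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 60 + [0] * 40      # group A: 60% approval
              + [1] * 36 + [0] * 64,     # group B: 36% approval (40% lower)
})

ratios = disparate_impact_ratio(df, "group", "approved", reference_group="A")
print(ratios)                                  # B / A = 0.6, below the 0.8 threshold
flagged = ratios[ratios < 0.8].index.tolist()
print("Groups flagged for review:", flagged)
```

In practice you would run this ratio for every demographic group relevant to your use case and investigate any group that falls below the threshold before the system goes live.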