In the current discourse on AI, journalists, marketers, and the burgeoning AI ethics community use the word 'bias' to refer to the deviation between a model's predictions and an idealized perfect world. Bias in this sense is measured not against existing data but against the imaginary data of a world we aspire to live in: a world free from prejudice, inequality, and discrimination. A world where AI algorithms treat every person as equal, regardless of race, gender, or background. Humanity is not there yet, but we want AI to be.
When AI predictions exhibit bias, this inevitably gives rise to indignation, fear, and often public outcry. Consider a scenario where a qualified job candidate is rejected merely because of their gender. When such injustice stems from human decision-making, it feels unfair. When the decision is produced by an AI algorithm, it feels unbearably unjust. The reason for this asymmetry is simple: humans can accept that other humans may be unfair, but they expect machines to be objective and show no bias. Paradoxically, this very propensity to trust the judgment of machines over that of humans is itself called automation bias, and it is just one entry in an uncomfortably long list of cognitive biases that characterize our species.