Algorithmic Bias: Building Trust in AI

By Darwin Li 

Corporations are increasingly turning to Artificial Intelligence (AI) systems and machine learning models to automate basic decision-making. To do this, computers follow a strict set of rules, or an algorithm, in their calculations. While these algorithms have widespread benefits, research has revealed troubling harms: algorithmic decision-making has been found to reflect human biases. This issue, known as algorithmic bias, has become pervasive in society, undermining equal opportunity and exacerbating oppression.

Three Examples of Algorithmic Bias

  1. Hiring Programs

Many hiring programs have been shown to accept applicants from different groups at disproportionate rates. Societal factors influence the algorithm's decision-making and cause the unjustified labeling of groups of people as favorable or unfavorable candidates. Minorities are often significantly disadvantaged, as algorithms tend to misinterpret the historical scarcity of candidates from a specific ethnic group or race as a sign that such candidates are unfavorable.

The most notable example comes from Amazon, where in 2015 the company's recruiting algorithm was found to be biased against women. The algorithm was subsequently scrapped, and the company lessened its reliance on AI in hiring.

  2. Media Sources

In media, algorithms prioritize ideas inconsistently, producing unjust patterns in algorithmically generated trending and recommended pages: certain ideas are amplified far more than others. Since media platforms are designed to maximize viewer attention, their algorithms can redirect viewers in problematic ways. Content filtering, in particular, has a common tendency to over-recommend polarizing topics.

Researchers at the Illinois Institute of Technology conducted a study of how news-recommendation algorithms create filter bubbles that influence readers' political views. Among its findings: readers with more extreme views were shown less diverse content.

  3. Recidivism Predictions

Perhaps the most notable example of algorithmic bias is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used to predict the likelihood that a criminal will reoffend. Its choices of data and models, paired with poor implementation, caused the algorithm to produce nearly twice as many false positives for Black offenders (45%) as for their white counterparts (23%). The online news source ProPublica claimed "it was no better than random, untrained people on the internet."


The root cause of algorithmic bias lies in the machine learning process, where discriminatory assumptions skew results in favor of a certain group. In most cases the influence of these assumptions is unintentional; its impact, however, is far from negligible. In general, two steps are needed to combat algorithmic bias: detection and mitigation. Cases of algorithmic bias must first be identified, by finding results that unjustly favor one group over another. Detection strategies include simulating algorithms with a broad range of users and looking for abnormal patterns in the results. Mitigation, in turn, works to minimize bias by ensuring the chosen data and models are representative and well vetted, for example by implementing a form of data governance.
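The detection step described above can be sketched as a simple per-group audit: compute an error metric, such as the false positive rate, separately for each group and flag large gaps. This is a minimal illustration; the groups and data below are hypothetical, not drawn from any real system.

```python
def false_positive_rate(predictions, outcomes):
    """Fraction of people who did NOT have the outcome (0)
    but were still flagged positive (1) by the model."""
    flagged = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flagged:
        return 0.0
    return sum(flagged) / len(flagged)

# Hypothetical model predictions (1 = flagged high risk) and
# actual outcomes (1 = reoffended) for two demographic groups.
group_a = {"pred": [1, 1, 0, 1, 0, 0], "actual": [0, 1, 0, 0, 0, 1]}
group_b = {"pred": [0, 1, 0, 0, 0, 1], "actual": [0, 1, 0, 0, 0, 1]}

fpr_a = false_positive_rate(group_a["pred"], group_a["actual"])
fpr_b = false_positive_rate(group_b["pred"], group_b["actual"])

# A large gap between groups is a red flag worth auditing further.
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

Here Group A is flagged at a much higher rate than Group B despite identical outcomes, which is exactly the kind of disparity the COMPAS analysis surfaced.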

Technology and AI are becoming an ever larger part of our society; it's important that we take a stand to ensure our future is fair and trustworthy for all.