Fish- Blog Post 3
In “Are Algorithms Value-Free?” Gabrielle Johnson explains why algorithms, which we are taught are the most objective of decision-makers, are in fact never value-free. To assume they are is not only dangerous; it perpetuates the injustices and inequalities that dominate society. She points specifically to COMPAS, a program judges use to assess a defendant’s risk of recidivism (Johnson 14). The problem with the program is that “the program was almost twice as likely to falsely label black defendants as future criminals than white defendants, while often mislabeling white defendants as low risk at a higher rate than black defendants” (Johnson 14). While Johnson accepts that there can be “no algorithm for building algorithms,” she rightly rejects the notion that we can avoid inserting our values into these programs.
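To make the quoted disparity concrete, here is a minimal sketch, using made-up counts (not the actual COMPAS figures), of how a risk tool’s false positive and false negative rates can diverge sharply between two groups even while its overall accuracy looks similar for both:

```python
# Hypothetical confusion-matrix counts for two groups of 1,000 defendants each.
# fp = labeled high risk but did not reoffend; fn = labeled low risk but did.
# These numbers are illustrative only, not drawn from the COMPAS data.
groups = {
    "A": {"fp": 180, "tn": 420, "fn": 60, "tp": 340},
    "B": {"fp": 90, "tn": 510, "fn": 130, "tp": 270},
}

def false_positive_rate(c):
    # Share of people who did NOT reoffend but were labeled high risk.
    return c["fp"] / (c["fp"] + c["tn"])

def false_negative_rate(c):
    # Share of people who DID reoffend but were labeled low risk.
    return c["fn"] / (c["fn"] + c["tp"])

for name, c in groups.items():
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    print(name,
          f"FPR={false_positive_rate(c):.2f}",
          f"FNR={false_negative_rate(c):.2f}",
          f"accuracy={accuracy:.2f}")
```

With these invented counts, group A’s false positive rate (0.30) is twice group B’s (0.15), while group B is mislabeled low risk far more often, even though both groups see roughly 76–78% accuracy. The point is that “accuracy” alone hides the value-laden choice of which error rates to equalize.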
This recalls the ideas Cheryl Harris presents in “Whiteness as Property.” In her analysis of how whiteness as property was protected under the law, she explains how pseudo-sciences like eugenics became “embraced in legal doctrine as ‘objective fact’” (Harris 1738). The amount of Black or white blood in a person’s body would determine their legal status, because science was widely accepted as objective not only by the general population but by the law itself. Even today, we believe the science we are taught far more often than we question the biases inherent in it.
The same can be said, especially today, of algorithms. The average person, and even the government itself, accepts what it is told about algorithms more often than it questions it. Johnson thus draws attention to a problem that few would think to see as a problem in the first place. This reveals the danger inherent in the “objective truths” that most of the world is taught, from birth, to treat as trustworthy, neutral, and fair. Through Harris’s account of whiteness as property, we can see exactly how far these “objective” doctrines can go in entrenching inequality. With programs like COMPAS, we can see how, even unintentionally, the same pattern is reemerging, with “objective, value-free” programs serving as the vehicle for inequality. Johnson therefore proposes the beginnings of a solution. She cites Rudner, who argues that by acknowledging that “ethical values have a legitimate and necessary role to play in guiding scientific inference,” we can at least be held accountable for the values we build into algorithms (Johnson 12). There is no way to remove our biases and values, but knowing which values are being built into an algorithm at least lets us subject those choices to moral scrutiny.