ivashkiv post 3

Johnson argues that machine learning is not value-free but is actually subjective. First, she points out that algorithms rely on inductive logic: they use raw sets of data to formulate predictions. Hume's famous problem with inductive reasoning was that he could never be one hundred percent sure that the sun would rise the next day. However, Johnson shows that this understanding, which was my understanding before reading this paper, is misinformed, as "The justification of induction is contingent—it depends on the world being a certain way" (4). She builds on this point to assert that supposedly value-free algorithms are actually laden with values that only apply to seeing the world one way. She draws on the feminist theorist Longino to show that it is not possible to differentiate value-free epistemology from value-laden judgments, as a value-free judgment can only be justified inductively (10). Longino argues that ethical considerations should inform which set of values we prescribe; she wants to use "empirical adequacy, novelty, ontological heterogeneity, complexity of interaction, applicability to human needs, and diffusion of power" (8). To cement this point, Johnson invokes Rudner to show that ethical considerations are the most relevant ones.

This reminded me of Harris's discussion of Affirmative Action. Harris wanted to implement a model of distributive justice that relied on ethical considerations to adjust for implicit bias and discrimination. Harris observed that the historical tendency to protect whiteness made the framework for employment or college admissions unfair, as it did not account for white privilege. In this way, Harris's argument showed that current practices reflected only one way of looking at the world. Similarly, an epistemological value set considers the world in only one way. Johnson's general fear is that a supposedly value-free system won't be able to adjust as the world changes, and she ultimately argues for embracing a value-laden algorithm.

In one way, algorithms like COMPAS represent a value-laden development, because COMPAS attempts to remove human bias from its decisions. Of course, this claim is controversial: the COMPAS algorithm is not public, and ProPublica has reported on its shortcomings. However, COMPAS and programs like it are argued to be better predictive tools than human judges. Johnson mentions this briefly but concludes that "it is impossible to settle these disputes here" (15). I remember hearing about COMPAS, and the general consensus was that it predicted recidivism better than human judges did. I did a quick Google search, and the results are varied. However, if COMPAS is a better predictive tool, and it makes intuitive sense that an algorithm will eventually be able to outperform judges, then these algorithms represent a step forward in a utilitarian sense.

In the current state of affairs, judges make many mistakes when predicting recidivism, with a specific tendency to discriminate against black people. We would like to think that these mistakes are not intentional but are the result of implicit bias. I argue that this distinction does not matter once it is known that judges make these mistakes. An algorithm should eventually be a better predictor than human judges, though it will still make mistakes. This relates directly to Affirmative Action, which, like COMPAS, attempts to adjust for human bias in order to make better decisions. Affirmative Action practices are not perfect. However, Affirmative Action, like COMPAS, represents a progressive step toward a better distribution of justice. In this way, COMPAS should be considered a step forward from the problems of a supposedly value-free judgment system.



Comments

Paul Hurley said…
Forgot to include you as one of the people who invokes parallels with Harris in cool and interesting ways.
