Spangler Blog Post 3
In her paper Are Algorithms Value-Free?, Johnson argues
that our idealization of objective truth, and its association with algorithms
and machine learning, is misguided and potentially dangerous. Early in the paper, she draws an important distinction between inductive and deductive reasoning. Deductive
reasoning, as she defines it, ensures truth “because the conclusions of deductive
arguments are always in some sense contained in their premises,” (4). Inductive
reasoning is the extrapolation of premises to reveal truth upon certain assumptions,
the primary assumption being “the world will continue to remain uniform and
exhibit patterns we’ve seen in the past and that are encoded in the premises,” (4).
She claims that this is an inherent weakness of inductive reasoning as it allows
room for error in a way that deductive reasoning does not. She goes on to
discuss the ways in which current algorithms utilize inductive reasoning, and
how these algorithms are susceptible to the same flaws as the “social and
ethical canons,” (8) of the scientific process they attempt to emulate.
Claiming that algorithms rest on value-free ideals places the conclusions
drawn from them on an objectivity pedestal.
Objective truth has been, and will continue to be, sought by people
throughout human history, but time and time again it appears to elude us. I
would argue that human existence is an innately subjective one, to the
extent that the only kind of decision we can make is an inductive one. Johnson
claims that “there’s nothing logically at odds with the world becoming
drastically different,” (4). However, when she refers to ‘the world,’ I believe
she means the human world, because it is through that world that we are fed
information and upon it that we rationalize our decisions. The physical
composition of reality as we know it does not shift drastically. Our perception
of our environment changes as we receive new knowledge, but the objective
‘world’ remains the same. Therefore, we cannot hope to obtain
a picture of the objective world that will satisfy everyone.
If algorithms are not seen as heralds of objectivity, the
moral weight that their inductive predictions carry is lowered to the same
level as the programmer’s. Johnson uses COMPAS as an example of algorithms perpetuating
the inductive biases of human beings. Personally, I think the attempt to apply
this algorithmic objectivity to an inherently moral question (whether someone
is likely to re-offend) is counterproductive to the goal of justice.
The existence of a statistic that claims to be objective will affect the
decision-making of those who encounter it, even if they claim not to take
it into account. Thus, the racial prejudices of
society that leak into the algorithm are unavoidable in the same way that those
prejudices leak into individuals.
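A toy sketch may make this leakage concrete. The data and model below are my own illustration, not Johnson's example or the actual COMPAS system: if one group's behavior is recorded more heavily (say, through heavier policing), a model that simply learns historical base rates will reproduce that skew in its "objective" scores.

```python
# Hypothetical data, for illustration only: (group, re-arrested) records.
# Group "A" was policed twice as heavily, so more arrests were recorded,
# even if underlying behavior were identical across groups.
history = [("A", True)] * 40 + [("A", False)] * 60 + \
          [("B", True)] * 20 + [("B", False)] * 80

def learned_risk(group):
    """Inductive step: predict risk as the historical re-arrest rate."""
    records = [rearrested for g, rearrested in history if g == group]
    return sum(records) / len(records)

print(learned_risk("A"))  # 0.4 -- inherits the heavier policing of group A
print(learned_risk("B"))  # 0.2 -- looks "safer" only because of the data
```

Nothing in the arithmetic is prejudiced; the skew enters entirely through what the data records, which is the sense in which societal bias "leaks into" the algorithm.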
Inductive risk, as it is assessed in the paper, cannot be
entirely done away with. However, I do not believe it undermines inductive
reasoning entirely. If the assumptions upon which an inductive decision is made
can be cataloged, we can retroactively attempt to understand the conclusion
through the lens of its assumptions, and not as absolute truth. In doing so,
we can improve inductive reasoning such that it accounts for all truths. For
example, if the COMPAS system took in the assumption that “it’s already disproportionately
black in America,” (15) among other biases, then it might yield a more accurate
result. Alternatively, if judges recognized that this assumption is absent from
COMPAS’s inductive reasoning, they could allow for the possibility that the data
ignores or fails to account for known truths, and actively factor that into
their own inductive reasoning.
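One way to picture this cataloging idea is to have a prediction carry its assumptions with it, so a reader weighs the conclusion through that lens rather than as absolute truth. This is a hypothetical sketch of my own; the class, function, and assumption strings are illustrative, not part of any real system.

```python
# Sketch: bundle an inductive prediction with the assumptions it rests on.
from dataclasses import dataclass, field

@dataclass
class InductivePrediction:
    score: float
    assumptions: list = field(default_factory=list)

def predict_risk(historical_rate):
    """Return a score together with the catalog of assumptions behind it."""
    return InductivePrediction(
        score=historical_rate,
        assumptions=[
            "future patterns resemble past patterns",
            "arrest records accurately proxy offending",
            "policing intensity was uniform across groups",
        ],
    )

p = predict_risk(0.4)
print(p.score)           # 0.4
for a in p.assumptions:  # the catalog a judge could interrogate
    print("-", a)
```

A judge handed such an object could reject or discount the score precisely where its cataloged assumptions fail, which is the retroactive understanding the paragraph above describes.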