Miller - Blog Post 3
In her article, “Are Algorithms Value-Free?” Gabrielle Johnson argues that algorithmic decision-making is far less objective than is generally assumed. She describes “Dragnet objectivity” in the context of machine learning: the (incorrect) assumption that algorithmic decision-making is objective because it learns from raw data and is thus devoid of the flaws inherent to human decision-making, such as “personal speculation or emotional interest” (3). Unfortunately, raw data alone underdetermines any conclusion, so inductive conclusions cannot be drawn from “just the facts.” As Johnson explains, underdetermination can only be overcome by adopting a set of assumptions, which she dubs “canons of inductive inference” (5).
Throughout her article, Johnson draws compelling parallels between scientific inquiry and machine learning. Another parallel that may be pertinent involves statutory interpretation. Johnson’s description of the role of canons resembles the choice a judge faces among different modes of constitutional interpretation. As Johnson describes, “Induction requires the adoption of canons, but crucially, canons are not one-size-fits-all phenomena. There are many possible bridges one might adopt to traverse the gap between evidence and theory, and there seem to be no a priori grounds for preferring some bridges over others” (5). In machine learning, computer programmers bear the responsibility of deciding which canons of inference to adopt, which is inherently a value-laden decision. In the same vein, judges must decide which canons of statutory interpretation they subscribe to, such as textualism, originalism, and reliance on judicial precedent.
Subscribers to textualism often claim that it is the most objective form of statutory interpretation and would likely insist that it is the most “value-free” approach. Applying Johnson’s logic, this claim seems false for two reasons. First, even deciding which mode of interpretation to subscribe to is a value-laden decision. In deciding that the plain meaning of the text (textualism) matters more than the meaning of the Constitution as understood by the Founders (originalism), one is inherently making a value-laden choice. As Johnson describes, “proponents of the value-free ideal neglect to recognize how their choice to adopt some canons of inference over others is itself a value-laden judgement” (11). Second, there is no single interpretation of the text. Even looking to the “plain meaning of the text,” one can arrive at a variety of different interpretations. To see this in practice, one need only look to the national debate over the meaning of the 2nd Amendment, which surfaces in cases such as D.C. v. Heller. Just as in machine learning, when the data are limited (here, the text of the Constitution), underdetermination is inevitable, and value-laden interpretation seems rather unavoidable.
The judicial precedent mode of statutory interpretation also suggests an interesting parallel to Hume’s “principle of uniformity of nature,” which draws upon past observed instances when considering unobserved ones. Similarly, in the judicial setting, stare decisis involves relying on past precedent to inform judgements. Relying on precedent is not only a value-laden decision but also a potentially problematic one. Johnson’s critique of Hume’s “principle of uniformity of nature” is certainly applicable to stare decisis. She writes that “if the warrant for our induction is grounded in the uniformity of a pattern in the world, and if the uniformity of that pattern is predicated on oppressive mechanisms of social reproduction, then the warrant for induction is founded on oppression.” If the judicial precedent mode of statutory interpretation enshrines an oppressive standard, as in Dred Scott v. Sandford and Korematsu v. United States, such a canon is both value-laden and deeply problematic.