Kim blog post 3

In "Are Algorithms Value-Free?" Johnson argues that the inductive processes underlying machine learning algorithms rest on assumptions, and that their use is therefore value-laden. She distinguishes induction from deduction: the conclusions of deductive arguments are "contained in their premises" and must be true if their "premises are true." Inductive conclusions, by contrast, rest only on "probable support" and so always leave room for error, no matter how small that probability may be. She focuses on the particular problems that arise when people try to draw conclusions from limited data.

To elaborate, she presents "the argument from demarcation," drawing on the contrast Longino makes between "simplicity and ontological heterogeneity." Her example is Ambien: researchers failed to account for the difference between how men and women metabolize the drug, and women who drove after taking what was, for them, effectively an overdose suffered the consequences. She shows how an "allegiance to simplicity" that prioritizes men results in "socio-political values on which hierarchical relations are formed."


Another argument she uses is "the argument from inductive risk." She explains how two hypothetical scenarios demand different thresholds of confidence: producing defective buckles for seat belts versus pant belts. The risk involved in seat belt buckles is clearly higher, so a higher degree of confidence is required. Similarly, programmers of machine learning algorithms not only bear the risk of getting things wrong; the weight of their wrongness is compounded by the ripple effects that being wrong "will have in communities in which their judgment is regarded as expertise." A real-life instance is COMPAS, a machine learning program whose errors take the form of incorrect inductions about recidivism risk. She argues that this could influence a judge's decision on "who should be granted bail" or how "they should allocate resources intended to prevent recidivism."
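The inductive risk point can be made concrete with a toy decision rule. In the sketch below (all costs are hypothetical numbers of my own, not from Johnson's paper), the confidence threshold for acting on a prediction falls directly out of the costs assigned to each kind of error, so choosing those costs is itself a value judgment:

```python
# Toy illustration (hypothetical costs): the confidence needed before
# flagging a buckle as defective depends on how we price each error.

def decision_threshold(cost_false_alarm: float, cost_missed_defect: float) -> float:
    """Probability-of-defect above which flagging minimizes expected cost.

    Flag when p * cost_missed_defect > (1 - p) * cost_false_alarm,
    i.e. when p > cost_false_alarm / (cost_false_alarm + cost_missed_defect).
    """
    return cost_false_alarm / (cost_false_alarm + cost_missed_defect)

# Pant belt: a missed defect is a minor nuisance, so we tolerate more misses.
pant_belt = decision_threshold(cost_false_alarm=1.0, cost_missed_defect=3.0)

# Seat belt: a missed defect can be fatal, so even weak evidence of a
# defect should trigger a flag.
seat_belt = decision_threshold(cost_false_alarm=1.0, cost_missed_defect=999.0)

print(f"pant belt flag threshold: {pant_belt:.3f}")   # 0.250
print(f"seat belt flag threshold: {seat_belt:.3f}")   # 0.001
```

The math here is neutral; the value judgment is hidden in the cost numbers, which is exactly where Johnson locates the problem for systems like COMPAS.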


Her argument makes sense, but it seems to point to a flaw in the current justice system more broadly. If we conclude that algorithms, which reflect a programmer's decisions, cannot be value-free, how can we conclude that a judge's ruling is value-free either? Both a judge and an algorithm carry the risk of being value-laden. Could multiple algorithms, built by multiple programmers, be less biased than a single judge? Based on data from many judges and algorithms, is it possible, hypothetically, that an algorithm is less value-laden than a judge?
