Huang - Blog Post 3
In her paper “Are Algorithms Value-Free? Feminist Theoretical Virtues in Machine Learning,” Gabbrielle M. Johnson argues that machine learning is necessarily value-laden because it operates via inductive reasoning, which is itself value-laden. As such, the presupposed ideal of value-free decision making should not be the standard we aim for in order for machine learning to make objective decisions. She argues that all inductive reasoning is value-laden because, in order to draw conclusions, it requires “canons of inductive inference”: non-evidentiary ways of limiting the hypothesis space to prevent underdetermination. A hypothesis can never be proven 100 percent true, so researchers must make a value judgment about how likely a hypothesis is to be true before they can conclude that the evidence is sufficient.
Johnson raises a counter-argument against her own case. When researchers test hypotheses, they most often “assign probabilities to hypotheses with respect to a fixed set of evidence”; they do not actually decide what to do with that confidence interval or significance test (Johnson 16). In response, Johnson cites Rudner, who claims that assuming a confidence level still runs the risk of hypothesizing incorrectly, because inductive reasoning inherently carries risk. However, Johnson does not adequately respond to this argument, for two reasons.
When researchers conclude with a given confidence interval, their conclusion merely reflects how certain they are that their hypothesis is true. (When researchers report a 95 percent confidence interval, they mean that if they were to repeat the experiment 100 times, roughly 95 of the resulting intervals would contain the true value of the parameter.) A researcher’s conclusion, in and of itself, does not assume any risk: absent any subsequent action, it is merely a statement of what the researcher judges likely to be true. First, even though a researcher must make a value judgment about the confidence level at which to conclude, the mathematics used to calculate that conclusion is objective. The degrees of freedom may lie beyond the presented facts, but they are derived from a simple calculation. Second, even if a confidence interval does necessarily stem from a scientist’s value-laden decisions, it is still merely a reflection of the scientist’s thinking and does not inherently carry inductive risk. Research conclusions are significant only insofar as they influence decisions. Ultimately, decision-makers must still choose whether or not to act on a given hypothesis. The scientist’s responsibility ends where the conclusion is drawn, and the decision-maker’s responsibility begins immediately after. This raises further questions: to what extent is someone responsible for influencing decisions? Can anyone be forced to believe something and act on it?
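The frequentist reading of a confidence interval mentioned above can be made concrete with a small simulation. This is only an illustrative sketch, not anything from Johnson’s paper; the function name and all parameter values are my own choices. It repeatedly draws samples from a population with a known mean, computes a 95 percent interval each time, and counts how many of those intervals actually contain the true mean.

```python
import random
import statistics

def coverage_simulation(true_mean=50.0, true_sd=10.0, n=30, trials=100, z=1.96, seed=0):
    """Run `trials` simulated experiments and count how many 95% confidence
    intervals for the sample mean contain the known true mean."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    hits = 0
    for _ in range(trials):
        # One "experiment": a sample of n draws from the true population.
        sample = [rng.gauss(true_mean, true_sd) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
        lo, hi = m - z * se, m + z * se           # 95% interval via z = 1.96
        if lo <= true_mean <= hi:
            hits += 1
    return hits

# Out of 100 repeated experiments, roughly 95 intervals should cover the true mean.
print(coverage_simulation())
```

The point of the sketch is the one the paragraph makes: the coverage statement is a property of the procedure over many hypothetical repetitions, not a claim about any single interval, and computing it involves no decision about what to do next.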