Truth and Politics: A Political Epistemology Conference

Our work on “What is the Impact of MySide Bias on Scientific Debates?” (Louise Dupuis, Matteo Michelini, AnneMarie Borg, Gabriella Pigozzi, Juliette Rouchier, Dunja Šešelja, and Christian Straßer) will be presented at the Truth and Politics: A Political Epistemology Conference in Bamberg.

Mercier and Sperber defend the idea, put forward by Perkins and Salomon (1989), that human reasoning is influenced by myside bias (also known as confirmation bias) – a tendency to prioritize the search for and generation of arguments that support one’s views over arguments that undermine them (Mercier and Sperber, 2017). Mercier and Heintz (2014) argue that myside bias also impacts the reasoning of scientists. Yet, they claim its effects are mitigated by two factors: first, a large number of shared beliefs among scientists and, second, the importance of achieving and maintaining one’s reputation. Since myside bias may decrease the quality of a debate, shared beliefs about what counts as a good argument may help scientists identify poor arguments, while valuing reputation may push them to assess their arguments carefully before publishing them. Indeed, Mercier et al. (2016) suggest that myside bias, if kept under control through these two means, may have a positive impact on scientific inquiry, as it could generate an efficient division of labour among scientists without lowering the quality of inquiry. While this view stands in sharp contrast to the common take on confirmation bias as harmful for truth-tracking communities (Anderson, 2004; Brown, 2013; Douglas, 2009; Longino, 2002), it has recently been argued that confirmation bias can be beneficial for group inquiry (Peters, 2020; Smart, 2018). This raises the question of the conditions (if any) under which myside bias plays such a positive role.

In this paper we aim to address this question by studying the effects of myside bias on the dynamics of scientific debates by means of an agent-based model. While we focus on scientific debates, the model could also be applied to other kinds of discussions in the context of knowledge acquisition. We examine the impact of myside bias while varying two assumptions: the degree to which beliefs are shared in the scientific community and the degree to which reputation is valued in it. By doing so, our contribution is twofold. On the one hand, we test the above hypothesis about the impact of shared beliefs and reputation on scientific inquiry among biased agents. On the other hand, in contrast to the traditional pessimistic view of myside bias, we offer a potential explanation of how a community can be epistemically efficient despite, or even in virtue of, being biased.

This is achieved by simulating a community of scientists debating a certain topic, where every scientist has to choose between two general points of view (GPOVs), each composed of a certain number of theses. In line with the method of abstract argumentation (Dung, 1995), each thesis is represented as a node in a so-called argumentation framework, i.e. a directed graph in which nodes abstractly represent arguments and edges the attacks between them. Scientific inquiry is modeled as an ongoing debate in which scientists look for and present arguments defeating and/or defending a certain thesis. In every turn, each scientist spends time critically investigating an argument by searching for defeaters. If she finds a defeating argument A, she reviews it and, unless she encounters problems (in terms of defeaters of A), she publishes it. Once a defeater is published, all agents may add it to their respective argumentation frameworks, unless they consider it irrelevant. Agents are assigned to ‘belief groups’, which represent groups of agents sharing the same beliefs: an agent is more likely to update on arguments communicated by agents from her own belief group. Finally, agents choose their preferred GPOV, namely the one with the highest number of accepted theses. Notably, every argument has an intrinsic (and objective) strength, unknown to the agents, which determines the probability that an agent finds a defeater for it.
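To make the round structure concrete, the following is a minimal Python sketch of one debate step along these lines. The class names, the uptake probabilities, the fixed number of review rounds, and the rule that weaker arguments are easier to defeat are our own illustrative assumptions, not parameters or formulas taken from the model itself.

```python
import random


class Argument:
    """One node in an agent's argumentation framework."""
    def __init__(self, name, strength, target=None, gpov=None):
        self.name = name          # identifier
        self.strength = strength  # objective strength in [0, 1], hidden from the agents
        self.target = target      # the argument this one attacks (None for a thesis)
        self.gpov = gpov          # the general point of view (0 or 1) it speaks for


class Agent:
    """A scientist with her own (partial) argumentation framework."""
    def __init__(self, belief_group):
        self.belief_group = belief_group
        self.framework = set()    # arguments the agent has adopted so far

    def investigate(self, argument, rng):
        """Search for a defeater; weaker arguments are assumed easier to defeat."""
        if rng.random() < 1.0 - argument.strength:
            return Argument(name="def(" + argument.name + ")",
                            strength=rng.random(),
                            target=argument.name,
                            gpov=None if argument.gpov is None else 1 - argument.gpov)
        return None

    def review(self, candidate, review_rounds, rng):
        """Pre-publication check: withhold the candidate if a defeater of it is found."""
        return all(self.investigate(candidate, rng) is None
                   for _ in range(review_rounds))


def debate_round(agents, published, review_rounds,
                 same_group_uptake=0.9, other_group_uptake=0.5, rng=random):
    """One simulation step: each agent investigates a known argument, reviews any
    defeater she finds, publishes it, and the community selectively updates."""
    for author in agents:
        if not author.framework:
            continue
        target = rng.choice(list(author.framework))
        defeater = author.investigate(target, rng)
        if defeater and author.review(defeater, review_rounds, rng):
            published.append(defeater)
            for reader in agents:
                uptake = (same_group_uptake if reader.belief_group == author.belief_group
                          else other_group_uptake)
                if rng.random() < uptake:
                    reader.framework.add(defeater)
```

After such a round, each agent would recompute which theses she accepts in her updated framework and choose as her preferred GPOV the one with the most accepted theses.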

If an agent has a myside bias, she is more likely to find a defense of her GPOV and less likely to find a defense of the opposing GPOV. In order to assess the impact of reputation, we vary the amount of time an agent spends reviewing arguments (taking for granted Mercier and Heintz’s point that a higher emphasis on reputation will lead to more cautious behavior). Similarly, we change the number of belief groups to represent various degrees of fragmentation in the given field. Finally, we assess the quality of a debate through three different indicators, measured at every step of the simulation. First, we look at the ‘support difference’: the difference between the number of agents supporting the stronger GPOV (the one whose theses have higher intrinsic strength) and the number of those supporting the weaker one. With this measure, we assess the epistemic success of a GPOV in terms of the number of scientists endorsing it. Given that the two GPOVs differ in strength, a community performs better the higher the support difference is. Second, we measure the quality of the arguments accepted and attacked in the community. The more high-quality arguments a community accepts, and the more low-quality arguments it rejects, the more successful it is. Third, we measure the amount of time it takes for each community to settle on a GPOV.
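The way the bias and two of the indicators could enter such a simulation is sketched below. The multiplicative form of the bias, the `preferred_gpov` attribute, and the threshold-based settling criterion are assumptions made for illustration, not the paper’s exact formulation.

```python
def biased_find_probability(base_prob, argument_gpov, agent_gpov, bias):
    """Skew the chance of finding a defense: with bias = 0 the agent is unbiased;
    defenses of her own GPOV become easier to find, defenses of the rival GPOV harder.
    (The multiplicative form is an illustrative assumption.)"""
    if argument_gpov == agent_gpov:
        return min(1.0, base_prob * (1.0 + bias))
    return max(0.0, base_prob * (1.0 - bias))


def support_difference(agents, stronger_gpov):
    """First indicator: supporters of the stronger GPOV minus supporters of the
    weaker one, measured at every simulation step.
    Assumes each agent exposes a `preferred_gpov` attribute."""
    for_stronger = sum(1 for a in agents if a.preferred_gpov == stronger_gpov)
    return for_stronger - (len(agents) - for_stronger)


def time_to_settle(support_history, threshold):
    """Third indicator (sketch): the first step at which the absolute support
    difference reaches a given threshold (e.g. the full community size)."""
    for step, diff in enumerate(support_history):
        if abs(diff) >= threshold:
            return step
    return None
```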

Our initial results for a non-biased community are in line with the expectations: scientists are mostly able to identify the stronger GPOV, and the support difference is correlated with the difference between the strengths of the two GPOVs. In addition, our results indicate that if scientists choose to investigate only arguments against their GPOV, or in favour of the opposite one, biased communities are better at identifying the stronger GPOV, in the sense that the support difference is higher. On the other hand, if scientists investigate arguments randomly, without prioritizing those that criticize their current GPOV, a biased community performs worse: in that case, the support difference decreases as the intensity of the bias increases. Finally, we also discuss the impact of reputation and shared beliefs on the quality of inquiry.

References