Momme von Sydow, MCMP, LMU München
Christoph Mertens, Universität Erlangen
Ulrike Hahn, Birkbeck, University of London
Socially available information is an indispensable source of knowledge, from expert judgment to witness testimony. However, learning from partially reliable sources is fraught with problems. In particular, it is difficult to simultaneously learn about some proposition of interest and about the reliability of a particular source providing multiple pieces of evidence. Epistemic agents frequently face such scenarios while lacking certainty about either component. We focus on simple dichotomous hypotheses and Bayesian agents, discussing the issue of partially reliable, repeated evidence.
Outcome-based vs. belief-based strategies of updating epistemic trust
A rather unproblematic strategy for updating epistemic trust is an outcome-based strategy, which uses the observed frequencies of co-occurrence of evidence and known outcomes. The standard tools of Bayesian inference seem applicable for representing rational degrees of trust. Such a strategy assumes that the agent can look behind a veil of phenomena (the evidence) and – at some point – has access to the truth (at least pragmatically, even if the thing-in-itself may never be known). For instance, it seems that we can come to judge the reliability of medical tests, such as a pregnancy test, quite accurately only if we can square instances of test prediction with the eventual outcomes the results predicted. We sketch Beta-binomial updating of trust in reports given final outcomes. Note, however, that such an approach, although providing rational trust values, does not use trust to update one's beliefs based on new outcomes.
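The outcome-based idea can be sketched in a few lines of Python. This is a minimal illustration with hypothetical function names, not the authors' implementation: a source's reliability is modelled as a Beta(a, b) distribution, and each report that can be checked against a known outcome increments one of the two counts.

```python
# Outcome-based trust updating: a minimal sketch (hypothetical names,
# not the authors' code). Reliability is modelled as Beta(a, b); each
# report checked against a known outcome updates the counts.

def update_trust(a, b, report, outcome):
    """Beta-binomial update: a counts correct reports, b incorrect ones."""
    if report == outcome:
        return a + 1, b
    return a, b + 1

def trust_mean(a, b):
    """Expected reliability under a Beta(a, b) distribution."""
    return a / (a + b)

# Example: uniform prior Beta(1, 1); the source is right three times
# and wrong once, so expected reliability becomes 4/6.
a, b = 1, 1
for report, outcome in [(True, True), (True, True), (False, True), (True, True)]:
    a, b = update_trust(a, b, report, outcome)
print(trust_mean(a, b))  # 4/6 ≈ 0.667
```

The crucial feature, noted above, is that `outcome` must be known: the update requires squaring each report against the eventual truth.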
However, many actual situations seem to involve and require belief-based strategies of trust updating. In the contexts of communication, reading, witness reports (and perhaps even measurement), we often obtain reports that are only partially reliable, without being able to access their ultimate truth. According to a belief-based strategy, an agent nonetheless uses this evidence and revises not just her beliefs but also the reliability of the respective source, based on the match between the evidence and her current belief (Olsson, 2011, 2013; Bovens & Hartmann, 2003). For instance, if you tell me that the Earth is flat, this will make me consider it slightly more likely to be true, but it will also make me consider you less reliable than I had previously thought. This strategy seems intuitive, and there is some empirical evidence supporting its use (Collins, Hahn, von Gerber, & Olsson, 2018).
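A joint update of this kind can be sketched as follows. This is an illustrative Python sketch in the spirit of the Olsson model, not a reproduction of it: the reliability distribution is discretised on a grid, and the same report updates both the belief P(H) and the trust distribution, with the current belief serving as the yardstick. The grid, the prior, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Belief-based updating, sketched in the spirit of the Olsson model.
# The agent holds a belief P(H) and a discretised distribution over the
# source's reliability; one report updates both at once.

def belief_based_update(p_h, trust, grid, report_is_h):
    """One joint update of belief P(H) and a discretised trust
    distribution over the reliability values in `grid`."""
    r = np.dot(trust, grid)  # expected reliability of the source
    if report_is_h:
        # belief update: P(H | source asserts H)
        p_h_new = p_h * r / (p_h * r + (1 - p_h) * (1 - r))
        # trust update: likelihood of an H-report at each reliability,
        # judged against the *current* belief p_h
        like = p_h * grid + (1 - p_h) * (1 - grid)
    else:
        p_h_new = p_h * (1 - r) / (p_h * (1 - r) + (1 - p_h) * r)
        like = p_h * (1 - grid) + (1 - p_h) * grid
    trust_new = trust * like
    trust_new /= trust_new.sum()
    return p_h_new, trust_new

grid = np.linspace(0.01, 0.99, 99)
trust = grid / grid.sum()  # prior trust with mean ≈ .66
p_h, trust = belief_based_update(0.5, trust, grid, report_is_h=True)
print(round(p_h, 3))  # ≈ 0.663: the report raises P(H)
```

Note that at P(H)=.5 the report says nothing about the source's reliability (the trust distribution is unchanged); only once the belief departs from .5 do matching reports raise, and mismatching reports lower, the expected trust.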
ABM simulations of Bayesian models of belief-based updating
We present results of ABM simulations of networks of communicating and belief-updating agents, implementing the influential Bayesian model by Erik Olsson (2011, 2013) in NetLogo (Hahn, Merdes, & von Sydow, 2018; Hahn, von Sydow, & Merdes, 2019; Merdes et al., subm.; von Sydow et al., in prep.). Agents estimate the reliability of their sources based on their current beliefs. Note that this model uses naive Bayesian agents (not optimal Bayesians) because, as in the real world, they have no knowledge of the full network topology and information paths. Nonetheless, these models may provide reasonable "Bayesian heuristics". Our simulations assume a given ground truth for a simple dichotomous (either true or false) hypothesis, H. In each run, agents can repeatedly receive probabilistic evidence (E/non-E) from the world, from other agents, or from both. We varied the parameters of the model, particularly the objective reliability of reports, Pobj(Rel). The subjective reliabilities, Psub(Relij), of each agent i for each source j use a continuous scale running from 0 (full anti-reliability) to 1 (full reliability). Psub(Relij) is the mean of a probability distribution over the reliability values, modelled as a standard Beta distribution. Additionally, we vary the base rate of H, Pobj(H); the probability of communication, c, given that an agent's belief is above a threshold; and this very threshold, t. Finally, we explored the parameter space of prior beliefs Psub(H) and prior trust values Psub(Rel). Here we focus only on exemplary parameter values, such as Psub(H)=.5 and Psub(Rel)=.66.
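The overall structure of one run can be conveyed by a toy sketch, here for the fixed-trust, communicating condition. This is a Python re-sketch rather than the published NetLogo model: the fully connected topology, the broadcast mechanism, and all default parameter values (20 agents, 50 steps, t=.8, c=.5) are simplifying assumptions for illustration.

```python
import random

def bayes(p_h, report_is_h, r):
    """Naive Bayesian belief update for a report with reliability r."""
    if report_is_h:
        return p_h * r / (p_h * r + (1 - p_h) * (1 - r))
    return p_h * (1 - r) / (p_h * (1 - r) + (1 - p_h) * r)

def run_fixed_trust(n_agents=20, steps=50, p_rel_obj=0.7, t=0.8, c=0.5,
                    prior_belief=0.5, trust=0.66, seed=0):
    """Toy run with fixed-trust, communicating agents; the truth of H
    is fixed as True. Returns mean adequacy: the average squared
    distance of the final beliefs from the truth (lower is better)."""
    rng = random.Random(seed)
    beliefs = [prior_belief] * n_agents

    for _ in range(steps):
        # world evidence: each agent receives a report that is correct
        # with probability p_rel_obj
        for i in range(n_agents):
            beliefs[i] = bayes(beliefs[i], rng.random() < p_rel_obj, trust)
        # communication: agents beyond the belief threshold t broadcast
        # their view with probability c (here to everyone, a toy topology)
        reports = []
        for b in beliefs:
            if rng.random() < c:
                if b > t:
                    reports.append(True)
                elif b < 1 - t:
                    reports.append(False)
        for i in range(n_agents):
            for rep in reports:
                beliefs[i] = bayes(beliefs[i], rep, trust)

    return sum((1.0 - b) ** 2 for b in beliefs) / n_agents

print(run_fixed_trust())  # mean adequacy of one run
```

The belief-based condition replaces the fixed `trust` value with the per-source joint belief-and-trust update sketched earlier; comparing the two, with and without communication, yields the four kinds of strategies discussed below.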
The graphical results (e.g., surface plots over base rates and reliabilities) compare belief-based agents with fixed-trust agents, and communicating agents with non-communicating agents, that is, four kinds of strategies. This allows us to discuss the advantages and disadvantages of the factors involved (and of their interactions).
Results I: A disaster
We first sketch some of our negative findings (Hahn et al., 2018, 2019, subm.). Obviously, this Bayesian belief-based strategy – as such – cannot solve the 'Cartesian demon' problem of anti-reliable evidence, and it is essentially order-dependent. Moreover, in self-reinforcing cycles of epistemic failure, this strategy sometimes produces wrong attractors, even for Pobj(Rel)>.5. Finally, the agents behave surprisingly similarly to fixed-trust agents and to non-communicating agents. Regarding the differences, the results (after 50 steps) show that trust updating mostly even slightly deteriorated the outcomes, whereas communication reduced mean adequacy (squared difference from the truth) only for low objective reliability. Overall, the belief-based model seems to perform quite badly, even disastrously.
Results II: Light at the end of the tunnel
We then explore potential functions of this strategy, which – as mentioned – seems psychologically plausible. First, simulations that endow agents with some prior knowledge may change the situation. More generally, and with reference to the fundamental problem of induction, it has been argued that induction, perhaps paradoxically, always needs to be a knowledge-based endeavour (von Sydow, 2006). In any case, we investigated situations in which agents had knowledge, e.g., about the base rate of H. Interestingly, in some areas of the parameter space the belief-based updaters perform better than the other strategies. Second, we show that belief-based updating can have advantages with regard to the speed of updating beliefs and trust values. We discuss whether these findings can count as epistemic advantages.
Overall, the disastrous results for the investigated Bayesian model of belief-based updating show that such models are at least problematic as normative a priori models. However, we also saw some light at the end of the tunnel for these models. This may open up new avenues of research (e.g., concerning the interaction of belief-based and outcome-based updating). It remains an open question, however, whether the light at the end of the tunnel will provide a full or partial rehabilitation of belief-based approaches or whether it is merely another oncoming train.
- Bovens, L. & Hartmann, S. (2003). Bayesian Epistemology. Oxford University Press.
- Collins, P. J., Hahn, U., von Gerber, Y., & Olsson, E. J. (2018). The Bi-directional Relationship between Source Characteristics and Message Content. Frontiers in Psychology, 9(18).
- Hahn, U., von Sydow, M., & Merdes, C. (2019, in press). How Communication Can Make Voters Choose Less Well. Topics in Cognitive Science. (Modelling Prize, CogSci 2018) doi: https://doi.org/10.1111/tops.12401
- Hahn, U., Merdes, C., & von Sydow, M. (2018). How Good is Your Evidence and How Would You Know? Topics in Cognitive Science. 10(4), 660-678. doi: https://doi.org/10.1111/tops.12374
- Olsson, E. J. (2011). A simulation approach to veritistic social epistemology. Episteme, 8(2), 127–143.
- Olsson, E. J. (2013). A Bayesian simulation model of group deliberation and polarization. In F. Zenker (Ed.), Bayesian Argumentation (pp. 113-133). Dordrecht, Heidelberg, New York, London: Springer.
- von Sydow, M. (2006). Towards a Flexible Bayesian and Deontic Logic of Testing Descriptive and Prescriptive Rules (Doctoral dissertation). Georg-August-Universität Göttingen.