# In Praise of Contradiction: How to Help Groups Uncover What They Privately Believe

Thomas Boyer-Kassem and Cyrille Imbert

People sometimes misrepresent their opinions because others have expressed opposite views and public disagreement comes with various costs. For instance, one may be reluctant to be the first to leave a party so as not to displease one’s host. Kuran and Sunstein argue that this phenomenon can lead to snowball effects, or reputational cascades (Kuran, 1995; Kuran and Sunstein, 1999), which can also affect experts in committees, such as juries (see Sunstein, 2003). How should epistemic groups deliberate to decrease such detrimental effects? This question has direct democratic applications for the organization of major institutions, especially when, for transparency reasons, voting is not secret at the end of deliberations (see e.g. the 2007 reform of the advisory panels of the Food and Drug Administration, as analysed by Urfalino and Costa (2015)).

In this paper, we analyse a deliberately simple model of sequential deliberation in order to investigate how such reputational effects can be dampened by adopting suitable deliberative procedures.

$$\star\star\star$$

The model aims at formalizing the idea of opinion misrepresentation in the context of a simple multi-agent simulation, in which parameters cannot be left implicit. Existing models of opinion dynamics studied by formal epistemologists or computer scientists ignore the possibility of opinion misrepresentation (see e.g. Hegselmann and Krause, 2002; Zollman, 2008). Further, in order to analyze the effects of reputational cascades proper and distinguish them from those of information cascades, we propose a model without private opinion dynamics. This provides a baseline situation in which changes in opinions are purely reputational effects. Of course, any opinion dynamics model could afterwards be combined with our model to study both phenomena together.

Opinions are represented in [0, 1]: n agents speak publicly one after the other, possibly for several table rounds. Each agent k has a private opinion, which remains fixed at all times, and a public opinion which is given by:

$\alpha \times [\mathrm{private~opinion}] + (1 - \alpha) \times [\mathrm{mean~of~expressed~opinions~during~the~last~table~round}]$

with $\alpha \in [0, 1]$.

Thus, an agent’s expressed opinion lies somewhere between her own private opinion and what has been publicly expressed before her. Agents do not fully express what they believe because of external social pressure. With $\alpha$ close to 1, the agent gives little weight to her fellows’ expressed opinions, and thus does not misrepresent her private opinion much. With $\alpha$ close to 0, the agent mainly follows the general trend and hardly takes her own opinion into account.
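Under one plausible reading of this update rule, one table round can be sketched as follows. The function name, the convention that the “last table round” heard by an agent is the window of the $n$ most recent public statements, and the choice that the very first speaker simply voices her private opinion are all our illustrative assumptions, not part of the model’s official specification:

```python
def deliberate(private, alpha, rounds=1):
    """Sequential deliberation with fixed private opinions.

    Each agent k publicly expresses
        alpha * private[k] + (1 - alpha) * mean(last table round),
    where -- as an illustrative convention -- the 'last table round' is
    the window of the n most recent public statements. The very first
    speaker, having heard nothing yet, expresses her private opinion.
    Returns the opinions expressed during the final round.
    """
    n = len(private)
    history = []  # all public statements, in speaking order
    for _ in range(rounds):
        for k in range(n):
            window = history[-n:]
            if window:
                mean = sum(window) / len(window)
                history.append(alpha * private[k] + (1 - alpha) * mean)
            else:
                history.append(private[k])
    return history[-n:]
```

As expected, with $\alpha = 1$ agents express exactly their private opinions, and with $\alpha = 0$ every agent simply echoes the trend set by the first speaker.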

In practice, opinions are generally not expressed (let alone understood) with infinite precision, and groups usually need to settle on a binary answer (yes/no) or choose between a finite number of alternatives. Thus, we assume that the results of the above equation are projected onto a finite number of possible options, e.g. 0.25 or 0.75 for 2 options. We then compare the result of the oral vote obtained in this way with the result that would have been obtained had one organized a secret vote, in which agents would not have misrepresented their private views.
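The projection step can be made concrete with a small helper. The evenly spaced option values generalize the 0.25/0.75 example above; the plurality tally (and its tie-breaking rule) is our own illustrative choice:

```python
from collections import Counter

def project(opinion, n_options=2):
    """Project a continuous opinion in [0, 1] onto the nearest of
    n_options evenly spaced values: 0.25 and 0.75 when n_options == 2."""
    options = [(2 * i + 1) / (2 * n_options) for i in range(n_options)]
    return min(options, key=lambda o: abs(o - opinion))

def vote_outcome(opinions, n_options=2):
    """Tally projected opinions and return the winning option
    (plurality; ties broken by Counter's insertion order)."""
    return Counter(project(o, n_options) for o in opinions).most_common(1)[0][0]
```

Comparing `vote_outcome` on the expressed opinions with `vote_outcome` on the private opinions then gives the oral-versus-secret discrepancy for a single deliberation.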

The model is investigated with computer simulations by assessing the discrepancy between oral and private votes. Preliminary results (XXXX, forthcoming) show that misrepresentation effects can be large, especially if the order in which views are expressed is biased or manipulated, and they are not easy to get rid of. The possibility of expressing fine-grained opinions dampens misrepresentation but does not eliminate it, and it opens room for strategic voting. Making deliberations less abrasive (and thereby reducing misrepresentation) is very useful but hard to achieve. Finally, an efficient procedure is to make agents speak in a random order. Unfortunately, this procedure is often impractical and still does not eliminate misrepresentation effects in groups of moderate size. In any case, misrepresentation effects still need several table rounds to dampen. This is a problem because deliberative time is costly and, as noted by James Madison (1787), agents do not like to publicly change their minds.
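The kind of measurement involved can be sketched with a self-contained Monte-Carlo harness. Everything specific here — uniform random private opinions, two options with a 0.5 cut-off, majority tally, and a sliding-window reading of the update rule — is our illustrative assumption, not the authors’ actual experimental setup:

```python
import random

def discrepancy_rate(n_agents=9, alpha=0.5, rounds=1, trials=2000, seed=0):
    """Estimate how often the oral (sequential) vote differs from a
    secret vote when agents speak in a random order.

    Illustrative assumptions: private opinions drawn uniformly in [0, 1],
    two options (yes iff opinion >= 0.5), majority tally, and the 'last
    table round' taken to be the n most recent public statements.
    """
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(trials):
        private = [rng.random() for _ in range(n_agents)]
        order = list(range(n_agents))
        rng.shuffle(order)  # random speaking order
        history = []
        for _ in range(rounds):
            for k in order:
                window = history[-n_agents:]
                if window:
                    mean = sum(window) / len(window)
                    history.append(alpha * private[k] + (1 - alpha) * mean)
                else:
                    history.append(private[k])
        oral = history[-n_agents:]
        secret_majority = sum(p >= 0.5 for p in private) > n_agents / 2
        oral_majority = sum(o >= 0.5 for o in oral) > n_agents / 2
        if secret_majority != oral_majority:
            mismatches += 1
    return mismatches / trials
```

With $\alpha = 1$ the estimated discrepancy is zero, since agents then express exactly their private views; lowering $\alpha$ lets reputational pressure pull the oral vote away from the secret one.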

In this paper, we explore the following hypothesis: could the use of a contradictory procedure, by providing a fair defence of each view, partly solve the problem? A specific difficulty is that, in the present case, a contradictory procedure can only be based on what agents publicly express (since, by assumption, private views are… private). We analyse two such procedures. The first organizes an alternate defence of each view, randomly selecting which view is defended first; we show that it significantly dampens misrepresentation, except when some views are in the minority but still get a chance to carry the day because of misrepresentation. The second procedure improves on the first by randomly selecting the first speaker and then organizing an alternate defence. By running simulations for a large range of parameters, we show that the second procedure is better than the first, and also better than those studied previously in the literature, including the random procedure advocated in (XXXX, forthcoming).
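One ingredient common to both procedures — building an alternating speaking order from publicly expressed opinions only — can be sketched as follows. This is a hypothetical reconstruction on our part: the camp cut-off at 0.5, the interleaving scheme, and the function name are all our assumptions, not the procedures’ exact definitions:

```python
import random

def alternating_order(expressed, first_camp=None, rng=random):
    """Hypothetical reconstruction of an alternate-defence ordering.

    Agents are split into two camps according to which side of 0.5 their
    publicly expressed opinion falls on (private views being unavailable
    by assumption), then the camps are interleaved so that each view gets
    an alternating defence. first_camp (0 = 'low', 1 = 'high') says which
    camp opens; drawing it at random mimics the random selection of the
    first defended view. Returns a speaking order of agent indices.
    """
    low = [i for i, o in enumerate(expressed) if o < 0.5]
    high = [i for i, o in enumerate(expressed) if o >= 0.5]
    if first_camp is None:
        first_camp = rng.randrange(2)  # randomly pick the opening view
    a, b = (low, high) if first_camp == 0 else (high, low)
    order = []
    for i in range(max(len(a), len(b))):
        if i < len(a):
            order.append(a[i])
        if i < len(b):
            order.append(b[i])
    return order
```

The second procedure would instead draw the opening *speaker* at random and alternate camps from there; comparing the two orderings across many simulated deliberations is what drives the results reported above.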

$$\star\star\star$$

From a methodological point of view, these results highlight i) how deliberately idealized models can be used to provide sound results, especially to isolate mechanisms and analyse the general effects of crucial parameters; ii) how a perfectly accurate description of agents is not needed, provided that the conclusions are shown to be robust and do not hinge on particular states; iii) how baseline results, obtained under unfavourable hypotheses, provide a strong way to defend general claims.

## References

• Hegselmann, Rainer and Krause, Ulrich (2002), “Opinion Dynamics and Bounded Confidence: Models, Analysis and Simulation”, Journal of Artificial Societies and Social Simulation, 5(3).
• Kuran, Timur (1995), Private Truths, Public Lies, Harvard University Press.
• Kuran, Timur and Sunstein, Cass (1999), “Availability Cascades and Risk Regulation”, Stanford Law Review, 51(4).
• Sunstein, Cass (2003), Why Societies Need Dissent, Harvard University Press.
• Urfalino, Philippe and Costa, Pascaline (2015), “Secret-Public Voting in FDA Advisory Committees”, in Secrecy and Publicity in Votes and Debates, ed. Jon Elster, Cambridge University Press.
• XXXX (forthcoming), “Improving Deliberations by Reducing Misrepresentation Effects”, Episteme.
• Zollman, Kevin (2008), “Social Structure and the Effects of Conformity”, Synthese, 172(3): 317–340.

Created: 2019-02-26 Tue 09:11
