
Computer Simulations of Virtue Ethics: Simplicity versus Complexity

Jeremiah Lasquety-Reyes

Though there have been some attempts to create agent-based models explicitly for ethical theories, there is currently no agent-based model for virtue ethics. My research project attempts to produce the first computer simulations of virtue ethics in order to explore the ethical theory in a new and precise way, especially with respect to the complexity of many individuals interacting at the same time and over long periods of time. I argue that agent-based modeling is an ideal technological instrument for research on virtue ethics because there is a functional parallelism between the two: according to virtue ethics, a person can possess virtues or vices that lead to similar repeated behaviors of an ethical nature, while in agent-based modeling, an agent can possess properties and variables that result in more or less predictable behavior as the computer simulation is run.

However, how does one simulate virtue ethics? I suggest there is a simple way and a complex way. The simple way uses only virtues and situations. The complex way adds physical, cognitive, emotional and social components to these virtues and situations.

To illustrate the simple way, one can begin with the famous Sugarscape simulation, in which agents trade different resources with one another. Let us imagine that during this trading activity an opportunity to “cheat” or “steal” sometimes arises. This counts as a “situation.”

When this opening comes up, a just agent might have an 80% chance of declining this opportunity to cheat, while an unjust agent might only have a 20% chance of “resisting the temptation.” We can imagine that as the unjust agent experiences the thrill and reward of cheating, its vice of injustice strengthens, decreasing its justice level to 15%. On the other hand, the just agent also increases its justice when it is able to decline the cheating opportunity, perhaps to 75%.
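This mechanism could be sketched in code roughly as follows. The class name, the reinforcement size `delta`, and the 0–1 caps are my own illustrative assumptions, not part of Sugarscape or any published model:

```python
import random

class Agent:
    def __init__(self, justice):
        self.justice = justice  # probability of resisting a chance to cheat

    def face_cheating_opportunity(self, delta=0.05):
        """Resist or cheat; the action taken reinforces itself."""
        if random.random() < self.justice:
            # Resisting strengthens the virtue (capped at 1.0).
            self.justice = min(1.0, self.justice + delta)
            return "resisted"
        else:
            # Cheating strengthens the vice, i.e. weakens the virtue.
            self.justice = max(0.0, self.justice - delta)
            return "cheated"

# A just agent (80%) and an unjust agent (20%) repeatedly face the situation;
# over many rounds their justice levels tend to drift further apart.
just_agent = Agent(justice=0.80)
unjust_agent = Agent(justice=0.20)
for _ in range(100):
    just_agent.face_cheating_opportunity()
    unjust_agent.face_cheating_opportunity()
```

The self-reinforcing update captures the idea that acting from a virtue (or vice) strengthens the corresponding habit.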

In this simple mechanism, virtue is a probability variable that is tapped in certain relevant situations and that subsequently increases or decreases depending on the action taken. This is the simplest way of modeling the virtues. It can be made more abstract. Let us assume that an agent can find itself in a certain situation represented by a mathematical function, which we will call a situation-function. Each situation can be addressed with a virtue (or a set of several virtues) using a “gradient descent” algorithm. The greater the agent’s level of virtue, the more gradient-descent steps the agent can take on this function, and the closer it comes to a “local minimum” of the situation-function. The closer the agent comes to a local minimum, the more it can be said to have addressed the situation in a virtuous way. If it reaches the “global minimum”, then the agent can be said to have acted in the most perfect way possible in the situation. Conversely, we could speak of local and global maxima for vices.
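A minimal sketch of this abstract mechanism, assuming a simple convex quadratic as the situation-function and one descent step per unit of virtue (both assumptions are mine, for illustration only):

```python
def situation(x):
    """A toy situation-function; its global minimum (at x = 3) represents
    the most perfect possible response to the situation."""
    return (x - 3) ** 2

def gradient(x):
    """Derivative of the situation-function above."""
    return 2 * (x - 3)

def address_situation(virtue_level, start=0.0, step_size=0.1):
    """Take one gradient-descent step per unit of virtue. A higher virtue
    level ends closer to the minimum, i.e. to the most virtuous action.
    Returns the residual value of the situation-function: the remaining
    "distance" from the perfect action."""
    x = start
    for _ in range(virtue_level):
        x -= step_size * gradient(x)
    return situation(x)

# A more virtuous agent (more steps) comes closer to the global minimum:
weak = address_situation(virtue_level=2)
strong = address_situation(virtue_level=20)
```

For vices, the same machinery could be run with gradient ascent toward local and global maxima.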

The advantage of such an abstract representation is that practically any situation in human life (buying a car with prudence, eating cake with temperance, and so on) can be represented by the same kind of situation-function. One does not need to hard-code situations into the program as in the first example. Situation-functions, as mathematical functions, can also resemble one another to varying degrees, which could represent how some situations in real life resemble each other (buying a car and buying a computer, for example).

Though one can easily create functioning computer simulations of virtue ethics using the above mechanisms, they unfortunately do not fully capture what virtue ethics is. This is because in traditional virtue ethics, a virtue is considered a virtue only if it allows the higher, rational part of the agent to control and direct the lower, instinctive and emotional part, which often resists this control (Aquinas, 2010). Emotions with a “mind of their own” need to be simulated because virtues have the task of reining in or guiding these emotions. In addition, one should not ignore the social aspect of virtue, especially the influence of social groups in inculcating and spreading virtuous or vicious behavior. Virtues are first learned from others, and rewards and punishments from others, based on conformity or non-conformity to the expected behavior of a group, can determine which virtues flourish and which do not.

A complex way of simulating virtue ethics may be conceived which incorporates at least four major components: the physical, cognitive, emotional and social. This draws from the older suggestion of Urban and Schmidt for a PECS framework for agent-based models. PECS stands for “physical conditions, emotional state, cognitive capabilities, and social status” (Urban 2000, Schmidt 2000). This framework recommends that all four components be in place inside social simulations in order to produce agents that are more believable and realistic. Recent work by Joshua Epstein on Agent_Zero also endorses a similar idea and provides a template that simulates cognitive, emotional and social mechanisms operating together in an agent (Epstein 2013, Epstein and Chelen 2016). In order to illustrate how a complex way could work, I will go through a sample simulation of the cardinal virtue of temperance, a virtue which deals with moderation in matters of physical desire such as food, drink, and sex.
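To suggest how the four components might interact in code, here is a highly simplified sketch of a temperance agent. All attribute names, weights, and update rules are my own illustrative assumptions, not the actual PECS reference model:

```python
class TemperanceAgent:
    """A toy agent whose decision to eat moderately emerges from the
    interplay of physical, emotional, cognitive and social components."""

    def __init__(self, temperance, social_norm=0.5):
        self.hunger = 0.5               # physical: grows each time step
        self.desire = 0.0               # emotional: craving with "a mind of its own"
        self.temperance = temperance    # cognitive: strength of rational control
        self.social_norm = social_norm  # social: group expectation of moderation

    def tick(self):
        # Physical condition feeds the emotional state.
        self.hunger = min(1.0, self.hunger + 0.1)
        self.desire = min(1.0, self.desire + 0.5 * self.hunger)
        # Cognitive control, reinforced by the social norm, opposes desire.
        control = self.temperance * (0.5 + 0.5 * self.social_norm)
        if self.desire > control:
            # Desire overwhelms rational control: the agent overeats.
            self.hunger = 0.0
            self.desire = 0.0
            return "overeats"
        return "abstains"
```

The key design point is that desire is updated by its own dynamics rather than being directly set by the virtue variable, so temperance must continually rein it in, as the theory requires.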

The simple way of simulating virtue ethics is easier to implement and understand but does not capture the depth of the ethical theory. The complex way is more difficult to implement and understand but more accurate with respect to the ethical theory. Should one opt for simplicity for the sake of understanding? Or should one be willing to introduce several layers of complexity in order to more accurately represent virtue ethics?

References:

  • Aquinas, Thomas. 2010. Disputed Questions on Virtue. Translated by J. Hause and C. E. Murphy. Indianapolis: Hackett.
  • Epstein, Joshua M. 2013. Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science. Princeton: Princeton University Press.
  • Epstein, Joshua M., and Julia Chelen. 2016. “Advancing Agent_Zero.” In Complexity and Evolution: Toward a New Synthesis for Economics, edited by David S. Wilson and Alan Kirman, 299-318. Cambridge, MA: MIT Press.
  • Schmidt, Bernd. 2000. The Modelling of Human Behaviour. Ghent, Belgium: SCS-Europe BVBA.
  • Urban, Christoph. 2000. “PECS: A Reference Model for the Simulation of Multi-Agent Systems.” In Tools and Techniques for Social Science Simulation, edited by Ramzi Suleiman, Klaus G. Troitzsch and Nigel Gilbert, 83-114. Heidelberg: Physica-Verlag.

Author: Research Group for Non-Monotonic Logics and Formal Argumentation

Created: 2019-02-26 Tue 09:21
