January 2014
I have previously (and repeatedly) cited a paper by philosophers Allison Ross and Nafsika Athanassoulis that highlights the risk-taking nature of engineering practice and draws out some of the associated ethical implications. In two additional papers (“A Virtue Ethical Account of Making Decisions About Risk,” Journal of Risk Research, Vol. 13, No. 2, March 2010, pp. 217-230; “Risk and Virtue Ethics,” chapter 33 in Handbook of Risk Theory, Springer, 2012), the same authors discuss risk in a more general sense and argue convincingly that virtue ethics provides the most adequate approach for dealing with it.
Ross and Athanassoulis concede that “it would be convenient if there was a formula for making good and right decisions about whether, when, and what to risk.” This is essentially what the dominant modern ethical theories purport to offer: definitive guidance derived from universal principles, such as assumed duties and obligations (deontology) or assessment of anticipated outcomes (consequentialism). By contrast, virtue ethics recognizes that any truly substantive ethical inquiry will lead to “a complex, varied, and imprecise answer that cannot be captured in an overriding rule.”
When it comes to risk, Ross and Athanassoulis raise specific objections to consequentialism. Evaluations based only on what ultimately happens ignore the contributions of luck; for example, “avoiding the consequences of one’s recklessness does not make one any less responsible for it.” The alternative of assigning probabilities is problematic at best, and the corresponding utilitarian calculation often “clashes with our sense of fairness with respect to the equitable distributions of the burdens of risk taking.” Furthermore, it “does not allow room for differentiating between the bearers of risks and benefits,” who may not be the same parties.
Virtue ethics shifts the focus from individual actions to patterns of behavior – “choices that people make, those choices that are reaffirmed over time, and those choices that express their deeply held values and beliefs.” It is thus concerned primarily with someone’s long-term attitude toward risk, especially with respect to the potential impacts on the well-being of others. The central concept is character, defined by Athanassoulis as “the set of stable, permanent, and well-entrenched dispositions to act in particular ways.” These dispositions qualify as virtues when they enable and incline someone “to respond well to whatever situation is encountered.”
The circumstances of greatest interest to Athanassoulis and Ross are those in which a person – say, an engineer – must intentionally make “choices that involve risk to others”; i.e., when all of the following conditions hold:
- The person is deliberating whether to take a certain action.
- The person cannot guarantee the outcome of that action; there are multiple possibilities, one or more of which would affect others.
- The person is able to estimate (at least roughly) the likelihood of various outcomes.
- Some outcomes are desired, while others are unwanted (by the person and/or others).
- The person perceives the prospect of a positive outcome as outweighing the danger of a negative one.
According to virtue ethics, the last item is crucial and cannot simply be the product of a straightforward cost-benefit analysis. Instead, it requires “a state in which the faculties of perception, motivation, thought, and reason seamlessly interact” to discern the relevant contextual features and properly take them into account – i.e., the exercise of practical judgment or phronesis. Since what subsequently transpires may not be entirely within the person’s control, what matters from an ethical standpoint is the quality of the decision at the time when it is made.
In other words, a decision to take a risk is a good one whenever it is based on defensible grounds, regardless of the actual results. Athanassoulis and Ross suggest that this criterion is usually satisfied when someone has “a clear and accurate view of the situation” and produces “a proportionate, rational response.” The underlying motives – fear, desire for pleasure, etc. – are not necessarily good or bad in themselves; what is important morally is “how, when and why we are moved” by them.
Nevertheless, Ross and Athanassoulis acknowledge a prominent place for emotions in the whole process: “Decisions about risk that proceed from a good character involve emotional responses, which are integral to firm and stable dispositions to virtue … The person of practical wisdom is someone who has the appropriate emotions, to the right degree at the right time.” Such sentiments may seem out of place in an engineering magazine; after all, engineers generally view themselves – and are widely viewed by others – as paragons of unbiased analysis and dispassionate design. But is this an accurate picture? And if so, should it be?
The framework that I have proposed for applying virtue ethics to engineering practice identifies not only objectivity and honesty, but also care, as moral virtues of engineering. Is it possible for engineers to exhibit genuine care for the people who will be affected by their work while not experiencing any feelings toward them whatsoever? Can we be completely indifferent and still “hold paramount the safety, health, and welfare of the public,” as stipulated by the most fundamental canon in our codes of ethics? Perhaps emotions should instead play a more explicit role in our decision-making about risk.
In summary, according to Athanassoulis and Ross, “a decision to risk is a complex decision which involves the bringing together of personal reasons for acting, moral reasons for acting and a whole range of facts … Good judgements require phronesis and sensitivity and these are skills that are acquired and internalised through a process of observation and emulation of good exemplars, practice and reflection.” As engineers, we are routinely confronted with such decisions; are we going about them in the right way and preparing ourselves accordingly?▪