Yes, No, Maybe

In the abstruse world of decision theory, it’s not what you think but how you think it. By Steve Weinberg. Illustrations by Brian Stauffer

“Who shall decide when doctors disagree / And soundest casuists doubt, like you and me?” So asked the poet Alexander Pope in 1714, his plaintive lines echoing the sort of uncertainty thinking people, down through the ages, have struggled to overcome.

Professional philosophers, a class of scholars who might best be described as thinking persons’ thinkers, are perhaps more prone than most to ponder the nature of doubt and decisions.

Consider Paul Weirich, a philosophy professor specializing in “normative decision theory” at MU. Back in 1999, Weirich was already a successful scholar with an impressive oeuvre focused on individual and group decision making. Now he faced a seemingly straightforward career choice, one that most ambitious people would consider a no-brainer: Should I accept the job of department chair?

Rule number one in philosophy: There is no such thing as a no-brainer.

Tyranny of Choice

As a normative decision theorist, Weirich is paid to think, lecture and write books about “how we ought to make decisions.” Rather than describe how things are, Weirich asks questions about how things ought to be; that is the “normative,” or “prescriptive,” part of his research.

The idea is that human beings are not now, nor will they ever be, perfect deciders. Making decisions is hard: Choices are often complex; available information is incomplete or unreliable; multiple objectives are often at odds with one another; well-meaning people, or groups of people, frequently disagree on key elements of the question at hand. Normative decision theory seeks to articulate a way forward, not by recommending answers but, as the scholar Robert Clemen has written, by providing “insight about the situation, uncertainty, objectives and trade-offs, possibly yielding a recommended course of action.” On a practical level this means using theory to develop tools that guide people toward better choices.

Unlike some forms of philosophical inquiry, decision theory and its practical applications have the potential for plenty of real-world influence. Weirich’s research, for instance, aims to encourage rational decision making within government agencies, corporate board rooms and, indeed, university administrations. Not surprisingly, it turns out that advocating rationality is not always popular in these circles. As Pope’s lines suggest, people, especially people with high IQs, often fail to agree on the rational way. This is particularly true in situations where there are seemingly rational grounds for deciding in favor of both sides of a question.

Weirich is sympathetic. He admits, in fact, that his own decisions haven’t always come easy. Which brings us back to the MU chairmanship. Weirich says he grappled with the pluses and minuses of heading his department. The upside, he says, “was a chance to help the department move forward with good hires in core areas of philosophy such as epistemology, metaphysics and ethics.” The downside? “Putting my research on hold. I knew being chair is a time-consuming job.”

After much deliberation, he says, helping “improve the department won the conflict. Its improvement makes my career better, too.”

All that thinking about making rational decisions could be imagined as brain-achy—especially when the problem solving expands from individuals to groups. This is precisely the brain ache Weirich has invited with his most recent research.

Democracy in Action

Nowhere is the burden of group decision making more pronounced than in government, where our democracy demands the participation of many. Weirich’s particular interest involves the decision-making processes used by government regulators of human health and safety. As one can imagine, there are few areas in government more fraught with disagreement. Here, even the analysis of decision making—much less the regulatory decisions themselves—is subject to fierce, often public, debate.

Weirich is up to the task. One of his most frequent sparring partners is Cass R. Sunstein, a prolific writer on the law faculties of both the University of Chicago and Harvard University who currently serves President Barack Obama as administrator of the Office of Information and Regulatory Affairs, the White House “regulations czar.”

In the journal Behavioral and Brain Sciences, Sunstein posited that individuals, including government regulators, sometimes make bad decisions because of reliance on heuristics—mental and moral shortcuts colloquially known as “rules of thumb.”

Weirich responded in the same journal, arguing that Sunstein’s paradigm was too simple. “Sunstein’s map of moral reasoning omits some prominent contours,” Weirich wrote. “The simple heuristics he suggests neglect a reasoner’s attempt to balance the pros and cons of regulating a risk.”

Weirich drew on examples of automobile safety (including air bags), emissions trading by corporations that pollute the atmosphere, and food from genetically engineered crops to fuel the debate with Sunstein.

“Sunstein presents the heuristic ‘do not tamper with nature’...to explain public support for regulation of genetic engineering,” Weirich commented, then noted other explanations that might hold more validity. “For example, the public may not trust scientific assessments of risk...because they are sponsored by industries heavily invested in ag biotechnology.”

Sunstein responded critically, noting Weirich “provides no evidence to support his views about how ordinary people reason, and some of his claims seem to me implausible.” The exchange made for a fascinating disagreement with undeniable implications for government policy making.

Genetically modified food—a big deal in an agricultural powerhouse state like Missouri—has served as the foundation for much of Weirich’s involvement in the real world.

A book he edited for Oxford University Press constitutes a compelling example. The book, an MU Research Council-supported collection of essays titled Labeling Genetically Modified Food: The Philosophical and Legal Debate, is an extension of an international conference held on the MU campus in 2005.

The intensity of the debate increased when the U.S. government adopted a laissez-faire attitude, in contrast to the European Union, where a governing body decided to require labeling. Such divisions among nations complicate policy making in developing countries. Their citizens might benefit from crops genetically modified to improve nutrition, bolster resistance to disease and drought, and cut use of pesticides. But governments of some developing nations nevertheless resist making genetically modified foods easily available to their citizens.

In his contribution to the book, Weirich considers the question of whether the USDA should mandate labeling for food products with genetically modified ingredients, or whether companies should, on the basis of consumer preference, be allowed to determine what information appears on labels.

Weirich grapples with the much-debated dilemma by examining the costs and benefits within the context of risk to consumers. “What proofs of risk reduction justify [an FDA] regulation?” he asks before introducing his chain of logic into the debate. “Should a government regulate to reduce risk only after scientific investigation yields a full and conclusive assessment of the risk?”

Not necessarily, Weirich posits. “Democratic principles support regulation to reduce imperfectly understood risks. Citizens may reasonably seek relief from those risks,” he notes. “A regulatory agency making decisions on behalf of the public should blend expert opinion and the public’s interests to justify steps in areas such as food labeling.”

The Answer Man

Growing up in the Chicago suburb of Arlington Heights, Weirich chose St. Louis University after high school graduation, intending to major in physics. Instead he earned a bachelor’s degree in philosophy. “I switched to philosophy because the theories of that field held special attraction for me,” Weirich says. “Logic is a big part of philosophy, and modern logic is math oriented.” The flexibility of his undergraduate mind is suggested not only by his fascination with philosophy and physics, but also by his minor in French and his membership in a mathematics honor fraternity.

Weirich continued his higher education at the University of California-Los Angeles, where his doctoral dissertation carried the title Probability and Utility for Decision Theory.

Having taught part-time in the UCLA philosophy program, Weirich felt he had found his calling in the classroom. His teaching led him to the University of New Orleans, the University of Rochester and California State Polytechnic University in Pomona before he heard about an opening at the University of Missouri. In 1988, Weirich interviewed on campus, where he already knew a young philosophy professor named Peter Markie. (The philosophy faculty then numbered eight. The size of the department would more than double in the next two decades.)

After joining the University of Missouri faculty, Weirich began to expand his thinking so that it filled books. The first appeared in 1998: Equilibrium and Rationality, from Cambridge University Press, explores the principle of maximizing expected utility during decision making. The newest book, Collective Rationality, appeared this year from Oxford University Press.
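The principle at the center of that first book, maximizing expected utility, can be illustrated in a few lines of code. The sketch below is not drawn from Weirich’s work; the options, probabilities and utility numbers are invented, loosely echoing the chairmanship decision described earlier, simply to show how the principle ranks options.

```python
# Minimal sketch of choosing by maximizing expected utility.
# The options, probabilities and utilities are invented for
# illustration; they are not taken from Weirich's books.

options = {
    "accept_chairmanship": [
        # (probability of outcome, utility of outcome)
        (0.6, 7),   # department improves, research slows
        (0.4, 2),   # department stalls anyway
    ],
    "decline_chairmanship": [
        (0.9, 5),   # research continues as before
        (0.1, 3),   # department drifts without strong leadership
    ],
}

def expected_utility(outcomes):
    """Probability-weighted average of the utilities of the possible outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda name: expected_utility(options[name]))
for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.2f}")
print("recommended option:", best)
```

With these made-up numbers, accepting the chairmanship carries the higher expected utility; change the probabilities or utilities and the recommendation can flip, which is exactly the kind of trade-off the principle is meant to make explicit.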

Although decision theory might seem commonsensical to non-specialists, Weirich’s academic vocabulary is dense. The logic undergirding his conclusions takes intense concentration to follow. Other academics in Weirich’s field are practiced in parsing the logic, and they come away impressed.

“Paul Weirich is a leading figure in those areas of philosophy that relate to decision theory and game theory, clearly ranking among the top four or five philosophers working in these areas,” says James Joyce, a University of Michigan philosophy professor. “He has been instrumental in shaping the character and the direction of research in the field.”

Weirich’s colleagues at MU marvel not only at his influence, but also at his research productivity. In a letter endorsing Weirich for an award, philosophy professors Peter Markie, Andrew Melnyk and Peter Vallentyne say his output “is truly stunning.”

“Here it is important to keep in mind that in philosophy, unlike in the sciences, joint authorship is relatively rare, and most productive scholars average only about one solid article per year, often with no books published,” they wrote. Weirich, on the other hand, has published more than 50 articles and book chapters, edited one book, and is the author of four books under his own name. He also publishes in his field’s most prestigious journals and academic book presses.

In the classroom, Weirich says, “I try to help students see the importance for decision making of understanding probability and how it works. I also try to show them how to reason strategically—to think about how others might respond to their decisions.”

Collectively Rational

Given the debate over reforming health care across the United States, Weirich finds it timely to explain his sophisticated concepts within a health care context.

“Making decisions about medical technology that impact public policy and federal mandates requires collective rationality,” he says. “Many groups, including experts and the public, have stakes in medical technology, and their agendas might not be the same. Unlike individual rationality, collective rationality requires coordination and communication among individual entities. Standards of rationality should be consistent when making decisions that have important consequences.”

Even when Weirich can help disparate groups reach rational decisions by following the principles he sets out in his books, he understands that “it is impossible to eliminate all risk.” A decision to take a risk that turns out badly may still have been rational, and therefore would not deserve the condemnation legitimately reserved for irrational decisions. Raising a question that perhaps sounds rhetorical, Weirich asks, “Should we eliminate a medical technology that has a very small chance of doing harm even if it will help many others? Having standards of rationality will help agencies make these decisions.”

Knowing that his new book could benefit decision making among nurses, often caught in the real-world crossfire of the health care debate, Weirich explained its utility in a presentation titled “Collective Rationality in Nursing.” Suppose, he says, that a committee of nurses at a hospital “conducts two rounds of voting. In the first round, one majority votes for assigning nurses to patients to match a nurse’s specialty with a patient’s illness. In the second round, another majority votes for a patient having the same nurse throughout a stay in the hospital. Then, if a patient’s illness changes during a stay in the hospital, the pair of policies yields an inconsistent assignment of nurses to the patient.”
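The conflict Weirich describes can be made concrete with a small sketch. The code below is a hypothetical rendering of his two-vote scenario; the nurse names, specialties and data structures are invented for illustration.

```python
# Toy illustration of the inconsistency Weirich describes:
# policy 1 matches a nurse's specialty to the patient's current illness,
# policy 2 keeps the same nurse for the whole stay. If the illness changes,
# the two policies recommend different nurses. All names are invented.

nurses_by_specialty = {"cardiology": "Nurse A", "oncology": "Nurse B"}

def assign_by_specialty(illness):
    """Policy adopted in the first vote: match specialty to illness."""
    return nurses_by_specialty[illness]

def assign_same_nurse(admitting_nurse):
    """Policy adopted in the second vote: keep the admitting nurse."""
    return admitting_nurse

# A patient is admitted with a cardiac illness, then develops a second illness.
admitting_nurse = assign_by_specialty("cardiology")    # Nurse A
later_by_specialty = assign_by_specialty("oncology")   # Nurse B
later_same_nurse = assign_same_nurse(admitting_nurse)  # Nurse A

print("Policy 1 now recommends:", later_by_specialty)
print("Policy 2 now recommends:", later_same_nurse)
print("Consistent?", later_by_specialty == later_same_nurse)  # False
```

Each vote looks reasonable on its own; only when the two policies are applied to the same changing case does the committee’s combined position become inconsistent.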

Weirich knows that well-intentioned committees sometimes fail to grasp how separate decisions they make fail to meet all applicable standards of rationality when examined in toto. “A theory of collective rationality offers direction,” he says.

Nurses sometimes cannot find time to meet, given their urgent duties. Or the meeting might occur, but feel rushed. Weirich suggests strategies to minimize those shortcomings, such as selecting a leader who will make the final decision. The selected leader might be a physician. Other members of the group—nurses, social workers, dietitians and family members of patients—“may voice opinions to inform the physician before she adopts a plan for the group,” he says. “Also, if a group’s members disagree because their information differs, then they may pool their information and use shared rules of reasoning to reach a consensus.”

Because achieving collective rationality is rarely easy, Weirich suggests preparation. “For example, a group of nurses may appoint some leaders to identify and take the first step toward optimal coordination,” he says. “Two or three nurses contemplating a joint act may reason strategically about how other nurses will respond to their act. Because of coordination’s value, some nurses moving toward a way of coordinating will attract other nurses to that way of coordinating. Also, the group may foster a culture of trust so that nurses who boldly take the first step toward optimal coordination are confident that others will follow.”
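One way to picture why a trusted first step helps is a bare-bones coordination game. The payoffs and scheme names below are invented and are not drawn from Weirich’s book; the point is only that once one party visibly commits, matching that choice is the other party’s best reply.

```python
# A two-nurse coordination game, sketched with invented payoffs.
# Both nurses prefer coordinating on either scheduling scheme to
# failing to coordinate, so once one nurse visibly commits to a
# scheme, following her is the other's best response.

schemes = ["scheme_X", "scheme_Y"]

def payoff(choice_1, choice_2):
    """Each nurse gets 1 if the two choices coordinate, 0 otherwise."""
    return (1, 1) if choice_1 == choice_2 else (0, 0)

first_mover_choice = "scheme_X"  # a leader takes the first step

# The second nurse picks the reply that maximizes her own payoff.
best_reply = max(schemes, key=lambda s: payoff(first_mover_choice, s)[1])
print("First mover chose:", first_mover_choice)
print("Best reply for the other nurse:", best_reply)  # scheme_X
```

In this toy game the first mover’s choice does not even need to be the better scheme; its value lies in giving everyone else a clear choice to coordinate on, which is why Weirich emphasizes trust that others will follow.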

In the end, Weirich’s theorizing has led him to what might strike lay readers as unbounded optimism. “In every decision problem, some action is rational,” he says. “Given this judgment, one must either abandon the position that stability is necessary for rationality, or else revise the standard of stability. The best response, I believe, is to revise the standard so that in every decision problem some action is a stable choice.”

