That is the title of a highly recommended discussion being hosted this month at Cato Unbound:

  • The editors’ introduction is here.
  • Dan Gardner and Philip Tetlock’s lead essay, ‘Overcoming Our Aversion to Acknowledging Our Ignorance’, is here.
  • Robin Hanson’s reaction essay, ‘Who Cares About Forecast Accuracy?’, is here.
  • John Cochrane’s reaction essay, ‘In Defense of the Hedgehogs’, is here.
  • Bruce Bueno de Mesquita’s reaction essay, ‘Fox-Hedging or Knowing: One Big Way to Know Many Things’, is here.

Some highlights:

Every year, corporations and governments spend staggering amounts of money on forecasting, and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren’t. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods. Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.”

-Gardner and Tetlock

Even in business, champions need to assemble supporting political coalitions to create and sustain large projects. As such coalitions are not lightly disbanded, they are reluctant to allow last minute forecast changes to threaten project support. It is often more important to assemble crowds of supporting “yes-men” to signal sufficient support, than it is to get accurate feedback and updates on project success. Also, since project failures are often followed by a search for scapegoats, project managers are reluctant to allow the creation of records showing that respected sources seriously questioned their project.

-Hanson

Now “forecasting,” as Gardner and Tetlock characterize it, is an attempt to figure out which event really will happen, whether the coin will land on heads or tails, and then make a plan based on that knowledge. It’s a fool’s game.

Once we recognize that uncertainty will always remain, risk management rather than forecasting is much wiser… The good use of “forecasting” is to get a better handle on probabilities, so we focus our risk management resources on the most important events. But we must still pay attention to events, and buy insurance against them, based as much on the painfulness of the event as on its probability. (Note to economics techies: what matters is the risk-neutral probability, probability weighted by marginal utility.)

-Cochrane
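Cochrane’s parenthetical is worth unpacking for non-economists. In standard asset-pricing notation (this gloss is ours, not his): if state i occurs with physical probability p_i and marginal utility of consumption in that state is u'(c_i), the risk-neutral probability is

  q_i = \frac{p_i \, u'(c_i)}{\sum_j p_j \, u'(c_j)}

Painful states, where consumption is low and u'(c_i) is therefore high, carry more decision weight than their raw probability alone would suggest. That is precisely his point about buying insurance based as much on an event’s painfulness as on its probability.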

Good prediction—and this is my belief—comes from dependence on logic and evidence to draw inferences about the causal path from facts to outcomes. Unfortunately, government, business, and the media assume that expertise—knowing the history, culture, mores, and language of a place, for instance—is sufficient to anticipate the unfolding of events. Indeed, too often many of us dismiss approaches to prediction that require knowledge of statistical methods, mathematics, and systematic research design. We seem to prefer “wisdom” over science, even though the evidence shows that the application of the scientific method, with all of its demands, outperforms experts.

-Bueno de Mesquita

2 Responses to *What’s Wrong with Expert Predictions*

  1. Tapir says:

    One could ask why Darwinian forces don’t allow scientific methods to thrive. Perhaps the political cost of failure is higher than the perceived or real value of success.

  2. [...] further reflection on Cato Unbound’s What’s Wrong with Expert Predictions debate (see here and here) is that Gardner and Tetlock are correct in the aggregate while Cochrane is correct [...]
