
How the best forecasters predict events such as election outcomes

Most of us do not spend our lives forecasting the future, but any decision we make depends, in part, on our implicit predictions
Last Updated 21 October 2020, 05:29 IST

The classic interview question that every applicant dreads: tell me about a time when you changed your mind. Most interviewees launch into some canned story about a sudden revelation—an aha moment of transformative insight—intended to demonstrate the candidate’s candor, open-mindedness and analytic skills. The interviewer nods along, weighing whether the story is memorable and thoughtful enough to score the candidate a job offer.

But is suddenly changing your mind really a mark of insight? Major revelations make for memorable stories, but our research shows they rarely represent how the best analytic minds revise their beliefs. Rather than doing a 180, those who excel at making accurate predictions tend to change their beliefs gradually. They revise their predictions to reflect new information, but they do so slowly, comparing it with the information they had before.

Most of us do not spend our lives forecasting the future, but any decision we make depends, in part, on our implicit predictions. Who we vote for, what jobs we take and even whether we carry an umbrella out the door all reflect our best guesses about the future: what a political candidate will do if elected, what job is the best fit, the chance of rain that day.

To understand the science of accurate predictions, the Good Judgment Project, a research effort led by Barbara Mellers and Philip Tetlock, where I was a postdoctoral scholar, recruited thousands of volunteer forecasters and asked them nearly 500 questions about the future. The questions were generally geopolitical, on topics such as “Will Angela Merkel win the 2013 election for chancellor of Germany?” or “Will there be a significant outbreak of H5N1 in China in 2012?” The participants’ answers were probabilistic. For example, a forecaster might predict that Merkel had an 80 percent chance of winning reelection. Over the course of the four-year study, sponsored by the US Intelligence Advanced Research Projects Activity (IARPA), the team collected more than one million predictive judgments.
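Probabilistic forecasts of this kind are commonly graded with the Brier score: the squared gap between the stated probability and the eventual all-or-nothing outcome, with lower scores indicating better forecasts. A minimal Python sketch, using the Merkel example:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome; lower is better."""
    return (forecast - outcome) ** 2

# A forecaster gives Merkel an 80 percent chance of reelection, and she wins.
print(round(brier_score(0.80, 1), 2))  # 0.04 -- a confident, correct forecast scores well
print(round(brier_score(0.40, 1), 2))  # 0.36 -- a hesitant forecast scores much worse
```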

Out of the thousands of participants, the Good Judgment Project identified so-called “superforecasters,” those who demonstrated uncanny skills in predicting the future and even did so better than intelligence analysts with access to classified information. To become a superforecaster, volunteers had to make consistently accurate predictions across dozens of questions over a period of at least nine months. Accuracy was the gold standard for talent spotting, but watching the results play out took time. So we hunted for other early clues about who might be especially good at prediction. And one promising clue was the way that forecasters updated their beliefs.

For years, psychologists, political scientists and businesspeople alike have studied how individuals change their mind—and, more specifically, how they revise the probabilities in their head. An example of this revision is when we see a dark sky from the window at noon and project that the chances of rain have gone up from 40 to 80 percent. Changing one’s assessment can be a sign of open-mindedness. As Amazon founder Jeff Bezos has noted, people who are right a lot also change their mind a lot.
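One way to formalize such a revision is Bayes’ rule, which weighs the prior probability against how likely the new evidence would be under each possibility. Here is a sketch for the dark-sky example; the 80 percent and 20 percent likelihoods are assumptions chosen purely for illustration:

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of an event after observing evidence, via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior chance of rain: 40 percent. Assume a dark sky at noon appears on
# 80 percent of rainy days but only 20 percent of dry ones.
print(round(bayes_update(0.40, 0.80, 0.20), 2))  # 0.73 -- close to the 80 percent revision above
```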

But big changes—for example, jumping from 40 to 80 percent—are not necessarily a sign that someone is open-minded. Such sudden reversals may instead be triggered by recency bias, the tendency to overemphasize new information. Sudden changes may also be caused by the availability heuristic, which makes us overemphasize facts and stories that come to mind easily, although not necessarily those that help us predict what’s ahead. Good forecasters resist these tendencies and avoid overreacting to new or particularly memorable information.

The best forecasters must learn to navigate the twin risks of underreaction and overreaction. To find these individuals, we measured forecasters’ belief-updating tendencies on three criteria: frequency, confirmation and magnitude. Frequency is how often a person changes their beliefs about a question. Confirmation propensity is the habit of confirming one’s previous beliefs and sticking with the original answer. And magnitude is how far each revision moves the estimate on the probability scale.
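As a rough illustration, all three measures can be computed from a forecaster’s sequence of probability estimates on a single question. The definitions below are simplified stand-ins rather than the study’s exact formulas:

```python
def updating_profile(forecasts, eps=1e-9):
    """Simplified belief-updating measures from a series of probability estimates."""
    revisions = [b - a for a, b in zip(forecasts, forecasts[1:])]
    changes = [r for r in revisions if abs(r) > eps]
    return {
        # frequency: how many times the forecaster actually moved the estimate
        "frequency": len(changes),
        # confirmation propensity: share of opportunities where they stood pat
        "confirmation": 1 - len(changes) / len(revisions) if revisions else 0.0,
        # magnitude: average size of a revision on the probability scale
        "magnitude": sum(abs(r) for r in changes) / len(changes) if changes else 0.0,
    }

# Frequent, small revisions: the pattern the best forecasters showed.
print(updating_profile([0.60, 0.65, 0.65, 0.70, 0.72]))
# {'frequency': 3, 'confirmation': 0.25, 'magnitude': ~0.04}
```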

We found that individuals with high confirmation propensity were generally inaccurate forecasters—they tended to assign high probability to events that did not happen and low probability to events that did occur. In contrast, those who updated their beliefs frequently were highly accurate forecasters. Finally, individuals who updated their beliefs in small increments outperformed their peers who made more drastic changes.

Like Aesop’s proverbial tortoise, frequent updaters showed better subject-matter knowledge, more open-mindedness and a higher work rate. They were not always spot on with their initial forecasts, but their willingness to change their opinion allowed them to excel over time.

In contrast, incremental updaters resembled Aesop’s hare: they were not especially hardworking, knowledgeable or open-minded. But they scored well on tests of fluid intelligence—which included questions on logical, spatial and mathematical reasoning—and were unusually accurate with their initial estimates for a question.

The best forecasters combined the good qualities of both the tortoise and the hare. The pattern of frequent incremental forecast revisions was a reliable mark of being good at prediction.

How forecasters update their beliefs is a deeply personal process, drawing on different thinking styles, life philosophies and predictive abilities. Even so, belief-updating techniques can be taught. We know because we tested them. We randomly assigned roughly half of the subjects to receive a one-hour predictive-training intervention, while the other half, the control group, received no training. The forecasters who received training subsequently updated their beliefs in more frequent, smaller steps and achieved better accuracy than the control group.

The training materials did not explicitly tell forecasters to make smaller updates. Rather, we provided general lessons that they could apply in their practice to balance their initial intuitions.

First, when we encounter opposing evidence, we often ignore one piece in favor of the other. For example, if we come across two election polls, one showing our favored candidate in the lead and the other showing that person trailing, most of us embrace the preferred poll and discount the other as inaccurate. As forecasters, however, our best tactic would be to average the two. Averaging takes little extra effort, but it does require us to compromise and entertain an idea that contradicts our beliefs.
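With hypothetical poll numbers, the averaging tactic is a one-line computation:

```python
# Two conflicting (hypothetical) polls for the same candidate.
favored_poll = 0.55     # the poll showing our candidate ahead
dismissed_poll = 0.45   # the poll we are tempted to discount

forecast = (favored_poll + dismissed_poll) / 2  # simple unweighted average
print(forecast)  # 0.5
```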

Second, training also helped participants overcome what psychologists Amos Tversky and Daniel Kahneman called the inside view: the tendency to focus on the unique aspects of each situation rather than comparing it with similar ones. As a result, we often give outsized weight to insignificant factors. For example, when predicting the outcome of an election, we might fixate on an indicator such as yard signs, even though the signs we see are unique to our particular town and situation.

Conversely, the outside view is the practice of examining historical data and treating a given case as one of many. We may ask, “How often does the US presidential candidate leading in mid-October go on to win in November?” To answer this question, we may then construct a reference class of, say, the past 10 presidential elections and count how many times the October poll leader won the general election. The resulting percentage—the base rate—is the outside-view answer. We can’t all be master forecasters, but by taking the outside view and averaging conflicting data, we can inch closer to making better predictions.
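A toy version of that base-rate calculation, with made-up outcomes standing in for the real election history:

```python
# Hypothetical reference class: for each of the past 10 US presidential
# elections, did the mid-October poll leader go on to win in November?
# (Illustrative True/False values only, not actual historical results.)
october_leader_won = [True, True, False, True, True, True, False, True, True, True]

base_rate = sum(october_leader_won) / len(october_leader_won)
print(f"Outside-view base rate: {base_rate:.0%}")  # Outside-view base rate: 80%
```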

If you are interested in practicing and improving your outside-view thinking skills, consider signing up for a National Science Foundation-sponsored study in which our team is working to predict if, and when, Covid-19 vaccines and treatments will pass clinical trials.
