Experts Put to the Test: Are Their Predictions Better Than Random Guesses?
If scientists can speak with such confidence about events that happened a billion years ago, surely their predictions about scientific developments in the near future should be accurate, right?
It's rare for a social scientist to turn the critical spotlight of social science on the discipline itself. Professor Philip Tetlock, a political scientist at the University of Pennsylvania, is one such individual. He is credited with the most thorough examination to date of how well self-proclaimed experts in the social sciences can actually predict future events.
Over 20 years, from 1984 to 2003, Tetlock carried out extensive prediction competitions. He gathered nearly 300 experts from various fields, including academics, government officials, and journalists, and asked them to make general forecasts across different political and economic scenarios, including those within their fields of expertise. For example, Tetlock asked the experts to choose among three possible outcomes for a country's economy: growth, status quo, or decline.
In total, Tetlock examined over 80,000 predictions, and the results were grim. Forecasts a few years out were barely better than chance, showing no discernible expertise or professionalism, even within the forecasters' supposed areas of specialty. The experts would have scored better had they simply treated all three outcomes as equally likely.
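To see why "all three outcomes are equally likely" can beat a confident expert, consider a Brier-style scoring rule of the kind used in forecasting research, where lower scores are better. The Python sketch below is purely illustrative: the probabilities and the 40% hit rate are assumptions made for the example, not Tetlock's data.

```python
# Illustrative sketch: Brier-style scoring of three-outcome forecasts
# (growth / status quo / decline). Lower scores are better.
# The forecast probabilities and the 40% hit rate are assumptions for
# illustration only.

def brier(forecast, outcome_index):
    """Multi-category Brier score: sum of squared errors between the
    forecast probabilities and the realized outcome (1 for the outcome
    that happened, 0 for the others)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

uniform = [1/3, 1/3, 1/3]      # "all outcomes equally likely"
confident = [0.8, 0.1, 0.1]    # an overconfident expert's forecast

# The uniform forecast earns the same score no matter what happens:
print(round(brier(uniform, 0), 3))     # 0.667

# The confident forecast is great when right, terrible when wrong:
print(round(brier(confident, 0), 3))   # 0.06  (favored outcome happened)
print(round(brier(confident, 1), 3))   # 1.46  (a different outcome happened)

# If the expert's favored outcome occurs only 40% of the time,
# the expected score is worse than simply saying "I don't know":
expected_expert = 0.4 * brier(confident, 0) + 0.6 * brier(confident, 1)
print(round(expected_expert, 3))       # 0.9  > 0.667
```

The point of the sketch: a confident forecast pays off only if the forecaster is right often enough to offset the heavy penalty for the misses; otherwise the humble uniform forecast wins.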
Tetlock writes: "The forecasts of experts in their own fields were no more accurate or better informed than those of dilettante trespassers from other fields... when it comes to knowledge, we reach the point of diminishing returns disturbingly quickly." Moreover, and perhaps a little sad to hear, the higher a forecaster's public profile, the worse their predictions tended to be, as if their real "expertise" lay in getting predictions wrong.
Another intriguing finding concerns the experts' overconfidence. Even though their predictions were no better than those of amateurs, the self-proclaimed experts displayed far greater confidence. When Tetlock asked them to explain their choices, the "experts" offered more elaborate and complex rationales than the amateurs did. But measured against actual outcomes, these stories turned out to be little more than bias, pretension to knowledge, or outright self-deception.
So social science "experts" certainly sound impressive. They command jargon, theories, statistics, models, and vast domain-specific knowledge. Yet in practice, they hold no predictive advantage over non-experts.
Tetlock's conclusions ought to be groundbreaking. He notes: "In this age of academic hyperspecialization, there is no reason to assume that contributors to prestigious journals (distinguished political scientists, area specialists, economists, and so on) are any better than journalists or attentive readers of The New York Times at reading emerging situations... the analytical skills that underpin academic renown do not translate into advantages in prediction or in revising one's beliefs..."
It's a sobering finding and calls for a bit of humility...
