Forget the Economic Experts – Research Says You Are Just As Good

Geoff Colvin says we can ditch the experts.

You have been a world-class sap for years. Why? For listening to the economic and political forecasts of experts. We in the media have been irresponsible fools for reporting those forecasts. And the experts themselves? Delusional egomaniacs–and maybe even con artists.

I didn't always think this way. But I've been reading a book that marshals powerful evidence to make this case. For all of us in the world of business, economics, and capital markets–a world that often turns on the judgments of experts–the question is whether we're brave enough to face these uncomfortable facts.

The author, after crunching the numbers on 82,361 forecasts, reached this conclusion.

Experts don't actually exist. Specifically, experts were no better than nonexperts at predicting the future. They weren't even as good as computer programs that merely extrapolate the past. The best experts could not explain more than 20% of the variability in outcomes, but crude algorithms could explain 25% to 30%, and sophisticated algorithms could explain 47%. Consider what this means. On all sorts of questions you care about–Where will the Dow be in two years? Will the federal deficit balloon as baby-boomers retire?–your judgment is as good as the experts'. Not almost as good. Every bit as good.
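To make concrete what a "computer program that merely extrapolates the past" looks like, here is a minimal sketch of one such baseline: an ordinary least-squares trend line projected one step ahead. This is an illustration of the general idea only; the article does not say which algorithms Tetlock actually used.

```python
def linear_extrapolate(history):
    """Fit a least-squares trend line to a series and project one step ahead."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    # The forecast is just the trend line evaluated at the next time step.
    return intercept + slope * n

# A series growing by roughly 2 per period:
print(linear_extrapolate([10, 12, 14, 16]))  # → 18.0
```

Crude as it is, a rule this simple explained more of the variability in outcomes than the best experts did.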

But wait, there is more. Not all experts are equally bad. The worst are the ones you are most likely to listen to.

The awfulness of Tetlock's experts was almost uniform whether they had doctorates or bachelor's degrees, lots of experience or little, access to classified data or none. He found but one consistent differentiator: fame. The more famous the experts, the worse they performed.

One thing to point out is that I have not read this book, and don't know the details of the research beyond what was presented in the article. Another thing to note is that this applies to forecasting, not problem solving. What is the difference? You may be just as accurate as your mechanic at estimating how long your car will last, but that doesn't mean you should be the one to fix it when it breaks down. Applied to economics, this would mean that the experts are no better than you at predicting interest rates or other economic indicators, but may be better able to analyze and explain what happened after the fact.

So how does this apply to your business? Well, for starters, don't bet the farm on any kind of forecasting, be it for finance, sales, or marketing. VCs put so little trust in the revenue projections of entrepreneurs because they are so wrong so much of the time. Marketing departments should likewise take their forecasted product demand with a grain of salt. Secondly, acknowledge the probabilistic aspect of forecasting, and have a plan for what to do if you are wrong.
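One way to take that probabilistic aspect seriously is to replace a single point forecast with a range of scenarios. The sketch below is a hypothetical illustration (the dollar figures and volatility are invented, not from the article): it simulates many plausible revenue outcomes and reports the spread, so the plan can cover the bad cases, not just the expected one.

```python
import random

def revenue_scenarios(base, volatility, trials=10_000, seed=42):
    """Simulate many plausible outcomes instead of trusting one point forecast."""
    rng = random.Random(seed)
    # Model each trial as the base forecast plus normally distributed noise.
    outcomes = sorted(base * (1 + rng.gauss(0, volatility)) for _ in range(trials))
    percentile = lambda p: outcomes[int(p * trials)]
    return {"p10": percentile(0.10), "p50": percentile(0.50), "p90": percentile(0.90)}

# Hypothetical numbers: a $1M base forecast with 30% volatility.
print(revenue_scenarios(1_000_000, 0.30))
```

The point is not the model, which is deliberately naive, but the output: a p10/p50/p90 range forces the question "what do we do if the low case happens?" in a way a single number never does.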

The main lesson to take away from this is that chance matters. Some things are random, and you have to acknowledge that randomness in your decision making. We rarely do that.

If this topic interests you, I would highly recommend Fooled by Randomness, a great book that I read upon the recommendation of Barry Ritholtz.

UPDATE: A commenter points to this review, which is much longer and includes the following interesting section.

But he does believe that he discovered something about why some people make better forecasters than other people. It has to do not with what the experts believe but with the way they think. Tetlock uses Isaiah Berlin's metaphor from Archilochus, from his essay on Tolstoy, "The Hedgehog and the Fox," to illustrate the difference. He says:

Low scorers look like hedgehogs: thinkers who "know one big thing," aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who "do not get it," and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible "ad hocery" that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.