Forget the Economic Experts – Research Says You Are Just As Good

Geoff Colvin says we can ditch the experts.

You have been a world-class sap for years. Why? For listening to the economic and political forecasts of experts. We in the media have been irresponsible fools for reporting those forecasts. And the experts themselves? Delusional egomaniacs–and maybe even con artists.

I didn't always think this way. But I've been reading a book that marshals powerful evidence to make this case. For all of us in the world of business, economics, and capital markets–a world that often turns on the judgments of experts–the question is whether we're brave enough to face these uncomfortable facts.

The author, Philip Tetlock, reached this conclusion after crunching the numbers on 82,361 forecasts.

Experts don't actually exist. Specifically, experts were no better than nonexperts at predicting the future. They weren't even as good as computer programs that merely extrapolate the past. The best experts could not explain more than 20% of the variability in outcomes, but crude algorithms could explain 25% to 30%, and sophisticated algorithms could explain 47%. Consider what this means. On all sorts of questions you care about–Where will the Dow be in two years? Will the federal deficit balloon as baby-boomers retire?–your judgment is as good as the experts'. Not almost as good. Every bit as good.
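To make those "variance explained" percentages concrete: they are coefficients of determination (R²), the share of the ups and downs in outcomes that a set of predictions accounts for. Here is a minimal sketch, with invented data and the crudest possible extrapolation rule (assume next period looks like this period) – purely illustrative, not a reproduction of Tetlock's methods:

```python
import random

random.seed(0)

# Synthetic "outcome" series: a trend plus noise. Invented data --
# nothing here reproduces Tetlock's dataset or analysis.
outcomes = [0.5 * t + random.gauss(0, 5) for t in range(50)]

# A crude extrapolation "algorithm": predict that next period's value
# equals this period's value (a persistence forecast).
predictions = outcomes[:-1]
actual = outcomes[1:]

def variance_explained(pred, actual):
    """Coefficient of determination, R^2: the share of variance in the
    actual outcomes that the predictions account for."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for p, a in zip(pred, actual))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

print(f"persistence forecast R^2: {variance_explained(predictions, actual):.2f}")
```

Even this mindless rule explains a respectable chunk of the variance on trending data, which is the uncomfortable benchmark the experts failed to beat.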

But wait, there is more. Not all experts are equally bad. The worst are the ones you are most likely to listen to.

The awfulness of Tetlock's experts was almost uniform whether they had doctorates or bachelor's degrees, lots of experience or little, access to classified data or none. He found but one consistent differentiator: fame. The more famous the experts, the worse they performed.

One thing to point out is that I have not read this book, and don't know the details of the research beyond what was presented in the article. Another thing to note is that this applies to forecasting, not problem solving. What is the difference? You may be just as accurate as your mechanic at estimating how long your car will last, but that doesn't mean you should be the one to fix it when it breaks down. Applied to economics, this means the experts are no better than you at predicting interest rates or other economic indicators, but they may be better able to analyze and explain what happened after the fact.

So how does this apply to your business? Well, for starters, don't bet the farm on any kind of forecasting, be it for finance, sales, or marketing. VCs put so little trust in the revenue projections of entrepreneurs because they are so wrong so much of the time. Marketing departments should likewise take their forecasted product demand with a grain of salt. Secondly, acknowledge the probabilistic nature of forecasting, and have a plan for what to do if you are wrong.

The main lesson to take away from this is that chance matters. Some things are random, and you have to acknowledge that randomness in your decision making. We rarely do that.

If this topic interests you, I would highly recommend Fooled by Randomness, a great book by Nassim Taleb that I read upon the recommendation of Barry Ritholtz.

UPDATE: A commenter points to this review, which is much longer and includes the following interesting section.

But he does believe that he discovered something about why some people make better forecasters than other people. It has to do not with what the experts believe but with the way they think. Tetlock uses Isaiah Berlin's metaphor from Archilochus, from his essay on Tolstoy, "The Hedgehog and the Fox," to illustrate the difference. He says:

Low scorers look like hedgehogs: thinkers who "know one big thing," aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who "do not get it," and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible "ad hocery" that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.
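For the curious, the "scores" behind "high scorers" and "low scorers" come from comparing probability forecasts with what actually happened; the standard yardstick for that, which Tetlock's work draws on, is the Brier score. A toy sketch, with forecasts and outcomes invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between a forecaster's probabilities and what
    actually happened (1 if the event occurred, 0 if not).
    0.0 is a perfect score; always saying 50% earns exactly 0.25."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# A hypothetical confident pundit versus a "just say 50%" baseline,
# scored on the same three made-up events (two happened, one didn't):
happened = [1, 0, 1]
pundit = brier_score([0.9, 0.8, 0.7], happened)
baseline = brier_score([0.5, 0.5, 0.5], happened)
print(f"pundit: {pundit:.3f}, coin-flip baseline: {baseline:.3f}")
```

On these invented numbers the confident pundit scores 0.247 against the coin-flipper's 0.250 – barely better, which is roughly Tetlock's point about hedgehogs.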

  • Yes – very good book. But I liked this review more:

  • Pablo that’s a great review. Thanks.

    Another note on what this means. Credentials (advanced degrees, prestige appointments, etc.) are a sucker’s bet when you are deciding who to believe.

    Also write down the predictions of pundits and look them over later. You’ll be surprised (and mortified) by how often they say things that are so vague and impossible to measure that they are meaningless. Pundits depend on our attention deficits and short-term memories.

  • There’s no doubt that there is an excess of bad forecasting. The best way to fix it is to force forecasters to put their money where their mouths are – a public forecasting “betting pool” judged by independent judges who can review forecasts and score people.

    Actually, that sounds like a great idea for a new Web service!

  • Rob

    I think this would make a very interesting topic for your next book. You could look at what types of things experts are good at, and what they aren’t. It would help people decide when to listen to them. It would also help experts decide where to focus their time and education.

  • Jason

    I haven’t read the book either, but I did notice in the New Yorker review the notable absence of those involved in the “hard sciences” (I include biology in that grouping). Which makes sense – science is in fact very good at predicting results.

    My suspicion (and according to Tetlock, probably a wrong suspicion :)) is that in the case of the business/economic world, we have the process backwards. In science, you ask how something works, then why it works. For various political and personal reasons, in the business/economic world, you go in with a theory of why things work, then force the data to fit.

    The difference, as Taleb and others have alluded to, is that you cannot fool the truth forever, as competition will eventually force the pretenders out. Of course, TV experts don’t suffer from this problem specifically because they don’t make specific, well-defined predictions. Or rather, they fudge it with tons of qualifiers.

    Socrates really hit the nail on the head – he was the wisest man because he knew that he knew nothing.