Tuesday, September 6, 2011

The perils of prediction - Adventures in punditry - The Economist

Source: http://www.economist.com/blogs/freeexchange/2011/09/perils-prediction?fsrc=rscn/fb/wl/bl/adventuresinpunditry

The perils of prediction

Adventures in punditry

Sep 2nd 2011, 20:32 by E.G. | AUSTIN

JOURNALISTS have a professional obligation to produce reasoned analysis, and this occasionally leads us to take a step too far, into predictions. As a journalist wary of the hazards of forecasting, I was intrigued by the “Good Judgment Project”, a joint effort from the University of Pennsylvania and the University of California, Berkeley. One of its architects is Philip Tetlock, a political scientist who mildly traumatised the pundit class with his finding that, over a 20-year span, a group of “experts”—mostly political scientists and economists—were only slightly better at predicting political outcomes than Berkeley undergraduates or, for that matter, random guessing. Now Mr Tetlock and his colleagues are assembling teams of “forecasters”, who are being asked to complete some training and then compete in a government-sponsored forecasting tournament over the next four years.

The training is designed to encourage forecasters to think systematically about their biases and to actively anticipate different scenarios. I just completed the first round of training, and it was an interesting exercise. The premise of the scenarios it presented was that forecasters tend to be overconfident in their judgments. You might start out thinking there’s a 90% chance that Kim Jong-Il will still be in charge of North Korea at the end of next year, but after considering the possibility that he might lose elite support, or that Chinese subsidies might be withdrawn, you might end up thinking there’s only about a 74% chance.
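
As a rough illustration of the arithmetic behind that sort of adjustment, here is a short Python sketch using the law of total probability. The scenarios and every probability in it are invented for the example, not taken from the project’s training materials.

```python
# A toy illustration (all numbers invented, not from the project's materials) of
# how weighing alternative scenarios pulls down an overconfident estimate.

# Gut estimate before thinking through the alternatives
naive_estimate = 0.90

# Hypothetical scenarios: (probability of the scenario,
#                          probability the leader stays in power given that scenario)
scenarios = {
    "status quo holds":            (0.75, 0.88),
    "loss of elite support":       (0.15, 0.40),
    "Chinese subsidies withdrawn": (0.10, 0.20),
}

# Law of total probability: weight each conditional estimate by its scenario's likelihood
adjusted = sum(p_scenario * p_stays for p_scenario, p_stays in scenarios.values())

print(f"Naive estimate:    {naive_estimate:.0%}")
print(f"Adjusted estimate: {adjusted:.0%}")  # about 74% with these made-up numbers
```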

Interestingly, these scenarios mostly affected the confidence level of the forecaster, not the content of his or her prediction. So my reaction was that this kind of process may encourage more accurate predictions, but they may be more accurate only because they’re less specific. For example, if you aspire to be 90% sure of what you’re saying, after thinking through all the exogenous factors, you might predict that the price of Brent crude oil will be between $50 and $250 at the end of next year, rather than between $80 and $120.
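
The trade-off between confidence and specificity is easy to see in a toy calculation. The sketch below assumes, purely for illustration, that next year’s Brent price is lognormally distributed around $100 with roughly 30% log-volatility; none of those parameters comes from the article or from any real forecast.

```python
# A toy illustration (all numbers invented) of why demanding high confidence
# widens a forecast interval: model next year's Brent price as lognormal around
# $100 with ~30% log-volatility, then compare central intervals at 50% vs 90%.
from math import log, exp
from statistics import NormalDist

log_price = NormalDist(mu=log(100), sigma=0.30)  # assumed distribution, illustration only

def central_interval(dist, confidence):
    """Central interval of a lognormal price whose log follows `dist`."""
    lo = exp(dist.inv_cdf((1 - confidence) / 2))
    hi = exp(dist.inv_cdf((1 + confidence) / 2))
    return lo, hi

for c in (0.50, 0.90):
    lo, hi = central_interval(log_price, c)
    print(f"{c:.0%} interval: ${lo:.0f} to ${hi:.0f}")

# With these made-up parameters the 50% interval is roughly $82 to $122, while the
# 90% interval stretches to roughly $61 to $164: more likely to contain the outcome,
# but much less useful to anyone deciding what to do at $100 a barrel.
```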

That may be more accurate, but it’s also less useful. There are certain decisions you would take if you thought oil would cost $100 a barrel, and they’re different from the decisions you would take if you effectively had no clue. And with regard to the political economy of punditry, we are sometimes asked to take rather stronger stances. My editor got a bit exasperated with me earlier this summer, when I was firmly arguing that Texas governor Rick Perry might run for president; when pressed, all I could do was clarify the conditions under which he would run and the factors that might keep him on the sidelines, but for some reason this wasn’t considered a compelling line of argument.

More to the point, though, sometimes less confident predictions are nonetheless worth making. When we’re writing about politics, or the economy—areas in which changes are routinely expected, and the anticipation of future developments affects people’s behaviour in the present—making them may even be necessary. The perceived likelihood of those future developments affects both actual behaviour and expectations, and the expectations themselves have effects, as we see today in the stock market’s response to the jobs report. If people are going to make decisions based on predictions about the future, it may be better for those offering the predictions to proceed with confidence, even if they are professing more confidence than they actually feel.

So how are we to understand these exercises in punditry? Mr Tetlock has long argued that what we need is some form of accountability in punditry. As it is, there is rarely a real penalty to the expert for being wrong, even if other people suffer consequences. But we also need to preserve some space for experts to be wrong, because otherwise they would be dissuaded from going out on a limb about anything (perhaps ceding the space to less scrupulous observers). It may be that we need to be more actively uncomfortable with uncertainty, and ask our forecasters to qualify their predictions by stating their assumptions and articulating the counterfactuals as far as they can see them.



