When I lived in the UK, it rained a lot. That is to say, it rained often, which is not quite the same thing.
Most of the time, it starts sprinkling, you pop up the hood of your jacket, no worries. But if you’re cycling around town, and you don’t want to arrive at the library or at Sainsbury’s with the thighs of your jeans wet through, you need to time your forays out under the lowering skies carefully.
That’s where the hourly weather forecast came in. I wasn’t much in the habit of checking the weather in Sydney – partly a function of driving most places, partly because, oh that’s right, the sun shines a lot – but after moving to Cambridge, it became compulsive for me.
The curious thing is, the forecast was probably wrong as often as right. If it told me it would be raining at 10am, 2–4pm, and most of the evening, I’d plan my movements around the anticipated dryish spells. But there I’d be at midday, navigating the cobbled streets under a determined drizzle with a vintage wicker bike basket full of damp groceries.
Yet, no matter how many times the forecast let me down, I never stopped being surprised; and I never stopped planning my days around it. I just needed to be told something; and if wild guesses were all that was available, well, I’d take it.
Humans’ failed predictions about the future are inexhaustible. Their number must be even greater than we think because we have a way of singling out the lone prescient voice, and at the same time letting our own miscalculations fall quietly by the wayside.
Predictions of the end of the world are no exception. The guy in a sandwich board informing the rest of us that THE END IS NIGH is a stock cultural type. Historical figures who have gone on record with a precise date for the apocalypse include Botticelli (1504), Christopher Columbus (1658), Puritan minister Cotton Mather (1697 initially, but when that didn’t eventuate he revised his estimate – twice – to 1716 and then 1736), and Rasputin (2013).
When Halley’s Comet returned in 1910, Earth actually passed through its tail, leading astronomer Camille Flammarion to hazard that toxic gases would ‘impregnate the atmosphere and possibly snuff out all life on the planet’ – and spawning a hasty trade in anti-comet pills and anti-comet umbrellas. From Nostradamus to Y2K, from the Prophet Hen of Leeds (google it) to the Heaven’s Gate cult, we have continually anticipated the end of the world as we know it, and – so far – thankfully – always been wrong.
Somewhere in between when it’s going to rain today and when the apocalypse is due lie all the other things we’re bad at predicting. In 1984, a young psychologist and political scientist called Philip E. Tetlock was sitting in a committee meeting on US–Soviet relations and was struck by two things: that the various experts round the table were unshakably confident in their analyses; and that their forecasts flatly contradicted one another.
Over the next 20 years, Tetlock gathered more than 80,000 probability estimates about the future from 284 experts with an average of more than 12 years of experience in their particular specialty. Journalist David Epstein sums up the results:
The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 per cent of them occurred nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire. As the Danish proverb warns, ‘It is difficult to make predictions, especially about the future.’
Epstein notes that the more experience and credentials experts had in their field, the worse they did on average at predicting outcomes within their specialty. When events proved them wrong, instead of going back and revising their premises or methods, they had a way of doubling down – their expertise simply made them all the more adept at fitting whatever happened (or didn’t) into their pre-existing theory.
The conclusion that – in the immortal words of British MP Michael Gove during the Brexit debate – ‘people have had enough of experts’ is not the only possible response to such dispiriting news.
Epstein’s book Range: How Generalists Triumph in a Specialized World, as well as Tetlock’s co-authored volume Superforecasting: The Art and Science of Prediction, highlight practices that go some way towards mitigating our propensity for error when it comes to the future – such as working in teams, gathering information from a variety of sources, and adjusting ideas when things don’t go as expected.
Still, history is full of things that seemed inevitable in prospect and ludicrous in retrospect. Conversely, it's full of things that seemed impossible in prospect and inevitable in retrospect! It is indeed difficult to make predictions, especially about the future.
This does not absolve us of our responsibility to prepare, as best we can, for tomorrow’s challenges and dangers. Most of the direst forebodings of our ancestors did not come to pass; sometimes, that was because we paid attention to them and did what needed to be done. Steering between the Scylla of paralysing panic and the Charybdis of complacency is a task that demands constant vigilance.
Our voracious appetite for forecasting is unlikely to diminish. But in keeping with our proneness to error, it’s not a bad idea to add a decent pinch of salt.
This is an extract from The Pleasures of Pessimism by Natasha Moore, which is available at Koorong.