Most predictions suck

When the first airplanes were sold to the US Army in 1909, the common idea of the time was that “With the perfect development of the airplane, wars will be only an incident of past ages.” (from Wardley Maps) Most predictions look like this. To be fair to the people who predicted this: at the time, it seemed perfectly reasonable. It was just based on a set of hidden assumptions that changed as soon as you could mount a gun on a plane and build flak cannons.

A good prediction enables new thoughts and actions by being non-obvious and creating agency. Most predictions fail on one or both of these conditions.

Most predictions aren’t falsifiable. “VR is the future of work!” without a timescale or a reason why is a bad prediction: even if the statement turns out to be true, you would take very different actions depending on whether everybody will be working in VR in five years or in fifty.

Most predictions present their vision as inevitable. Being told “this is what is going to happen” does not suggest any action or give you any agency. Most predictions also just confirm priors and extend obvious trend lines, which generates no new thoughts.

Most predictions follow the pattern ‘what is the future of X?’ or ‘how will X change the world?’ (Insert your favorite hype word for X: IoT, AI, VR, blockchain, etc.) This stance assumes that everything else in the world is held constant. See everything from visions of the computer as a recipe book for 1960s housewives to Malthusian visions of doom. You could think of these predictions as single-variable, first-order Taylor expansions.
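To make that analogy concrete (a rough sketch in notation of my own, not from any of the sources cited here), treat the state of the world as a function f of many variables:

    f(x, y_0, z_0, \ldots) \approx f(x_0, y_0, z_0, \ldots) + \left.\frac{\partial f}{\partial x}\right|_{(x_0, y_0, z_0, \ldots)} (x - x_0)

The prediction varies only the hype variable x, pins every other variable y, z, … at today’s values, and drops all the higher-order and cross terms, which is exactly where the interesting interactions live.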

These first-order predictions are fragile. They don’t give you much insight into the ‘underlying function’ they’re based on, so it’s hard to play with the ideas to create your own mental models and opinions. (Focused play enables serendipity.) If you don’t agree with one assumption, the entire prediction no longer holds. A report on ‘the future of AI’ could conceivably go through all the realms of life and enumerate, at some fidelity, how AI will affect each of them, leaving no room for someone else to ask “what if…?” Self-consistent worlds let you draw off the edge of the map, but first-order predictions have no mechanism to enforce self-consistency. They only look at the effect of a single variable on different areas without asking how those changes would in turn interact: “If AI is going to both make universities irrelevant and replace the repetitive tasks that interns would normally do, how are people going to signal to future employers?”

Most predictions suck because the people making them have no Skin in the game (SITG): nothing happens to them when their predictions turn out to be wrong.

The article Why speculate makes many of these points much more incisively, and goes into both why bad predictions persist and the consequences of letting them run rampant.

Of course, all of this assumes that most predictions want to be good predictions. The Luke Constable Podcast points out that perhaps most predictions suck because Most predictions are not aiming for accuracy.

A caveat: this all assumes that the goal of a prediction is to serve as a decision-making tool. Predictions are for either decision-making or entertainment, and predictions that are poor decision tools can make great entertainment.
