Uncertainty always involves risk, but risk does not always involve uncertainty

Risk entails not knowing what the outcome will be, but implies that you know the probability distribution of those outcomes (or at least that there is a fixed and knowable probability distribution over all possible outcomes). Risk needs a metric. Uncertainty entails not knowing the potential outcomes or their probability distribution.

Colloquially, when we talk about something being “high risk,” we actually mean several different things. While teasing them apart might just be linguistic navel-gazing, I believe it’s worthwhile because Naming things is Powerful and I think that (especially in the area of research) “risk” has become a Suitcase Handle Word.

When we’re talking about a risky personal activity, generally we mean ‘N really bad things happen per some number of person-hours/discrete instances of this activity, where N is above some threshold.’ Notably, the ‘personal risk’ in this case is an aggregate over many people and circumstances. The ‘real’ distribution for any given individual might not resemble the aggregate distribution at all.

When we’re talking about a risky investment, generally we mean ‘if you run the world many times, over all those scenarios the investment will yield a good outcome N or fewer times, where N is smaller than some threshold.’^1 At the end of the day, the outcome is always denominated in ~dollars, which means that it is extremely measurable and comparable. The resemblance between ‘financial risk’ and ‘personal risk’ varies depending on the type of investment, hinging on how well we actually know the probability distribution. On one end of the spectrum, an investment like government bonds for a country that is near the edge of default has few enough relevant parameters that it can reasonably be compared to other investments that have happened before, and the statistics from those investments are a valid estimate for the probability distribution of the investment’s outcomes. It is legitimately high risk. On the other end of the spectrum, you have something like investing in the equity of a weird startup. That investment has so many parameters that it is almost impossible to compare to other investments that are nominally in the same class.^2 While we might consider investing in the startup to be high risk, I would argue that it is actually high uncertainty - you don’t even know the shape of the probability curve.
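The ‘run the world many times’ framing can be made concrete with a tiny simulation. This is a toy sketch with made-up numbers, not a model of any real investment: the whole point is that we *assume* we know the payoff distribution (the defining feature of risk, as opposed to uncertainty) and then just count the fraction of simulated worlds where the outcome is good.

```python
import random

random.seed(0)

def run_world(n_trials=100_000):
    """Simulate a toy investment whose payoff distribution we claim to know:
    40% chance of tripling your money, 60% chance of losing half (made-up numbers)."""
    good = 0
    for _ in range(n_trials):
        payoff = 3.0 if random.random() < 0.4 else 0.5
        if payoff > 1.0:  # "good outcome" = you end with more than you started
            good += 1
    return good / n_trials

# With a known distribution, the fraction of good worlds converges to the
# assumed 40% -- that is risk. Under uncertainty you could not even write
# down the two branches above.
print(run_world())
```

Under genuine uncertainty this sketch is impossible to write, because the branches of the payoff distribution are exactly the thing you don’t know.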

Research tends to resemble the ‘uncertain’ end of the investing curve, but even more so, because it doesn’t even lead to legible outcomes. At least when you’re investing in a startup, you can ask “after 10 years, do you have more or less money than when you started?” After a research project, people can argue forever about whether it was “successful.” Even doing alchemy (which is a canonical example of experiments that didn’t work) arguably led to the knowledge that would unlock chemistry, which saved billions of lives and made modern standards of living possible. This attribute of research, that its outcomes are by their very nature hard to scope, is one reason it is hard to argue that you can talk about it as though it has a known underlying probability distribution. Another aspect of research is that it is inherently trying to do something that has never been done before, in many cases to a much more extreme extent than a startup. At the end of the day, a startup is trying to create a successful business. Successful businesses resemble each other much more than successful research projects.

As a platonic ideal, research is an unscheduled quest for new knowledge and inventions with hard-to-predict outcomes. The newness of research means that it is very hard to come to a consensus on the underlying probability distribution of the outcomes. The person who has the most context, and thus the best estimate of the distribution, is most likely the researcher themselves.

In **Scientific Freedom: The Elixir of Civilization**, Braben uses the example of a skydiver to illustrate the importance of context knowledge. I’ll use the example of climbing because it is also a ‘risky’ activity that I’m much more familiar with. The statistics here are terrible so I’ll make some up. Say that on average, one out of every 10,000 climbs leads to someone dying. However, that’s lumping together people who are free soloing on El Capitan with people who are top-roping on some nice granite (i.e. a situation with many fewer ways to hurt yourself). Now, if you took the person who is used to top-roping and stuck them on El Capitan without a rope, they would almost certainly die. But Alex Honnold climbed the route many times (dozens? hundreds?) with a rope before free soloing it. He knew his own capabilities and he had all the skin in the game. So he knew his own outcome distribution much better than anyone else, and it is clearly different from what the aggregate statistics would suggest. One could argue that he just has a higher risk tolerance than other people. While that may be true (he certainly has less fear than other people), he talks about it in a way that suggests he rationally doesn’t see it as high risk. His actions back this up: he bailed on his first attempt when he wasn’t ‘feeling it.’

The free solo example suggests that we perhaps conflate the probability of a bad outcome happening with how bad that outcome is if it happens. This way of thinking makes sense in the context of money (or other numerically legible outcomes) because you can do an expected value calculation. But I would argue (against many economists) that the outcomes of many activities can’t be measured.
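The expected-value point can be made concrete with a toy calculation (all numbers made up). Two activities can have identical expected dollar losses while one of them hides a catastrophic tail, which is part of why the probability of a bad outcome and its severity get conflated when everything is collapsed into one number.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * x for p, x in outcomes)

# Made-up numbers: a common, mild bad outcome...
frequent_small = [(0.10, -100), (0.90, 0)]
# ...versus a rare, catastrophic one.
rare_severe = [(0.0001, -100_000), (0.9999, 0)]

# Both have an expected loss of about $10, yet they are nothing alike.
print(expected_value(frequent_small))
print(expected_value(rare_severe))
```

The expected values match, but nobody treats these two as equally ‘risky’ - a single number hides the shape of the distribution, and that’s before you get to activities whose outcomes can’t be put in dollars at all.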

Both Braben and the free solo example suggest that skin in the game (SITG) is a way to force people to be honest about what their special knowledge suggests about underlying distributions in high-uncertainty situations. This ‘distribution extraction’ is also what betting markets are supposed to do.

This is perhaps me trying to put Knightian Uncertainty in different words:

> Frank Knight wrote about this in 1921 in a great book called **Risk, Uncertainty and Profit** (which you can read [here](https://www.econlib.org/library/Knight/knRUP.html)). He distinguished between two types of uncertainty. The first type is when we know the potential outcomes in advance, and we may even know the odds of these outcomes in advance. Knight calls this type of uncertainty risk.
>
> — from https://www.businessinsider.com/difference-between-risk-and-uncertainty-2013-3

If we frame risk as a distribution over outcomes, the way to think about the relationship between uncertainty and risk is that uncertain things effectively carry unbounded risk: you don’t even know which risks you are running.

This note is admittedly a little rambly because I’m still wrapping my head around how to think about this.


^1: I realize that ackshually there are technical definitions of risk. However, in most situations I don’t think any of us are using the word in that sense.
^2: It’s debatable how many parameters matter to a startup’s success - VCs spend a lot of time arguing about this. Their incentive is to argue that there are actually a knowable few that matter but that they are legible only to a few people including themselves, which allows them to uniquely and correctly assess the startup’s risk curve.
