Expensive research needs to address an existential threat eventually at an organizational level to maintain support

Alignment requires existential threats, and innovation orgs need to be aligned with their money factory, so the conclusion that R+D efforts need to address an existential risk to maintain support follows almost by definition.

Distinction between the normative discussion and the practical discussion

This note is going to focus on the realities of how innovation organizations continue to get money in the door. It’s easy for the discussion to go down the track of “Painkillers vs vitamins! If I’d asked people what they wanted, they would have said ‘a faster horse.’ Make things people want! Is physics worthwhile if it never becomes a product?” These are questions of “what should a research organization do?” That question is important and related to, but distinct from, “what should a research organization do to stay alive?” For the former, see: ‘Solving problems people don’t even know they have’ is the epicycle of the ‘problem-based’ framework of tech development.

Refining the statement

The original title of this note was “R+D orgs need to address existential threats to maintain support.” That statement appeals to a sense of simplicity, and there are plenty of examples of research orgs that lost support because they didn’t really address an existential threat (Dynamicland and BP Venture Research come to mind). However, it’s also not quite true: there is a large class of R+D orgs that maintain support but continue to limp along ineffectively (Bell Labs still exists!). So perhaps “Effective R+D orgs need to address existential threats” is more accurate. This would explain the difference between ARPA-E and DARPA, why Bell Labs declined once it no longer staved off regulators, etc.

However! There are plenty of examples of effective research orgs that do not address existential threats. In fact, most great scientists were not actually out to address existential threats - Galileo, Newton, Rutherford, Einstein, etc. They just managed to cobble together enough money from patrons or side hustles to keep going. Patreon-sponsored contemporaries are similar. The notable pattern is that these examples are all individuals or small groups. Aha! This suggests that there is a nebulous threshold below which effective research can run on ‘throwaway’ money and above which people start looking at money spent on research as ‘buying’ something - we could call this point “expensive.”

There’s also the niggle that a lot of the work that makes outsiders perceive a research organization as effective is some of the least existential-threat-addressing work that the org does: transistors at Bell Labs, interactive computing at DARPA, etc. However, at the same time Bell Labs was discovering better wire sheathings that saved AT&T billions of dollars, and DARPA was working out ways to detect nuclear explosions anywhere in the world. This contrast can be resolved by thinking about addressing existential threats not at the project level but at the organization level.

So, the correct statement is perhaps that expensive research needs to address an existential threat eventually at an organizational level to maintain support. Perhaps this is excessively navel-gaze-y but I think it’s important to try to answer the question “why do so many research organizations suck and how do you build one that doesn’t?”


Clearly, what counts as ‘expensive’ research varies wildly. However, I would suggest that, contrary to common intuition, it isn’t just proportional to how much money the funder has. There are many examples of massive organizations and billionaires balking at sums that are small relative to their revenue (or GDP). I could construct a hand-wavy explanation involving cognitive biases, paper wealth, or something else. But suffice it to say, what counts as “expensive” has much more to do with the amount of control a funder has and whether they think they’re going to get their money back than with how much money they actually have. For independent organizations that aren’t investments, the ‘expensive’ threshold seems to be reasonable salaries for ~1-3 skilled people. Weirdly, it also tends to be independent of the number of funders - ‘inexpensive’ orgs funded by a single billionaire tend to have about the same threshold as those funded by hundreds of people on Patreon.^1 Also keep in mind that this refers to sustained funding for an organization - one-time project-based funding is another animal.


Most organizations have a ‘grace period’ to dick around.

There’s also a question of the frequency of hits an organization needs to maintain, which seems to be a function of how big those hits are and how existential the threats they address actually are. DARPA has a 5-10 percent program success rate, which I suspect it can get away with because military superiority is very important to the US government, while most charities need to show progress every semi-annual fundraising season.

At an organizational level

I feel like this piece is pretty clutch: the organization, not any individual project, needs to address an existential threat.^2 Effective research organizations seem to build a portfolio of projects that, collectively, address existential threats at a sufficient rate to maintain that perception.

This portfolio approach is clutch for several reasons. There is uncertainty around any given project as to whether it will address existential threats, and often you can’t honestly answer that question a priori without either doing some work or strangling the project in the cradle. See Loonshots and the concepts of “warty babies” and “the three deaths” for more on this. Additionally, the projects that directly address existential threats for funders and those that are most interesting or globally impactful are often disjoint - at Bell Labs, the work most valuable to AT&T and the work most valuable to the world overlapped very little. In the same way that winners pay for duds in a VC portfolio, aligned work can cover for misaligned work if the alignment is considered at an organizational instead of a project level.

If the organization produces aligned hits at a sufficient rate, it ideally can create a §Trusted Hierarchy where trust (and money) flows down the power ladder. Congress trusts the DARPA director, who trusts the deputy director, who trusts the PM. The PM doesn’t need to justify a program to Congress, but instead to the deputy director. This is why Opacity is important to DARPA’s outlier success - it can work on crazy shit that would never get a priori funding from Congress, but it needs to earn the trust to maintain that opacity by delivering as an organization.

Another way this manifests is through contract research organizations that support themselves through contracts or grants, but use that money to fund internal research.


^1: This may have more to do with the people seeking the funding - people don’t feel comfortable asking for more money than they need to live?
^2: Or at least be perceived to be addressing existential threats - there are many organizations that successfully maintain funding based purely on great marketing. It’s worth paying attention to why this marketing works, but I’m going to otherwise ignore it because I am bad at pulling off false pretenses and would prefer if the world had fewer of them.