The uncertainty inherent in research means that there are only a few people (if any) who understand the underlying outcome distribution of the work. Uncertainty always involves risk, but risk does not always involve uncertainty. Unless you are one of those people and you have the resources to undertake the work yourself (We should strive to unbundle Elon Musk's Brain), the person with the resources needs to trust the person who understands the underlying distribution.
Abstractly, research requires doing weird things by its very nature. In order to create things that nobody has created, you need to do things that nobody has ever done. Research often involves long periods without legible artifacts and sudden direction changes when you realize that one line of inquiry is a dead end or another has opened up. It requires trust to be comfortable with someone doing all of those things while you're paying them or your reputation rests on them. This pattern is present in other disciplines, but in startups the output is more legible ($$$) and in art the stakes are usually lower.
Research also requires trust in the quality of the work and results. While in theory a paper should clearly communicate everything a researcher did and enable you to replicate their results, anybody who has done research knows this is basically BS. There are many ways that research can be misleading. This isn't just about fabricated data and outright lies (though you do need to trust that too). Results can be exaggerated, conclusions can subtly fail to follow, experiments can be cherry-picked, a million regressions can be tried before one works, constraints can be ignored, etc. Many of the subtleties that make the difference between bad and good research are not even conscious on the part of the researcher. There is no way to enumerate and check for all possible problems, so it comes down to trusting that the researcher did good work.
Empirically, you see over and over again that high-output research organizations have higher-than-normal levels of trust. DARPA is full of stories where someone walks out of a ~20-minute meeting with millions of dollars. That can only happen because of trust. Bell Labs management was extremely light, and the specific stories make it clear that it was light because the managers trusted the researchers to do good work. Highly productive "scenes" like the physics community in the early 20th century were small, and everybody knew each other.
Legibility is one way that people avoid the need for interpersonal trust. Legibility enables action, but when you're trying to create or discover something new, it's hard to know what to even measure, and prematurely imposing measurement often leads to negative effects. Many things cannot be measured well.
Arguably, the terrible effects of peer review (Peer review has failed as an institution) came out of a desire to replace trust with legibility in research, or at least to make that trust more legible. Trust is hard to scale, and as the number of scientists and the amount of funding increased, funders and other scientists needed a legible way to judge the quality of research.
In a conversation on 9 May 2021, Andy Matuschak pointed out that research also requires much more self-trust. By its nature, research is a very "lumpy" process that requires logical leaps and, conversely, involves large stretches of time where it feels like one is getting nowhere. And you might be right. The inherent newness of good research often means that other people's opinions and experience aren't great guides. So you end up needing to trust your own logic and intuition that you are heading in the right direction and working on something worthwhile.