# True Peer Peer Review
At the end of the day, review systems are about trust. You need to trust that what you’re reading is high quality and accurate, so you don’t have to doubt every conclusion until you’ve gone over it with a fine-toothed comb.
The acknowledgement section of a piece of writing should be able to serve as peer review: it means that people you consider peers reviewed your paper, and then other people can decide whether they consider the people who reviewed your paper to be *their* peers. This process works well under three conditions. First, the community needs to be small enough that most people know the other members. Second, there cannot be massive differences in respect levels. Third, people need to take reviewing seriously on both sides of the relationship and be willing to put time into calling out mistakes and fixing them.

If there are large groups of people that others don’t know about, you run into a situation where many people don’t consider the people you consider peers to be *their* peers. Massive differences in respect levels create the same problem - you end up with groups of ‘second-class citizens.’ And if people don’t take reviewing seriously on both sides, the whole thing devolves into endorsements on book jackets.
Computers may be able to relax the small-community requirement. [[Trust is transitive]], so if a system can keep track of who you consider to be your peers, and who your peers consider to be *their* peers, it could figure out whether you consider the paper to have been reviewed by your peers. It could even maintain reviewer anonymity and just show trust scores.
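A minimal sketch of what such a system might compute, assuming a peer graph has already been collected somewhere (the names, the decay factor, and the hop limit are all illustrative, not from any real system):

```python
from collections import deque

def trust_score(me, paper_reviewers, peers, decay=0.5, max_hops=3):
    """Breadth-first walk of the peer graph from `me`.

    A reviewer reached in d hops contributes decay**d, so direct
    peers count for more than peers-of-peers. The paper is scored
    by its most-trusted reviewer.
    """
    hops = {me: 0}  # person -> shortest hop distance from me
    queue = deque([me])
    while queue:
        person = queue.popleft()
        if hops[person] >= max_hops:
            continue  # stop expanding beyond the trust horizon
        for peer in peers.get(person, []):
            if peer not in hops:
                hops[peer] = hops[person] + 1
                queue.append(peer)
    return max(
        (decay ** hops[r] for r in paper_reviewers if r in hops),
        default=0.0,  # nobody in my extended peer group reviewed it
    )

# alice is my peer, and bob is alice's peer, so a paper bob
# reviewed scores 0.5**2 = 0.25 for me.
peers = {"me": ["alice"], "alice": ["bob"]}
print(trust_score("me", ["bob"], peers))  # → 0.25
```

Because the output is just a number, the system could show readers this score without revealing which reviewers, or which intermediate peers, produced it.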
Preprints and Twitter ([[Can Twitter Save Science?]]) have started to implement this system informally - I trust a machine learning paper on [[arXiv]] more when [[Andrej Karpathy]] or [[David Ha]] points it out than when it appears in Nature. However, I run into many works with no sense of what my peers think of them, so I am forced to trust the authors themselves, their institutions, or the publisher.
Managing trust transitivity was historically the job of the editor. At first, the editor was the only reviewer and everybody trusted the editor. When the number of submitted papers became too large for a single editor to handle, the editor then started delegating to members of the community. This delegation worked because [[Trust is transitive]], creating the modern peer review system. However, [[Peer review has failed as an institution]].
How could we bootstrap a true peer peer review system without needing to create some blockchain-powered P=NP monstrosity? The small note at the end of essays with “thanks to X, Y, and Z for the feedback” is a good start. Culturally, if someone asks you to look over a piece, take it seriously; and conversely, only ask for feedback if you’ll take it seriously. Speculatively, linking to the reviewers’ Twitter profiles could bootstrap transitive trust, because it lets readers see whether anybody they follow follows the reviewers. Of course, this puts more cultural strain on the Twitter following system than it was meant to bear.
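The follow-overlap check could be as simple as a set intersection. In this sketch both sets of handles are assumed to have been fetched elsewhere (hypothetical data; no real Twitter API is called):

```python
def shared_vouchers(my_follows, reviewer_followers):
    """Accounts I follow that also follow the reviewer.

    Each account in the intersection is someone I already trust
    who is, implicitly, vouching for the reviewer.
    """
    return set(my_follows) & set(reviewer_followers)

# Two of the three accounts I follow also follow this reviewer,
# so two people I already trust are vouching for the review.
print(shared_vouchers({"karpathy", "hardmaru", "alice"},
                      {"hardmaru", "alice", "bob"}))
```

An empty intersection is exactly the “no sense of what my peers think of it” situation described above.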
### Related
* [[Centralization and Decentralization]]