# How many important things barely squeezed through with statistics?
A lot of the discourse around the replication crisis, and the sickness of science more generally, revolves around people hunting for results whose p-values fall just below the 0.05 threshold (“p-hacking”).
A p-value is the probability of seeing a result at least as extreme as the one observed if there were no real effect (i.e., under the null hypothesis). It says nothing about the magnitude of the effect. How big are the effects actually being measured in these experiments? Take studies that say “drinking red wine increases longevity” - there is a huge difference between red wine extending lifespans by a year on average and extending them by a day.
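As a back-of-the-envelope illustration of why magnitude matters: with enough subjects, even a one-day effect clears p = 0.05. A minimal sketch, assuming a lifespan standard deviation of roughly 10 years and the standard normal-approximation sample-size formula - the numbers are placeholders, not real study parameters:

```python
import numpy as np
from scipy import stats

# Illustrative assumption: lifespan standard deviation of ~10 years, in days.
sd_days = 10 * 365

def n_per_group(effect_days, alpha=0.05, power=0.80):
    """Subjects per group needed for a two-sample test to detect the effect."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z * sd_days / effect_days) ** 2))

for label, effect in [("+1 year", 365), ("+1 day", 1)]:
    print(f"{label}: ~{n_per_group(effect):,} subjects per group")
```

Both effects can “squeeze through” at p < 0.05 - a one-year effect needs on the order of 1,600 subjects per group, a one-day effect on the order of 200 million. The p-value certifies detectability, not importance.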
I’m interested in examples where a very ‘big’ effect - on the order of red wine adding a year of lifespan - needed statistics to justify it. The hypothesis is that all of this hubbub is fighting over scraps: whether there is an effect at all is almost irrelevant, because the effects being fought over are too small to matter either way.
At the same time, these statistics can be clutch for discovering new particles, so I don’t want to dismiss them entirely.
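For reference (assuming the conventional five-sigma discovery threshold used in particle physics), the bar there is orders of magnitude stricter than p = 0.05:

```python
from scipy import stats

# One-sided tail probability beyond 5 standard deviations ("five sigma").
print(stats.norm.sf(5))  # ~2.9e-07, versus the 0.05 threshold above
```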
### Related
- [[How many undetected frauds in science]]
<!-- #questions -->