“Novelty” is one of the constant criteria by which academic research is judged, both in terms of funding and publication.
Whether something is novel enough is not purely about the magnitude of the contribution (the characterization of reactions that barely differ from other reactions can be considered novel) or the actual newness of the idea (if I argued that the transition between turbulent and laminar flow was mediated by conscious 11-dimensional elves, it would be a new idea, but not in a good way). Instead, the worthy novelty of a contribution revolves around its relationship to existing paradigms. Paradigms outline a set of questions and the rules for answering them.
A contribution faces massive headwinds if the paradigm judges the question being answered to be already answered, judges it out of scope, or judges the answer to not be in an approved form (the 11-dimensional elves again).
It’s easy to say “argh! The system is full of risk aversion, politics, and narrow-mindedness!” But most people are not malicious or stupid, and that goes for academics as well. Most ideas that answer old questions in new ways or give completely different answers are crackpot ideas (11-dimensional elves!), so it’s not irrational to dismiss them by default given limited time and bandwidth^1.
If a question has existed unanswered for a long time, inevitably many people have tried to answer it. Every failed answer rationally increases the prior that the next answer will also fail. Therefore it’s perfectly reasonable to hold a prior that a new answer to an old question will also fail. It’s also technically correct that proposing a new answer to an old question is not “novel,” creating a rational excuse for people not to waste their time on crackpot answers. However, the novelty criterion becomes a cudgel when people use it to avoid spending the time to actually engage with an idea. The problem is that it’s not actually clear where to set the filter between “take every crackpot idea seriously” and “reject every answer to long-standing problems.”
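This prior-updating argument can be made concrete with a toy Beta-Bernoulli model (my own sketch, not from the essay): start with a uniform prior over the probability that an attempt on the old question succeeds, then watch the posterior expectation shrink as failed attempts accumulate.

```python
# Toy Beta-Bernoulli model of "every failed answer increases the prior
# that the next answer will also fail."
# Prior over the success probability p is Beta(alpha, beta); after
# observing s successes and f failures, the posterior mean of p is
# (alpha + s) / (alpha + beta + s + f).

def posterior_mean_success(successes: int, failures: int,
                           alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of the success probability under a Beta prior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Rational estimate that the (n+1)-th attempt succeeds, after n failures:
for n in (0, 1, 5, 20):
    print(n, round(posterior_mean_success(0, n), 3))
# 0 failures -> 0.5, 20 failures -> ~0.045: each failure pushes the
# rational observer toward "this too will fail."
```

The point of the sketch is only that the dismissal is Bayes-rational, not that the posterior ever reaches zero: even after twenty failures the model still assigns a few percent to the next attempt succeeding.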
The incentives around new answers to old questions also apply to the creation of new questions. Academia pushes against the creation of new questions. New questions cannot be measured by their performance on standard metrics or datasets. At least in early 21st century machine learning, it’s easier to get a paper accepted by improving performance on a standard dataset from 98.2% to 98.5% than it is to argue that your system is doing something important independent of an established test or, heaven forbid, has performance that can’t be measured numerically. Again, I suspect this situation at least in part arises from the number of people all clamoring for attention and everyone’s limited bandwidth. Attention is the ultimate scarce resource. New questions can also be completely useless. One out of a thousand times, “What if we ran it backwards?” is the key to saving the universe, but most of the time it’s someone trying to be clever with a bad idea.
These two situations parallel startups heavily. It’s rational to have a prior that a company trying ideas littered with the bodies of other companies (memexes, new social networks, elder care, etc.) will fail. However, it’s rational to try something that failed in the past as long as you explicitly call out why you will succeed where they failed, but that requires time and precision from both the founders and the appraiser. In parallel with new ideas in science, startups trying to introduce a product that “nobody knew they wanted” usually fail as well.
The upshot is that there are good reasons to dismiss new ideas whether they are judged novel or not novel. “Novelty” is therefore a relatively arbitrary discriminator between ideas, and an easy one for people to reach for when they don’t want to engage with an idea.
^1: The coupling between bandwidth and the ability to address crackpot ideas on a case-by-case basis of course plays into my bias that insufficient slack (as a concept) is a core source of friction in the system.