I love incremental papers
The most common reason reviewers give for (lazily) rejecting a paper is "not enough novelty".
As a reviewer, I've never done that. I love incremental papers, under two conditions:
- the authors acknowledge previous work.
- the method actually works.
Most methods don't work. More precisely, most methods, especially non-incremental ones, don't reproduce under non-identical conditions.
If a method doesn't have to reproduce, coming up with vastly novel, non-incremental ones is easy. We can sit down and come up with 100 exotic ideas in an afternoon (or ask an LLM to one-shot them) because the space of things that don't work is infinitely larger than the space of ideas that do work.
Bad incremental ideas are also easy to generate. But bad incremental work is at least grounded in components we understand, making it faster to diagnose and discard. Bad novel work can waste years of follow-up effort before the community realizes it doesn't reproduce. Those ideas don't promote progress; they harm it by wasting everyone's time.
During my PhD, this frustrated me so much that I decided to write a paper about training methods that (mostly) don't work: No Train No Gain. While writing this paper was fun and satisfying, it was also easy to do because, again, most methods don't work.
Can we go further and argue that the breakthroughs that did work were often incremental?
Consider how some of the biggest breakthroughs in deep learning look when you decompose them:
- Adam -> Combination of RMSProp and SGD-M (see the sketch after this list)
- Transformers -> RNNsearch minus RNN
- GRPO -> PPO without the critic
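To make the Adam case concrete, here is a minimal NumPy sketch of a single Adam update, written so the SGD-M ingredient (the gradient moving average) and the RMSProp ingredient (the squared-gradient moving average) sit on separate lines. The `adam_step` function, the toy quadratic, and the hyperparameter values are illustrative choices of mine, not anything from the post.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update, decomposed into its two borrowed parts."""
    m = beta1 * m + (1 - beta1) * grad        # SGD-M part: moving average of gradients
    v = beta2 * v + (1 - beta2) * grad**2     # RMSProp part: moving average of squared gradients
    m_hat = m / (1 - beta1**t)                # bias correction (Adam's own small addition)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage (illustrative): minimize f(x) = ||x||^2.
theta = np.array([3.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta                          # gradient of ||x||^2
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)                                  # approximately [0, 0]
```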
Incremental work that reproduces is how the field moves forward.