Abstract

This paper questions the conventional wisdom that publication bias must result from the biased preferences of researchers. When readers assess papers only by comparing their numbers of positive and negative results, even unbiased researchers will omit noisy null results and inflate some marginally insignificant estimates. Moreover, the equilibrium with such publication bias is socially optimal. The model predicts that published non-positive results are either precise null results or noisy but extreme negative results. The paper shows that this prediction is consistent with the data and proposes a new stem-based bias-correction method that is robust to this and other publication selection processes.