Scooped! Estimating Rewards for Priority in Science (with Carolyn Stein). Job Market Paper.
Abstract: The scientific community assigns credit or “priority” to individuals who publish an important discovery first. We examine the impact of losing a priority race (colloquially known as getting “scooped”) on subsequent publication and career outcomes. To do so, we take advantage of data from structural biology, where the nature of the scientific process together with the Protein Data Bank — a repository of standardized research discoveries — enables us to identify priority races and their outcomes. We find that race winners receive more attention than losers, but that these contests are not winner-take-all. Scooped teams are 2.5 percent less likely to publish, are 18 percent less likely to appear in a top-10 journal, and receive 28 percent fewer citations. As a share of total citations, we estimate that scooped papers receive a credit share of 42 percent. This is larger than the theoretical benchmark of zero percent suggested by classic models of innovation races. We conduct a survey of structural biologists which suggests that active scientists are more pessimistic about the cost of getting scooped than can be justified by the data. Much of the citation effect can be explained by journal placement, suggesting that editors and reviewers are key arbiters of academic priority. Getting scooped has only modest effects on academic careers. Finally, we present a simple model of statistical discrimination in academic attention to explain how the priority reward system reinforces inequality in science, and we document empirical evidence consistent with this model. On the whole, these estimates both inform theoretical models of innovation races and suggest opportunities to re-evaluate the policies and institutions that affect credit allocation in science.
Searching for Superstars: Research Risk and Talent Discovery in Astronomy
Abstract: What is the role of luck in the careers of scientists? Since the production of science is inherently risky, the allocation of resources, promotions, and publications may be based on noisy signals of ability. Success might therefore be path dependent, such that lucky breaks early in a career are amplified into future recognition and opportunities. I seek to quantify the short- and long-run effects of exogenous project success and failure in the context of academic astronomy. Using weather conditions during telescope viewing sessions, I test whether project-level shocks have a lasting effect on publication and citation rates. I find that idiosyncratic weather quality increases publication and citation rates for novice astronomers but does not affect the productivity of veteran astronomers. Good weather shocks increase the number of future telescope sessions novices are awarded, suggesting that lucky breaks may improve early-career opportunities. However, these positive effects on productivity are transient, lasting about four years before diminishing. Receiving a good weather shock has no detectable effect on long-run productivity or the probability of staying in academia.
Research in Progress
Competition and Quality in Science (with Carolyn Stein)
Abstract: We study how competition to publish first and establish priority may impact the quality of scientific research. First, we develop a model in which scientists decide how long to work on a given project, trading off the marginal benefit of higher quality research against the marginal risk of being scooped. More competition encourages scientists to rush and release lower quality work. In particular, our model suggests that the most important (highest potential) projects are executed with the lowest quality. We test our model using project-level data from the Protein Data Bank (PDB), a repository for the structures of large macromolecules. An important feature of the PDB is that it provides objective measures of project quality. Consistent with our model, we find that projects with the most ex-ante potential are completed with the lowest ex-post quality. We conclude by considering the welfare implications of competition in science when the quality of published findings can vary.