“See if you can get it on the paper”
There were several discussions of scientific “productivity” on Twitter yesterday. It’s long been clear to me that people have wildly different ideas about what this means and how to measure it. You often find people talking about how many papers a scientist has published, but does anyone seriously think that is a useful number? One major factor is that individual researchers and communities have dramatically different ideas about what constitutes a publication unit. I remember being very annoyed when my first grant, which was directly based on my postdoctoral work, was reviewed with a ding that it was based on “a single publication.” Setting aside the fact that I didn’t invent a whole field and that a long literature preceded me, why is that in and of itself a negative? That was four years of work done entirely by me. I probably could have portioned it out into some number of smaller nuggets and published them separately, but why would that have been a good thing?
So I was interested in this exchange that came in a larger discussion of standards for review of NIH grants:
In a strict sense, DrugMonkey is right because science is never complete, but his argument is really a straw man. We can’t pretend that all papers are anything close to equal in terms of scientific productivity. And to head off an inevitable response, I am not talking about Glam. Nor am I talking about middle-author vs. first-author papers. It is absolutely the case that first-author papers can reflect a wide range of what we deem to be productivity. In my opinion, at the extremes that range may plausibly span an order of magnitude.
My attitude is that it is more efficient and better for science to publish your data in larger chunks, but I understand that many people feel differently. I’m interested in hearing from people in the comments. Given the same data, what is the argument for splitting it up? How do you know when it’s time to stop and publish something?