The best people do their best work when they are properly compensated. If a market does not compensate experts according to their special talents and efforts, then the market, however entrenched, must be reconfigured. The market currently in need of a fix is the one that assures the quality of scientific research (i.e., “peer review”).
As scientists and professors at large research universities, my colleagues and I spend a colossal amount of time doing things that everyone wants but no one wants to pay for. One of the most important but also most time-consuming is peer review. The efficient delivery of expert peer review makes possible the dissemination of new knowledge, new technology, and new medications. The product of peer review is a formal vetting of new scientific ideas. But the market, as traditionally configured in academia, is not delivering the highest quality product. This inefficiency is hurting science, enabling duplication and even fraud, and ultimately wasting taxpayer money.
During peer review, new scientific work is (anonymously) evaluated for merit by experts in the appropriate scientific discipline (“peers”). This maintains the integrity of the scientific enterprise. To make it all possible, scientists are continually enlisted to review scientific articles submitted for publication in journals and to review proposals for funding by government or private agencies. Serving as a reviewer is a solemn responsibility for any scientist. It’s prestigious to be asked and it can be beneficial for the reviewer. When done properly, it takes time, talent, and concentration. One must be part auditor, part medical examiner, and part theater critic.
When experts review a manuscript, they must draw on their knowledge of the relevant literature as a “smell test” to separate the good from the bad. Were experimental procedures carried out according to best practices? Is the math correct? Is the result truly new? (The world does not need endless copycats reporting that a feather and a cannonball fall with the same gravitational acceleration.) Is the interpretation plausible given present knowledge? Have the authors told a compelling story? When reviewers consider a proposal for funding, they are looking for something new (“innovative”) but also ideas that will move knowledge forward and spur future developments in adjacent fields (“impact”).
A well-seasoned scientist is best positioned to place a manuscript or proposal in the proper context. Unfortunately, the most experienced scientists are also the ones with the most other duties; with little incentive to review papers or grants, they simply stop making time for it. My friends who are editors are beside themselves. One told me point-blank that he had “scratched [me and my colleagues] from his Rolodex” because we no longer even responded to his requests to review manuscripts for the journal he edits. I plead guilty as charged.
Can’t junior faculty pick up the slack? Not entirely. They may flag improper methods or faulty math. But they may lack the perspective and breadth of knowledge that senior experts rely on to separate the good from the bad or the truly ugly. In my years of reviewing, I have seen and rejected many claims of primacy (“We are the first to drop a blue feather and a red cannonball simultaneously”). These are silly and annoying. But I have also seen authors actively deny the documented history of my field, whether out of laziness or a desire to accrue undue credit to themselves. I have encountered grant applicants who ignore the results of their own earlier work when it conflicts with their present hypotheses. Worst of all, I have reviewed manuscripts that contain previously published images flipped and relabeled as something new. I catch these offenses (from the silly to the fraudulent) because I have been around the block a few times and am steeped in the literature of my field. There’s something to be said for experience.
If the pool of manuscript reviewers is too green, then at a minimum, the scientific literature gets polluted with reruns that lead researchers down dead ends that have already been explored (all science is cumulative). If the pool of grant reviewers is too junior, taxpayer and philanthropy dollars may go to projects that have already been tried and failed. The market for peer reviewers must bring senior experts back into the fold. Playing on prestige and obligation can take us only so far.
How to persuade more senior scientists to stay in the Rolodex? Pay them. What? Wouldn’t that mean authors will have to pay fees to have their manuscripts reviewed? Yes, but they already do. Journals already charge fees. They make a profit while the reviewers get bupkes. It’s time to reconfigure the market. Raise the publication fees and pass some of the increase along to the people who do the heavy lifting. Even a modest amount, say $500 per reviewer per paper, would not raise publication costs unreasonably (they hover around $4,000), but it would incentivize senior reviewers to jump back into the game.
There is nothing inherently wrong or unethical with paying reviewers. I often get paid an honorarium when I accept an invitation to lecture at a university or conference. It is an acknowledgment of my time, effort, and expertise. It doesn’t cause me to change what I say. Would this upset the applecart of free peer review? Yes. But conditions change. Supply and demand are not static. Peanuts aren’t free on airplanes anymore. Life goes on.
Is there a precedent for paying for reviews? Yes. For many years, there have been professional services staffed by experts who review experimental protocols involving human subjects for adherence to ethical and regulatory guidelines. We call them professional Institutional Review Boards (IRBs) and their work products are widely accepted by universities and hospitals. Professional IRBs are for-profit entities. The quality of their reviews is comparable to home-based IRBs and their turnaround time is shorter. If for-profit journals paid reviewers, the average quality of reviews would be higher and the turnaround time might be shorter.
Grant reviewing is different. Paying academics to review applications to government funding agencies (e.g., NIH, NSF) or foundations (e.g., the Michael J. Fox Foundation) might require a different model. Unlike the journals, funding agencies are not in it to make a profit on grant reviews. Nevertheless, a relevant model might exist. To burnish their images, law firms often cover attorneys’ non-billable hours spent doing good works for indigent or nonprofit clients. It’s called “pro bono.” Perhaps universities, whose images are greatly enhanced by the considerable pro bono efforts of their faculty, ought to cover some of their professors’ non-billable hours spent reviewing.
Science and the public are ill-served when the pool of peer reviewers skews too young and inexperienced. Right now, the system is broken. The most experienced reviewers are opting out, raising the chances that faulty results that cannot be replicated will get published or funded. Let’s entice the best peer reviewers back into the market by paying them for their hard work and for the unique knowledge born of long careers. You get what you pay for.