
Peer Review and Impact Factors

April 15, 2014

Science, Nature, Cell, The New England Journal of Medicine, The Lancet - the most prestigious scientific and medical journals publish new issues weekly or biweekly, each one brimming with discoveries that claim to expand the state of knowledge in their fields or, better yet, to shatter current paradigms and push future research in a new direction. Yet not every published paper stands the test of time; few actually shatter paradigms, and some report results that other scientists cannot even replicate. Peer review is the method most journals use to vet submissions, in an effort to ensure that the results they publish are correct more often than not.

It works like this: after years of toil by graduate students and postdocs, a lab head prepares a manuscript describing the hypothesis, the experimental methods used to test it, the results of those experiments, and the authors' interpretation of those results. Sometimes the results support the hypothesis, and sometimes they refute it. Either way, they often suggest avenues for future research. The researchers then choose a journal and send their manuscript off to its editors.

If the paper is obviously terrible or fraudulent, the editors will reject it outright. If it is obviously earth-shattering - with well-controlled experiments and an argument that flows logically from the results - they will accept it immediately, without reservation. Since in the real world neither of these things ever actually happens, editors usually send the paper out for peer review, asking two to four scientists familiar with the field for their opinions.

These peer reviewers must assess whether the experiments were the most appropriate ones available to test the hypothesis in question; whether they were performed properly; whether the authors' conclusions are consistent with the results obtained; and whether the findings are significant - i.e., new and sexy - enough to warrant publication. Often the reviewers will suggest that the authors modify wording, or perform additional experiments, before the paper is published. This back-and-forth can take up to a year. The reviewers are anonymous, so the authors cannot engage with them directly. Nor do the reviewers ultimately decide whether the paper gets published; the journal's editors make that call, based on the reviewers' recommendations. If the paper is rejected, the authors are free to start the whole process over at a different journal.

Like most things in this world, peer review is not perfect. Reviewers must obviously be familiar with the topic at hand, so they are often colleagues - and sometimes competitors - of the researchers whose work they are reviewing. They can hold up publication, or use the 'insider information' they glean from the paper to advance their own research. On a less nefarious level, they are busy scientists who are not compensated for their time, so reviewing a new paper is often not their top priority. Nor have they had any training in how to review a paper, since that is not built into science education. They also never receive an assessment of their reviews, so they don't know whether they were helpful or need to improve. And peer review is not designed to catch fraud or plagiarism, so unless those are truly egregious it usually doesn't.

Funding requests, like those submitted to RSRT, are subject to a very similar system. Just like journal editors, the people handing out research money rely on expert opinions to decide who gets how much. A grant is trickier to judge than a paper submitted for publication, though, because nobody knows a priori whether the proposed experiments will work as hoped, or how significant the results might be. As mentioned above, these things are difficult enough for reviewers to assess once the results are in - and in a grant application, the experiments haven't even been done yet.

To minimize this risk, RSRT employs a fastidious peer review process. Reviewers are selected with painstaking attention to fields of expertise and to potential conflicts of interest, including philosophical or personality conflicts. Proposals are judged on their relevance to RSRT's mission, the scientific merit of the proposed experiments, and the strength of the investigator.

There are stirrings of change to deal with these problems. Many scientists think that established journals hold a chokehold on research by deciding what gets published, and are experimenting with a more open system in which scientists post their findings online - often for free, in contrast to traditional journals, which can charge a hefty fee to publish a paper - where they are then subject to a more transparent post-publication peer review. Some examples are PLOS ONE, BioMed Central, and F1000Research. Other researchers think pre-publication reviews should be signed, so that the reviewer has some accountability.

Forums that allow ongoing critique of papers after publication are also gaining momentum; examples include PubMed Commons, PubPeer, and Open Review. RSRT is a fan of post-publication peer review and has long employed this approach to evaluate papers in the Rett field.

One way scientists assess the relative importance of an academic journal is by its impact factor, a measure of the journal's prestige. The standard two-year impact factor for a given year is the number of citations received that year by articles the journal published during the previous two years, divided by the number of articles it published in those two years. Journals with higher impact factors - like those that began this piece - are deemed more important than those with lower ones. Impact factors have been published annually since 1975 for journals indexed in Thomson Reuters' (formerly ISI's) Journal Citation Reports; a journal must be tracked for three years before it receives one.
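The two-year calculation is simple enough to sketch in a few lines of Python. The numbers below are made up for a hypothetical journal, purely to illustrate the arithmetic:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year impact factor for year Y: citations received in Y
    to articles published in Y-1 and Y-2, divided by the number of
    articles published in Y-1 and Y-2."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 1,200 citations in 2013 to the 150 articles
# it published across 2011 and 2012.
print(impact_factor(1200, 150))  # 8.0
```

So a journal whose recent articles are each cited about eight times a year in this window would carry an impact factor of 8 - high enough to rank among well-regarded specialty journals, though still far below the flagship titles named above.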

No scientific paper is intended as the be-all and end-all of truth. That is how the scientific method works, and where its beauty lies: each discovery is "true" only until new experimental evidence comes along to refute it. Peer review cannot guarantee that a paper's results will hold up over time. But it does act as a gatekeeper, or first responder, trying to ensure that the papers published in scientific journals are experimentally and logically sound.
