
Adventures in peer review



Presentation Transcript


  1. Adventures in peer review – Pippa Smart, PSP Consulting. Karachi, December 2010

  2. So, what’s wrong with peer review? Where do I start … • It doesn’t find errors • It doesn’t spot plagiarism • It’s hard to find good reviewers • A lot of effort for little return? • It delays publication • Reviewers are biased • Poor-quality comments • Good reviewers get overloaded • Not enough reviewers • It is censorship of new ideas

  3. “It is slow, expensive, largely a lottery, poor at detecting errors and fraud, anti-innovatory, biased, and prone to abuse” (Richard Smith, BMJ Blogs, March 22, 2010)

  4. Does anyone have a good word …

  5. Researchers complain … 2009 survey of 4000 researchers: • Technology has improved things, but adequate guidance is missing • Why don’t journals provide better guidance? • Journals ask the wrong people • Why don’t they know who has adequate subject knowledge? (www.Senseaboutscience.org)

  6. … and reviewers disagree with each other … • Study of 2264 papers receiving 5881 reviews at the Journal of General Internal Medicine • Reviewers’ agreement was little better than chance • Editors nevertheless continue to place considerable weight on reviewers’ recommendations
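
As an illustrative aside, “agreement little better than chance” is usually quantified with a chance-corrected statistic such as Cohen’s kappa. The short Python sketch below uses entirely hypothetical reviewer recommendations, not the Journal of General Internal Medicine data, to show how such a figure is computed.

  def cohens_kappa(ratings_a, ratings_b):
      """Cohen's kappa for two reviewers' categorical ratings of the same papers."""
      n = len(ratings_a)
      categories = set(ratings_a) | set(ratings_b)
      # Observed agreement: fraction of papers where both reviewers gave the same rating.
      observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
      # Agreement expected by chance, from each reviewer's marginal rating frequencies.
      expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories)
      return (observed - expected) / (1 - expected)

  # Hypothetical recommendations for ten manuscripts (illustration only, not real journal data).
  reviewer_1 = ["accept", "revise", "reject", "revise", "accept",
                "reject", "revise", "revise", "accept", "reject"]
  reviewer_2 = ["revise", "revise", "accept", "revise", "reject",
                "reject", "accept", "revise", "revise", "reject"]
  print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.23: only modestly better than chance

A kappa near 0 means the reviewers agree no more often than their rating frequencies alone would predict; values around 0.2, as in this toy example, are conventionally read as only slight agreement.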

  7. Why review at all?

  8. “Peer review is neither necessary nor sufficient for scientific progress” • “Journal review procedures are merely a means to the end, and the end is a journal that serves a useful function in the dynamic process of science” (Bruce Charlton, ex-editor of Medical Hypotheses)

  9. However … • … the editor left amid a storm over two papers on AIDS … • … and peer review was introduced … • “Medical Hypotheses is now dead: killed by Elsevier 11 May 2010. RIP.” (Bruce Charlton)

  10. Leave it to the readers?

  11. “Lightweight” review • PLoS • “PLoS ONE will … publish all papers that are judged to be technically sound” • Why? • “Too often a journal's decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership …” • IF = 4.351, rejection rate c.30%, c.14,000 submissions in 2010, 14,250 articles published by 18 November 2010

  12. Other supporters … • BMC Research Notes • “Publication … is dependent primarily on the [article’s] validity and coherence and whether the writing is comprehensible” • Why? • “…to reduce the loss suffered by the research community when results remain unpublished because they do not form a sufficiently complete story to justify the publication of a full research article” (280 articles since launch in January 2010)

  13. Provide metrics so readers can evaluate and grade

  14. “Post-publication review is spotty, unreliable, and may suffer from cronyism” (Phil Davis, Scholarly Kitchen Blog, July 14, 2010)

  15. So how do we solve this?

  16. # 1: It takes too long … • Elsevier three-tier publication model • On acceptance – ‘Article in Press’ • When finalised – ‘Issue in Progress’ • Publication – by issue • BMC article-by-article publishing • No delay for an issue; articles published as accepted • ArXiv.org and early drafts in repositories • Upload working paper, submission and pre-print

  17. # 2: It’s hard to find (good) reviewers • Built-in search tools: Elsevier provide editors with Scopus search • Guidelines and training – BMJ • PeerChoice for Chemical Physics Letters • Registered users alerted to new articles • Users download the articles with a promise to review them • Still in trial • Get the authors to do the hard work • Biology Direct: • Authors select 3 reviewers from the Editorial Board • Reviewers have 72 hours to agree to review or not • Agreement tacitly implies acceptance (with revisions)

  18. # 3: Avoiding “cronyism” • Biology Direct • Reviewer-author pairing – no more than 4 articles each year • No more than 2 articles per year with the same three reviewers

  19. # 4: Protecting against bias and censorship • Atmospheric Chemistry and Physics • Combined closed and open review • EMBO and Biology Direct • Published reviewer reports • BMJ • Make it all open (to author and reviewer, not reader) • Journal of Medical Internet Research • Reviewer names published under each accepted article

  20. # 4: Protecting against bias and censorship (continued) • Doesn’t that just change the problem?

  21. # 5: Minimising reviewer overload • Nature Publishing Group • Simplified method to redirect rejected manuscripts to other Nature journals • Nature Communications at the bottom of the chain • BioMed Central • Method to cascade articles from high-ranking journals to others, sharing reviews

  22. Is anybody being more adventurous?

  23. Neuroscience consortium • http://nprc.incf.org/ • Launched January 2008 • 42 participating journals (and growing) • Sharing reviews if article rejected by one journal and submitted to another • Voluntary, confidential, no central organisation • Slow take-up

  24. Neuroscience consortium • J Neuroscience: the largest journal; sends on c.5% of rejections (c.175 articles per year) • J Neurophysiology: receives c.5% of articles; has sent on only 8 articles • J Comparative Neurology: receives 3-5% of articles; sends on 1-2% of articles

  25. Positive effects? • J Neurophysiology • 10% of articles received were accepted with no re-review (cf. <1% of new submissions) • 80% of redirects accepted (cf. 45% of new submissions) • Review time reduced from 82 to 73 days • Many articles re-reviewed and/or revised

  26. Open and closed reviewing: Atmospheric Chemistry and Physics

  27. Positive effects? • c.700 articles published each year • Each article receives 4-5 reviewer comments • 1 in 4 articles receives a public comment • 3 of 4 reviewer comments are posted anonymously • Rejection rates of 10-20% • Better-quality submissions and greater evolution of articles • Journal growth of 20-50% in articles each year • 2010 IF 4.881

  28. In summary? • More options for peer review • More experimentation • More acceptance of different methodologies

  29. Thank you! Any questions? Pippa Smart Pippa.smart@gmail.com
