The blog Retraction Watch raises an interesting topic, highlighting an article which argues that the peer review process has trouble identifying "breakthrough" papers:
“[…]Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular; roughly the top 2 percent. Of those 14 articles, 12 were desk-rejected. This finding raises concerns regarding whether peer review is ill-suited to recognize and gestate the most impactful ideas and research.”
(Siler et al., 2015)
Whether citations should be used as a measure of quality is another question. But if we put this quotation in the context of Hjørland's article "Methods for evaluating information sources", it seems there are several problems with peer review practice:
“Peer review is heavily discussed and criticized, and experiments such as [7–12] indicate that the reliability of the evaluations is low. “Chubin and Hackett found in a survey of members of the Scientific Research Society that only 8% agreed that ‘peer review works well as it is’ (p. 192).” These criticisms are indeed serious, but the problem is that nobody has yet been able to suggest a better evaluation process that has won broad consent.” (Hjørland, 2012, p. 260)
He further notes that reviewers are more likely to reject ideas that contradict their own perspective. This can create a dependence between authors and journals, putting more weight on researchers' information retrieval skills to find differing views on a research topic. Or as Hjørland concludes his text:
“Good, scholarly reading is to be aware of different perspectives, and to situate oneself among them.” (Hjørland, 2012, p. 266)
So is peer review good enough? Well, I would say that there is room for improvement. But do we have a better alternative?
Hjørland, B., 2012. Methods for evaluating information sources: An annotated catalogue. Journal of Information Science, 38(3), pp. 258–268.
Siler, K., Lee, K. & Bero, L., 2015. Measuring the effectiveness of scientific gatekeeping. PNAS, 112(2), pp. 360–365.