Peer review (as the publishers claim) is designed to assess the validity, quality, and often the originality of articles for publication. Its ultimate purpose is to maintain the integrity of science by filtering out invalid or poor-quality articles.
From a publisher’s perspective, peer review functions as a filter for content, directing better quality articles to better quality journals and so creating journal brands.
Different journals use different types of peer review. At least four main types of peer review process are in common use:
Single-Blind: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report.
Double-Blind: the reviewers do not know the names of the authors, and the authors do not know who reviewed their manuscript.
Open Peer: the authors know who the reviewers are, and the reviewers know who the authors are. If the manuscript is accepted, the named reviewer reports are published alongside the article and the authors’ responses to the reviewers.
Transparent Peer: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report. If the manuscript is accepted, the anonymous reviewer reports are published alongside the article and the authors’ responses to the reviewers.
The peer review system is not without criticism. Studies show that even after peer review some articles still contain inaccuracies, and that most rejected papers go on to be published somewhere else.
Despite criticisms, peer review is still the only widely accepted method for research validation and has continued with relatively minor changes for some 350 years.
The impact factor (IF) is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to measure the importance or rank of a journal by counting the number of times its articles are cited.
The calculation of the Impact Factor is based on a two-year period: the number of times articles were cited is divided by the number of citable articles published in that period. To illustrate, the 2010 IF of a journal is calculated as follows:
- A = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010.
- B = the total number of “citable items” published in 2008 and 2009.
- A/B = 2010 impact factor
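Expressed as a small Python function (the citation counts below are hypothetical, chosen to yield an IF of exactly 2.0):

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Journal Impact Factor for a given year Y.

    citations     -- A: times that articles published in years Y-1 and Y-2
                     were cited by indexed journals during year Y
    citable_items -- B: total "citable items" published in years Y-1 and Y-2
    """
    return citations / citable_items

# Hypothetical journal: 480 citations received in 2010 to the
# 240 citable items it published across 2008 and 2009.
print(impact_factor(480, 240))  # → 2.0
```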
If one publishes an article in a journal with an Impact Factor of 2.0, the article must live up to the average citation performance of that journal: it must be cited at least twice a year over the two succeeding years, or else it will contribute to lowering the journal’s IF.
The belief that peer-reviewed publications should be the metric for research success seems to be rooted in the following assertions that are accepted as truths:
- A peer-reviewed publication conveys more research authority because its findings have been vetted and then accepted by other members of the discipline.
- The peer-review process is anonymous, more competitive and (supposedly) more objective in its selection process.
- Peer-reviewed publications are more prestigious and convey the expertise of the researcher.
These assertions are oversimplifications that erase the real nuance of peer-review publication. The peer-review process leaves the fate of someone’s research findings subject to the whims of two or three people who, like all of us, are influenced by variables including their own natural preferences for certain kinds of work. It is just a generalization to claim that peer-reviewed publications are always more selective or rigorous. (Admittedly, however, a peer-reviewed publication will almost always take longer to appear in print, which, for some people, adds to the genre’s perceived rigour.)
Peer-reviewed publications are simply not the only place where intellectual conversations are happening and where a researcher might want to share their ideas.
Overemphasis on peer-reviewed, high-IF journals for publishing has resulted in more replicative and conformist research. In my limited experience, I have found that the most creative people produce work that would not make sense in many of the “top” academic journals. They would therefore do better to follow the path of Influencing without High IF Publishing, through usable, useful and timely preprints, and so overcome the limitations inherent in High IF Publishing without Influencing.
A great research paper is not enough by itself; it requires development, mobilisation, and exposure. A preprint is a version of a scientific manuscript posted on a public server prior to any formal peer review. Once posted, the preprint becomes a permanent part of the scientific record and is citable with its own unique DOI. By sharing one’s research early, one can accelerate the speed at which science moves forward.
One Rewarding Story:
I am gratified that, instead of waiting to publish my research on MODELLING SPREAD OF CORONA VIRUS USING ADAPTED BASS MODEL (ResearchGate DOI: 10.13140/RG.2.2.30944.43522) in esoteric journals with high impact factors, which would have taken their own sweet time to convey rejection, requests for revision, or acceptance and publication, I chose to publish it as a non-peer-reviewed preprint under the Creative Commons licence CC BY-NC. This licence allows others to remix, adapt, and build upon my work non-commercially; their new works must acknowledge me and be non-commercial, but they do not have to license their derivative works on the same terms.
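For readers unfamiliar with the model named in that title: the classic Bass diffusion model, on which the preprint's adaptation builds, can be sketched in discrete time as below. This is the standard textbook form, not the specific adaptation from the preprint, and the parameters are hypothetical.

```python
def bass_adopters(p: float, q: float, m: float, steps: int) -> list[float]:
    """Cumulative adopters under the classic Bass diffusion model.

    p -- coefficient of innovation (external influence)
    q -- coefficient of imitation (word-of-mouth)
    m -- total potential adopting population

    Each time step, new adopters = (p + q * N / m) * (m - N),
    where N is the cumulative number of adopters so far.
    """
    N = 0.0
    curve = []
    for _ in range(steps):
        N += (p + q * N / m) * (m - N)
        curve.append(N)
    return curve

# Hypothetical parameters: weak external influence, strong imitation.
curve = bass_adopters(p=0.01, q=0.3, m=1000.0, steps=50)
# The cumulative curve is S-shaped and approaches m.
```

Epidemic adaptations reinterpret the “adopters” as infected individuals, with the imitation term playing the role of contagion; the exact mapping used in the preprint is not reproduced here.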
The benefit of giving free access to my Applied Managerial Research is that it has been pursued, acknowledged, utilised and improved upon quickly and proactively. Here are the citations that I am now aware of (in chronological order of appearance, not conforming to citation styles):
- Zeny L Maureal, Jovelin Lapates, Madelaine S Dumandan, Vanda Kay B. Bicar and Derren Gaylo (of Bukidnon State University, Philippines) (August 2020) “Adapted Bass Diffusion Model for the Spread of COVID-19 in the Philippines: Implications to Interventions and Flattening the Curve” International Journal of Innovation, Creativity and Change. www.ijicc.net Volume 14, Issue 3, 2020
- Ted G Lewis (of Naval Postgraduate School, Monterey, CA, United States) and Waleed I. Al (of Bahrain Defence Ministry, Manama, Bahrain) (March 2021) “Predicting the Size and Duration of the COVID-19 Pandemic” Frontiers in Applied Mathematics and Statistics. Vol.6. DOI: 10.3389/fams.2020.611854
- Ted G Lewis (of Naval Postgraduate School, Monterey, CA, United States) (July 2021) “Emergence of Contagion Networks from Random Populations” ResearchGate, preprint.
The above story has enthused me to do more creative, out-of-the-box research and not to care too much about getting it endorsed and accepted by two or three unknown peers. I would prefer to share my research with the world without delay and leave it to a more inclusive, extensive and democratic review by fellow researchers and audiences. I have been actively involved in delivering keynote addresses at non-academic conferences and writing hard-hitting op-ed pieces that shape public policy. These have often been contrarian approaches to conformist thinking.
The views of pure-applied research which I wish to lay before you have sprung from the soil of abstraction of observations into model-building and therein lies their strength. They are radical. Henceforth management-theory by itself, and management-practice by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality of managerial wisdom.
First Published 19 July 2021