Politics in peer review: Will artificial intelligence ever override the human element?

The peer review process has its flaws. Could artificial intelligence do a better job than humans?

Grant funding, fellowships, and even job opportunities are largely based on the volume of publications in high-quality journals. The peer review process plays a key role in improving the quality of manuscripts, maintaining journal integrity, and ensuring that manuscripts fit a particular journal’s scope. Careful, detailed reviews can significantly improve the quality of the published research and identify new avenues for future work. However, a large portion of peer review is subjective, and reviewers are often demonized as evil gatekeepers. Would artificial intelligence bring greater objectivity to peer review?

 

The need for more objectivity in peer review:

Personal bias is often an unfortunate, conspicuous presence in the peer review process. Reviewers may be more critical of manuscripts from competing labs or from groups with opposing philosophies. On the other hand, some reviewers may show favoritism toward their friends, potential future collaborators, or investigators in their grant study section. The structure of peer review attempts to defuse prejudice and nepotism: typically, multiple reviewers are assigned to a manuscript, and authors are allowed to recommend reviewers to include or exclude. Ultimately, an editor presides over the process, moderates the reviewers’ comments, and can overrule the reviewers in cases of perceived bias or conflicting recommendations.

 

Variables for a reviewing algorithm:

Artificial intelligence (AI) is based on computer algorithms designed to take actions toward a specified goal. As the program receives feedback over time, the algorithm is refined, allowing it to ‘learn through experience’. First, we would need to define what makes a manuscript acceptable versus unacceptable for publication. A computer algorithm could use statistical measures to assess the validity of experiments, identify weaknesses in experimental data, and check syntax and grammar. Eventually, AI could learn to follow the logic of a manuscript’s argument and assess whether the hypothesis is well supported. The program could tabulate the use of scientific jargon and eventually adopt a “scoring system” to quantify the components of a solid scientific study. Over time, AI could integrate all of these variables and refine their parameters. Would this be enough to separate “good” papers from “bad” ones?
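
To make the scoring idea concrete, here is a minimal sketch of what such a system might look like. The criteria, weights, and acceptance threshold below are purely hypothetical assumptions for illustration; a real system would presumably have to learn them from editorial feedback rather than have them hand-coded.

```python
# Hypothetical sketch of a weighted "manuscript score" built from automated checks.
# The criteria, weights, and threshold are illustrative assumptions, not a
# description of any real journal's or tool's pipeline.

CRITERIA_WEIGHTS = {
    "statistical_validity": 0.4,  # e.g., appropriate tests, reported effect sizes
    "clarity_of_writing":   0.3,  # e.g., grammar and readability checks
    "logical_support":      0.2,  # e.g., do the conclusions follow from the data?
    "scope_fit":            0.1,  # e.g., topical match to the journal
}

ACCEPT_THRESHOLD = 0.7  # arbitrary cutoff for this sketch


def score_manuscript(criterion_scores: dict) -> float:
    """Combine per-criterion scores (each in [0, 1]) into one weighted score."""
    return sum(
        CRITERIA_WEIGHTS[name] * criterion_scores.get(name, 0.0)
        for name in CRITERIA_WEIGHTS
    )


def recommend(criterion_scores: dict) -> str:
    """Turn the weighted score into a coarse editorial recommendation."""
    total = score_manuscript(criterion_scores)
    return "consider for review" if total >= ACCEPT_THRESHOLD else "flag for editor"


if __name__ == "__main__":
    example = {
        "statistical_validity": 0.8,
        "clarity_of_writing": 0.9,
        "logical_support": 0.6,
        "scope_fit": 0.7,
    }
    print(score_manuscript(example), recommend(example))  # 0.78 consider for review
```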

 

Limitations of artificial intelligence:

One limitation is that a computer algorithm will inherently contain some form of bias. The algorithm may produce a score based on a weighted average of various criteria, so the program (or its designers) would have to decide which criteria are weighted more heavily than others. Statistically speaking, a manuscript may receive a high weighted-average score, but that does not necessarily mean its content is more meaningful: a quality paper is greater than the sum of its parts.
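
As a toy illustration of that pitfall (again with invented weights and scores), a polished but incremental manuscript can outscore a rougher but more significant one:

```python
# Toy illustration: a weighted average can rank a polished but incremental paper
# above a rougher, more significant one. Weights and scores are invented.

weights = {"stats": 0.4, "writing": 0.4, "novel_insight": 0.2}

polished_incremental = {"stats": 0.9, "writing": 0.9, "novel_insight": 0.3}
rough_but_important  = {"stats": 0.7, "writing": 0.6, "novel_insight": 1.0}

def weighted_score(scores):
    return sum(weights[k] * scores[k] for k in weights)

print(weighted_score(polished_incremental))  # 0.78
print(weighted_score(rough_but_important))   # 0.72
```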

The quality of a manuscript also depends on its degree of innovation, significance, and potential impact on the field, and these characteristics aren’t easily quantified. Additionally, AI would have a limited ability to provide constructive feedback. Expert reviewers know the historical context of a manuscript and can offer helpful insight into how to elevate the quality of a paper. They can identify creative strategies to strengthen a hypothesis, expand future directions, and foster collaborations between seemingly divergent fields of study or technologies. Thoughtful peer reviews might also identify potential transfer options when the scope of a paper doesn’t align with that of the journal.

 

Importance of human engagement:

If a computer algorithm can be designed to review manuscripts, the next logical step would be for computer algorithms to write them. Although there is a logical flow to a good scientific paper, there is also an artistic element of storytelling. A manuscript isn’t just a composition of data, statistical analyses, and conclusions. A good story navigates the historical context and maneuvers the reader through the evidence, ultimately convincing the reader to continue the journey through the paper and reach the same conclusions as the author.

Editorial judgments are not simply a matter of tallying votes. They often require a degree of creativity and emotional intelligence that is not easily programmed.

 

 

Future discussion:

Please share your thoughts! Would the peer review process benefit from artificial intelligence? How much of the human aspect should be handed over to computers? And how can we implement more rigorous guidelines for reviewers while still encouraging experts to find the time?
