
Will ChatGPT Disrupt Peer Review? Impact of AI on the Hallmark of Scientific Vigilance

How long will it take for ChatGPT to challenge the hallowed tradition of scholarly scrutiny and validation, the peer review system? Never!

While debate continues over whether artificial intelligence (AI) can fully replace human peer reviewers, it is already being used to assist with certain aspects of the peer review process. For example, some publishers use AI tools to screen submissions for plagiarism or to identify potential conflicts of interest, while others use AI to help identify suitable reviewers based on their areas of expertise and past performance.
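
As an illustration of the reviewer-matching idea, here is a minimal sketch of expertise-based matching using TF-IDF cosine similarity. The reviewer profiles, abstract, and scoring approach are all hypothetical simplifications; real publisher systems draw on far richer signals such as publication history and past review quality.

```python
# Hypothetical sketch: ranking candidate reviewers by textual similarity
# between their expertise profiles and a manuscript abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviewer_profiles = {
    "Reviewer A": "deep learning for protein structure prediction",
    "Reviewer B": "peer review policy and research integrity",
    "Reviewer C": "natural language processing and text generation",
}

manuscript_abstract = "We study large language models for generating scientific text."

# Vectorize the abstract together with the profiles so they share a vocabulary.
corpus = [manuscript_abstract] + list(reviewer_profiles.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# Cosine similarity between the manuscript (row 0) and each reviewer profile.
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
for name, score in sorted(zip(reviewer_profiles, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```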

Peer review is not immune to scientific misconduct, since some reviewers may distort study findings intentionally or accidentally. Because of these limitations, several academics are investigating the use of AI tools and chatbots, such as ChatGPT, to aid in the peer review process. However, the extent to which AI can completely replace human reviewers remains a subject of debate.

The STM publishing industry is focused on making informed judgments about AI's use in peer review and establishing policies to govern it. To combat scientific misconduct and ensure ethical scholarly publishing, Wiley, Hindawi's parent company, identified over 1,200 papers with compromised peer review, largely from special issues, for retraction. This follows the retraction of 511 articles from 16 journals in 2021 for the same reason; the new cases were uncovered by the same investigation that led to those initial retractions.

Could ChatGPT-generated Text Deceive Reviewers and Readers?

ChatGPT raises concerns about its potential to deceive readers and reviewers. As AI algorithms improve, it is critical to study whether ChatGPT-generated text can introduce biased or incorrect information, and how to prevent such problems. The use of ChatGPT-generated text in scientific research publishing and other fields can deceive reviewers and readers in various ways, including through plagiarism, bias and misinformation, impersonation, spam and scams, and fake reviews.

  • Plagiarism: AI algorithms can be trained to generate text that closely resembles existing content, which can be passed off as original work to deceive reviewers and readers. This type of plagiarism can be difficult to detect, as the ChatGPT-generated text may be slightly altered from the original, giving the impression that it is unique (a minimal sketch of one way to flag such near-duplicates follows this list).
  • Bias and Misinformation: AI algorithms can be trained on biased or inaccurate data, which can lead reviewers and readers to be unaware that the information they are reviewing or reading is biased or incorrect. This can have significant implications for the credibility of scientific research and other fields.
  • Impersonation: ChatGPT can be used to generate text that mimics a specific individual’s or group’s writing style, allowing for impersonation and deceit. This can lead reviewers and readers to believe that a work was authored by a certain individual or group when, in fact, it was not.
  • Spam and Scams: Since ChatGPT can generate large volumes of low-quality or spam content, such text can be used to deceive readers and drive traffic to particular websites. This type of content can be used to promote scams or spread false information, with negative implications for both individuals and society as a whole.
  • Fake Reviews: Text generated with ChatGPT assistance can be used to generate fake reviews for products or services, leading consumers to make purchasing decisions based on false information. This can have significant implications for businesses, as well as for the individuals who rely on accurate reviews to make informed decisions.
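
To make the plagiarism point above concrete, here is a hedged sketch of one simple way a screening tool might flag lightly altered text: comparing character n-gram overlap between two passages. Production plagiarism detectors work against large document indexes and increasingly use semantic models; the example passages and threshold below are illustrative assumptions only.

```python
# Illustrative sketch: flagging lightly paraphrased text via n-gram overlap.
def char_ngrams(text: str, n: int = 5) -> set:
    """Return the set of lowercase character n-grams in a text."""
    text = " ".join(text.lower().split())  # normalize whitespace
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard overlap of character n-gram sets; 1.0 means identical."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

original = "Peer review is the cornerstone of scholarly publishing."
altered = "Peer review remains the cornerstone of scholarly publication."

score = jaccard_similarity(original, altered)
print(f"similarity: {score:.2f}")  # a high score suggests near-duplication
if score > 0.5:  # threshold is an illustrative assumption
    print("Flag for manual inspection.")
```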

ChatGPT-generated text in scientific research publishing and other fields requires careful consideration and verification to ensure that the information being presented is accurate and trustworthy. Reviewers and readers must remain vigilant in detecting potential instances of deception and taking steps to mitigate the risks associated with the use of AI-generated text.

AI in the Current Peer Review System

Like many other processes, peer review has seen AI make inroads. One example is Raxter.io, a web-based platform that provides a range of features designed to assist reviewers in providing feedback to authors. The platform uses AI algorithms to analyze manuscripts and identify potential issues, such as inconsistent formatting, unclear language, and incomplete references. It also suggests how to improve the manuscript, for example by restructuring paragraphs or adding references.
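
Raxter.io's internals are not public, so the following is only an illustrative sketch of the kind of check such a tool might run: spotting numeric citations in the body text that have no matching entry in the reference list, and vice versa.

```python
# Hypothetical example of an "incomplete references" check.
import re

body_text = "Prior work [1] showed X, and later studies [2], [4] extended it."
reference_list = ["[1] Smith et al., 2020.", "[2] Lee, 2021.", "[3] Gupta, 2019."]

cited = set(re.findall(r"\[(\d+)\]", body_text))
listed = {m.group(1) for ref in reference_list for m in [re.match(r"\[(\d+)\]", ref)] if m}

missing = sorted(cited - listed, key=int)   # cited but never listed
unused = sorted(listed - cited, key=int)    # listed but never cited

if missing:
    print(f"Incomplete references: citation(s) {missing} have no reference entry.")
if unused:
    print(f"Possibly unused reference(s): {unused}.")
```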

The use of AI in peer review is still in its early stages, but it has the potential to significantly improve the efficiency and accuracy of the peer review process. Review assistant tools like Raxter.io and others are just the beginning of what is sure to be a rapidly evolving field.

Can ChatGPT and Other AI Tools Replace Human Reviewers?

In simple words, ChatGPT cannot replace human reviewers!

Here are some of the potential risks associated with using ChatGPT as a peer reviewer:

Lack of transparency: AI algorithms can be complex and opaque, which can make it hard to know how decisions are being made. This lack of transparency makes it challenging to identify and address potential biases or errors in the algorithm.

Over-reliance on AI: There is a risk that editors and reviewers may become overly reliant on ChatGPT and other AI tools, and fail to exercise their own judgment and expertise. This could lead to important scientific insights being missed or overlooked.

Technical issues: These tools can be prone to technical issues, such as errors in the algorithm or issues with the software. These technical issues could impact the accuracy and reliability of the peer review process.

Data privacy concerns: AI tools like ChatGPT and others rely on data, including personal data about authors and reviewers. There is a risk that this data could be misused or mishandled, leading to privacy concerns.

Unintended consequences: There is a risk that ChatGPT and other AI tools could have unintended consequences, such as perpetuating biases or leading to a reduction in the quality of feedback provided to authors.

These risks are not unique to ChatGPT or other AI tools and are also present in other aspects of the peer review process. However, it is essential that editors and peer reviewers are aware of these risks and take steps to mitigate them to ensure that the peer review process remains fair, transparent, and accurate.

Deciphering and Mitigating Data Privacy and Security Concerns Associated With ChatGPT-assisted Peer Reviewing

Like other AI technologies, ChatGPT relies on data, including personal information about authors and reviewers, and there is a possibility that this data will be misused or mismanaged, raising issues about privacy and security.

One concern is the risk of data breaches. If the data collected by ChatGPT is not properly secured, it could be vulnerable to hacking or other forms of unauthorized access, leading to the exposure of sensitive personal data.

Another concern is the risk of unintended data use. ChatGPT can collect large amounts of data, and there is a risk that this data could be used for unintended purposes, such as identifying potential conflicts of interest or biases, or even for commercial purposes.

Additionally, there is a risk that AI tools could perpetuate biases in the data. ChatGPT algorithms are only as good as the data they are trained on, and if the data is biased, the algorithm could also be biased. This could lead to biased recommendations or decisions, which could have significant consequences for authors and reviewers.
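
One simple, hedged illustration of auditing for this kind of bias is to compare a tool's recommendation rates across author groups; the data below is entirely made up for demonstration, and a real audit would use proper statistical tests.

```python
# Illustrative bias audit: comparing acceptance-recommendation rates by group.
from collections import defaultdict

# (group, recommended) pairs; hypothetical data for illustration only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, recommended in decisions:
    totals[group] += 1
    positives[group] += recommended  # True counts as 1

for group in totals:
    rate = positives[group] / totals[group]
    print(f"{group}: recommendation rate {rate:.0%}")
# A large gap between groups is a signal to investigate the training data.
```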

To address these issues, editors and peer reviewers must ensure that any AI technologies they use, including ChatGPT, are consistent with data privacy and security legislation, and that suitable measures are in place to protect sensitive personal data. This may involve the use of secure servers, encryption, and access restrictions to guarantee that only authorized personnel have access to the data.
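
As a concrete, minimal example of the encryption safeguard mentioned above, the sketch below encrypts a reviewer record at rest using the `cryptography` package's Fernet recipe (pip install cryptography). The record contents are hypothetical, and a real deployment would keep the key in a dedicated secrets manager rather than generating it alongside the data.

```python
# Minimal sketch of encrypting sensitive reviewer data at rest.
from cryptography.fernet import Fernet

# In practice the key lives in a secrets manager, never next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

reviewer_record = b'{"name": "Jane Doe", "conflicts": ["Acme Labs"]}'

token = fernet.encrypt(reviewer_record)  # ciphertext safe to store
restored = fernet.decrypt(token)         # only key holders can read it

assert restored == reviewer_record
print("Encrypted record:", token[:32], b"...")
```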

Before collecting or using authors’ or reviewers’ data for any purpose, editors and peer reviewers should be clear with them about how their data will be used and obtain their explicit consent. This might involve laying out clear privacy policies that detail how data will be used and stored.

The Future of Peer Review Is Human-Machine Hybrid-augmented Intelligence!

AI tools can assist with certain aspects of the review process, such as identifying potential conflicts of interest or language biases, but they cannot replace the expertise and judgment of human reviewers.

Human reviewers bring a level of subject matter expertise and critical thinking that cannot be replicated by AI tools. They can identify important scientific insights, evaluate the quality of the research methodology, and provide nuanced feedback that goes beyond what an algorithm can offer.

Additionally, human reviewers can take into account the broader context of the research, such as the relevance of the research to the field, the potential impact of the research, and the potential ethical implications of the research.

Furthermore, the peer review process is not just about evaluating the quality of the research, but also about providing feedback and guidance to authors to help them improve the quality of their manuscripts. Human reviewers can provide personalized feedback that takes into account the specific strengths and weaknesses of each manuscript, helping authors to refine their research and improve the quality of their writing.

Key Takeaway!

The integration of AI tools into the peer review process can be beneficial in assisting with certain tasks, such as language editing and conflict-of-interest detection. However, the use of AI tools must be continually evaluated and responsibly implemented to ensure that they do not perpetuate biases or degrade the quality and reliability of the scholarly literature. The expertise and judgment of human reviewers will always be essential in ensuring the rigor and dependability of the peer review process, and AI tools should be viewed as complementary aids rather than replacements.
