Ethical and Legal Risks of Using Generative AI in Academic Research

By Dr. Vlad Krotov

The advent of Generative Artificial Intelligence (GAI) has transformed academic research. GAI continues to expand the limits of what is possible for scholars as they use its capabilities to enhance their intellectual endeavors.

Using GAI in academic research carries ethical and legal risks, however. A large gray area still surrounds the legality and ethics of using GAI in academic research, and researchers, legal experts, and the general public have yet to clarify it. To navigate these uncharted waters, the academic community must tread cautiously.

This article examines the ethical and legal risks associated with the use of GAI in academic research. Questions surrounding authorship, plagiarism, bias, transparency, and the value of academic research dominate the ethical landscape. Data privacy, intellectual property rights, and copyright law are the most pressing legal concerns. The sections below explore these issues in more detail.

This article is not intended to provide legal advice or to serve as a final judgment on what is illegal or unethical about GAI use. If you need legal advice on GAI, consult a licensed legal professional. This article merely aims to raise awareness of the legal and ethical risks that academics face when using GAI.

Copyright

Some GAI tools train their models on copyrighted materials. Using GAI to create your text may therefore result in your using copyrighted content without giving the original sources proper attribution. In addition, if a substantial portion of your text was written by GAI, then the text is not your own original and creative work, and you may be unable to claim copyright over it. The situation becomes even more complicated when you submit your article to a journal. Most journals require authors to transfer copyright to them upon publication. Because using GAI may leave you without complete ownership of the text, you may have no copyright to transfer. Additionally, some of the ideas in your text may have been borrowed from another author. As a result of all these legal implications, some journals and conference proceedings have decided not to review articles containing AI-generated text.

Data Privacy

Research shows that Internet users trust search engines like Google and Bing with their most intimate thoughts and intentions (for example, when researching a disease or looking up information about someone they know). Academics may extend the same trust to GAI tools, submitting confidential or copyrighted data as part of their "prompts". GAI tools generally outline how they use user data in their privacy policies, often stating that they store user information for a specific period of time. If that is the case, then submitting such data to GAI tools may amount to handling confidential or copyrighted material in an unauthorized manner.

Plagiarism

The problem that some GAI tools rely on copyrighted text to train their models is exacerbated by the fact that these tools are not very meticulous about citing their sources. GAI obtains most of its "knowledge" from various Web sources. When you use GAI-generated text in your own work, you may be using someone's data or ideas without giving them proper attribution. In academia, this is considered a weak form of plagiarism.

Authorship

If substantial portions of your work were written by GAI, your authorship of that work can be called into question. Authorship attribution tools can compare your current, GAI-generated writing samples with your previous work. If it can be proven that you did not write much of your dissertation or of the academic articles in your tenure portfolio, your doctoral degree and tenure can be revoked.

Quality

A number of GAI literature review tools are meticulous about citing sources. However, it is unclear how these sources are selected for generating text. For example, when "writing" a literature review, GAI tools often omit most of the "seminal papers" in a field, probably drawing instead on whatever papers happen to be available. Sometimes GAI produces misleading or incorrect text (e.g., citing papers that do not exist or providing incorrect factual information). A human journal reviewer who is knowledgeable about your topic can easily detect these issues in your work. All these quality problems may damage your reputation as an expert in your field.

Bias and Discrimination

Humans are still superior to AI tools when it comes to emotional intelligence. Some GAI tools may be incapable of addressing sensitive topics appropriately. When dealing with such topics, humans possess a degree of emotional and social intelligence that makes them more cautious and responsible. GAI tools do have built-in mechanisms intended to ensure that sensitive topics are handled appropriately (for example, prefacing a response with a lengthy disclaimer that the question is highly controversial and may elicit a range of strong opinions). In spite of this, GAI tools are not intelligent enough to fully understand why their responses may upset some people. As a result, "insensitive" text containing implicit or explicit discrimination or bias can find its way into your papers and books.

Academic Goodwill

In the era of GAI, school teachers and college instructors are increasingly wary and suspicious of students' work. GAI can generate an essay or a detailed case study in a few seconds, and AI-detection tools will fail to flag the text if it is paraphrased and edited. The academic community will also become less trusting when evaluating someone's work. When reading somebody's article, reviewers and editors will keep wondering: "Did this person write this text?" "Am I reading creative, original ideas, or is this a useless article generated by AI for the purpose of getting another 'hit' required for tenure or promotion?" Contrary to what many people think, AI text detection tools are becoming more reliable and valid. Despite this, these tools can still lead to false accusations against authors based solely on text analysis. All these issues are likely to strain relationships among academics, especially when a researcher is not transparent about his or her use of GAI.

Reputation of Academic Research

GAI can further damage the reputation of academic research among the public. In many fields, academic research is already criticized as "useless". With GAI, it is becoming increasingly easy to generate academic papers that merely reshuffle existing material and offer nothing new. Some of the most popular academic journals, already "flooded" with submissions, are likely to receive even more in the future. This will strain their article processing capabilities and make it harder for researchers to find "gems" in the ocean of submissions. Moreover, AI-based referencing tools make it easier to generate long lists of references and citations, a practice some academics view as "citation spam". This further undermines the reputation of academic research.

Conclusion

To mitigate these ethical and legal risks, researchers must be vigilant in upholding existing ethical standards in academic research, complying with existing legal frameworks related to privacy and copyright, being transparent about their use of GAI, and critically evaluating the outputs of GAI models. Open dialogue among members of the academic community, together with clear guidelines, can help the community navigate the complex landscape of GAI-powered research while preserving the integrity and respect of scholarly work.