Policy for Ethical Use of Conversational Generative AI Chatbots in Student Assignments

1. Introduction:

This policy serves as a framework for the ethical use of Conversational Generative AI Chatbots (GAI Chatbots) by students in their assignments. It aims to strike a balance between harnessing the potential of these tools for learning and research while upholding academic integrity and ethical conduct. Students are encouraged to embrace the educational value of GAI Chatbots while respecting the principles outlined in this policy.

2. Purpose:

The purpose of this policy is to ensure that students employ GAI Chatbots in a responsible and ethical manner that does not impede their academic progress and fosters the development of critical thinking and problem-solving skills.

3. Responsible Use:

Students using GAI Chatbots should adhere to the following principles of responsible use:

a. Academic Honesty: Students are expected to uphold academic integrity and honesty in their assignments. The use of GAI Chatbots should not involve plagiarism or submitting work as their own if it is primarily generated by the chatbot.

b. Proper Attribution: If students utilize GAI Chatbots to generate content, they should appropriately attribute the assistance provided by the chatbot and make it clear that the content is generated with the help of AI.

c. Independent Learning: GAI Chatbots can serve as educational aids, but students should not overly rely on them to complete assignments. The primary goal is for students to develop their own skills and understanding.

4. Privacy and Data Security:

Students should ensure that they do not share personal or sensitive information while interacting with GAI Chatbots. They should also be cautious when using chatbots that require data input, making sure they comply with privacy regulations and institution-specific policies.

5. Contextual Applicability:

Students should evaluate the context in which they use GAI Chatbots. These tools may be more suitable for certain assignments and less so for others. Students should assess whether using a GAI Chatbot aligns with the learning objectives of their assignment.

6. Critical Thinking and Review:

Students should critically review and assess the output generated by GAI Chatbots. They are encouraged to verify and refine the content provided by the chatbot to ensure its accuracy and relevance to the assignment.

7. Collaboration and Peer Review:

Students are encouraged to collaborate with peers and instructors to review and discuss the content generated by GAI Chatbots. Peer review and discussion can help improve the quality of work and reinforce ethical practices.

8. Compliance with Academic Institution Policies:

Students must adhere to the academic policies and guidelines of their respective institution regarding the use of AI tools in assignments and should be responsible for utilizing the GAI Chatbot according to these policies. This includes understanding any specific rules or expectations related to GAI Chatbot use provided by instructors for specific courses and assignments. This policy should not override any of the GAI-related rules and instructions set forth by institutions, colleges, departments, specific programs, or individual instructors. 

9. Reporting Ethical Concerns:

If students encounter ethical concerns related to the use of GAI Chatbots, they should report these concerns to their instructors or academic institutions for appropriate guidance and resolution.

10. Accountability and Consequences:

Students are accountable for their adherence to this policy. Violations of these ethical guidelines may lead to academic penalties, including failing grades or disciplinary actions.

11. Review and Updates:

This policy should be reviewed periodically to ensure its relevance and compliance with evolving educational standards and technological advancements.

This policy was generated with the help of ChatGPT and can be used and distributed freely under the CC BY license.

Ethical and Legal Risks of Using Generative AI in Academic Research

By Dr. Vlad Krotov

Academic research has been transformed by the advent of Generative Artificial Intelligence (GAI). AI continues to expand the limits of what is possible for scholars as they utilize its capabilities to enhance their intellectual endeavors.  

Using GAI in academic research has its ethical and legal risks, however. There is still a large gray area when it comes to the legality and ethics of using GAI in academic research, which researchers, legal experts, and the general public need to clarify. To navigate these uncharted waters, the academic community must tread cautiously.

This article examines the ethical and legal risks associated with the use of GAI in academic research. Questions surrounding authorship, plagiarism, bias, transparency, and value of academic research dominate the ethical landscape. Data privacy, intellectual property rights, and copyright law are pressing legal concerns. This article explores these ethical and legal issues in more detail. 

This article is not intended to provide legal advice or to serve as a final judgment regarding what is illegal or unethical regarding GAI use. A licensed legal professional should be consulted if you need legal advice on GAI. This article merely aims to raise awareness about the legal and ethical risks associated with academics using GAI. 

Copyright

Some GAI tools are trained on copyrighted materials. Using GAI to generate your text may therefore result in using copyrighted content without giving the original sources proper attribution. In addition, if a substantial portion of your text was written by GAI, then the text is not your own original, creative work, and you may not be able to claim copyright over it. The situation becomes even more complicated when you submit your article to a journal. Most journals require authors to transfer copyright to them upon publication. If using GAI means you do not have complete ownership of the text, you cannot transfer that copyright. Additionally, some of the ideas in your text may have been borrowed from another author. Because of these legal implications, some journals and conference proceedings have decided not to review articles containing AI-generated text. 

Data Privacy

Research shows that Internet users trust search engines like Google and Bing with their most intimate thoughts and intentions (for example, when researching a disease or finding information about someone they know). Similarly, academics can submit confidential or copyrighted data to GAI tools as part of their “prompts”. GAI tools generally outline how they use user data in their privacy policies, often stating that they store user information for a specific period of time. If so, then by using GAI tools you may be handling confidential or copyrighted data in an unauthorized manner.

Plagiarism

The problem that some GAI tools rely on copyrighted text to train their models is compounded by the fact that these tools are not very meticulous about citing their sources. GAI obtains most of its “knowledge” from various Web sources. When you use text generated by GAI in your own work, you may be using someone’s data or ideas without giving them proper attribution. In academia, this is considered a form of plagiarism. 

Authorship

If substantial portions of your work were written by GAI, your authorship of that work can be questioned. Authorship attribution tools make it possible to compare your GAI-generated writing samples with your previous work. If it can be proven that you did not write much of your dissertation, or of the academic articles in your tenure portfolio, your doctoral degree or tenure can be revoked. 

Quality

A number of GAI literature review tools are meticulous about citing sources. However, it is unclear how these sources are selected for generating text. For example, when “writing” a literature review, GAI tools often omit most of the “seminal papers” – they probably use whatever papers happen to be available. Sometimes, GAI produces misleading or incorrect text (e.g., citing papers that do not exist or providing incorrect factual information). These issues can easily be detected by a human journal reviewer who is knowledgeable about your topic. All these quality issues can damage your reputation as an expert in your field. 

Bias and Discrimination

Humans are still superior to AI tools when it comes to emotional intelligence. Some GAI tools may be incapable of addressing sensitive topics appropriately. When dealing with sensitive topics, humans possess a degree of emotional and social intelligence that makes them more cautious and responsible. There are mechanisms built into GAI tools to ensure sensitive topics are handled appropriately (for example, providing a lengthy disclaimer that the question is highly controversial and may elicit a range of strong opinions). In spite of this, GAI tools are not sufficiently intelligent to fully understand why their responses may upset some people. As a result, some of this “insensitive” text containing implicit or explicit discrimination or bias can find its way into your own papers and books. 

Academic Goodwill

In the era of GAI, school teachers and college instructors are increasingly edgy and suspicious of students’ work. GAI can generate an essay or a detailed case study in a few seconds, and AI-detection tools will not be able to detect the borrowed text if it is paraphrased and edited. The academic community will also become less trusting when evaluating someone’s work. When reading an article, reviewers and editors will keep wondering: “Did this person write this text? Am I reading creative, original ideas, or is this a useless article generated by AI for the purpose of getting another ‘hit’ required for tenure or promotion?” Contrary to what many people think, AI text detection tools are fairly reliable and valid. Even so, these tools can still lead to false accusations against authors based on text analysis. All these issues are likely to strain relationships among academics, especially when a researcher is not transparent about his or her use of GAI.

Reputation of Academic Research

GAI can further damage the reputation of academic research among the public. In many fields, there is already criticism that academic research is “useless”. With GAI, academic papers that merely reshuffle existing ideas and offer nothing new are becoming increasingly easy to generate. The most popular academic journals, already “flooded” with submissions, are likely to receive even more in the future. This will strain their article processing capabilities and make it harder for researchers to find “gems” in the ocean of submissions. Moreover, AI-based referencing tools make it easier to generate long lists of references and citations – something some academics view as “citation spam”. This further undermines the reputation of academic research.

Conclusion

To mitigate these ethical and legal risks, researchers must be vigilant in upholding existing ethical standards towards academic research, complying with existing legal frameworks related to privacy and copyright, ensuring transparency of their use of GAI, and critically evaluating the outputs of GAI models. Open dialogue among the members of the academic community as well as clear guidelines can help the academic community navigate the complex landscape of GAI-powered research while preserving the integrity and respect of scholarly work.

Three Strategies for GAI-Proofing Your Course

By Dr. Vlad Krotov

Generative AI has hit business educators like a freight train. Within a few months of its launch in late 2022, ChatGPT had acquired 100 million users, with some predicting 200 million by the end of 2023. Following suit, Google released its own conversational chatbot, Google Bard, this year. Because Google Bard can draw on Google’s search technology, it seems, unlike ChatGPT, to be more aware of recent news and developments.

Shortly after the release of ChatGPT by OpenAI, several professors from top business schools announced that ChatGPT was able to pass their exams. While some business school professors still act as if ChatGPT doesn’t exist, a growing number of educators believe that Generative AI is a disruptive technology that will quickly and permanently alter the century-old rules and pedagogical approaches in business education. If this is true, then educational institutions must introduce changes and create policies to ensure that students use this new, disruptive technology in a way that does not impede their learning.

Nowadays, most students know what ChatGPT is and how to use it to complete homework assignments. Mitigating the academic integrity issues associated with students’ use of GAI seems to be of the utmost importance to business schools, since academic integrity is essential to the quality of business education and is a formal requirement of all major international accreditation bodies, such as AACSB. In this article, I outline three simple strategies that business educators can use to mitigate academic integrity issues caused by GAI use, and I discuss the pros and cons of each. 

Punitive Strategy

Despite the rapid advancements in GAI, many educators choose to teach their courses “as is”. Some specify in their syllabi that the use of Generative AI for completing assignments is prohibited and punishable under the school’s Academic Integrity Policy. AI-detection tools, such as ZeroGPT, are used to monitor student submissions for AI-generated content. 

Pros

    • Trusting students to make ethical choices and punishing those who do not is an old, simple, and, perhaps, wise approach to ensuring academic integrity. When students want to cheat, they will find a way to do so – by using GAI, hiring someone to do their projects, or in some other way. It is important for educational institutions to have an admissions process that screens out students who are likely to cheat in the first place. Instructors should be able to trust most students and not act as investigators and prosecutors at all times. If a cheating student is caught, the punishment should be severe enough to deter others from even considering unethical behavior.  
    • This approach requires minimal effort on the part of faculty and the business school.
    • A business school may want to take this approach in the short term if they want to “wait and see” what happens with Generative AI in business education before making any important decisions or investments. 

Cons

    • AI-detection tools are often ineffective at detecting AI-generated text, though this limitation is not as serious as some educators believe. False positives are also quite common. Students can revise AI-generated text so that it is unlikely to be flagged by software such as ZeroGPT. Furthermore, proofreading tools such as Grammarly and WordTune can produce false positives as well. 
    • GAI is here to stay, most likely. It may not be wise to prohibit students from using GAI tools, since these tools may soon become essential in the business world. 
    • In the near future, business schools will probably discover new ways to improve student learning by implementing GAI. Those business schools that do not adopt GAI for teaching and learning may soon lag behind those that do. 

Flipped Classroom Strategy

It is possible to “flip” a class so that most learning and important assessments take place in a physical classroom, in front of the instructor, and with very little use of computers. For example, all exams can be administered face-to-face. Important learning exercises can be done in class as well. Major projects, while carried out outside of the class, should be presented and defended in class as well. 

With this approach, the instructor can offer students help and make sure they are the ones doing the assignments. An instructor can ask individual students or student groups to demonstrate and explain progress on their project work in class every week. The grading weight devoted to attendance and participation can be increased as well, encouraging students to attend face-to-face classes. Students are free to use GAI tools outside of the classroom in any way they see fit (e.g. to prepare for a particular class session), but they should be able to demonstrate their competence face-to-face.

Pros

    • Performing teaching and assessment face-to-face can be quite effective for attaining course learning objectives. 
    • It’s much easier for an instructor to detect cheating when most of the work is performed in front of him or her.

Cons

    • While it’s possible to ask online students to take major exams or defend major term projects on-campus, this strategy is obviously not well-suited for asynchronous online courses. 
    • May not be appropriate for large classes, since this approach requires individual attention to every team and, sometimes, every student. 

Integration Strategy

Instead of excluding GAI from the classroom, an instructor may choose to embrace the technology in a way that actually assists teaching and learning. This is probably one of the most effective yet difficult approaches. GAI is still a new technology. Many educators lack a solid understanding of how to use this technology ethically and productively. 

What’s clear, though, is that this approach may require a radical redesign of each course’s pedagogy. GAI can be used by students to answer basic questions at the “understanding” level of Bloom’s Taxonomy, for example. In fact, students may be instructed to ask ChatGPT or Google Bard questions related to the subject matter of the course and then to read and evaluate the responses as part of an assignment. Thus, the point of the assignment is to interact with GAI, not to provide “correct answers”. 

When it comes to higher-order cognitive skills, an instructor may ask very complex and context-specific questions in a format not supported by major GAI tools in order to decrease the likelihood that GAI will be used to complete these assignments. 

Ideally, these assignments should be in the form where GAI still lacks capability. For example, instead of just asking to analyze a case study, the instructor can ask students to create flowcharts or UML Activity Diagrams based on the case. Alternatively, students can be asked to record videos with their analyses and post them to YouTube. Thus, even if students ask GAI for assistance, the final product will largely be their own work. 

Also, while GAI tools such as ChatGPT and Google Bard are becoming increasingly knowledgeable on many topics and increasingly capable of performing complex cognitive tasks, they often lack knowledge and understanding of very narrow, specific contexts. An instructor can therefore ask complex questions about specific local individuals or organizations that may be known only to students and the instructor. For example, suppose there is a small, local software company in the town where the university is located. The instructor can discuss this company in class and then ask students to come up with strategies that suit this company’s unique local context.

Regardless of which approach is chosen for assessing higher-order cognitive skills, the instructor can quickly run his or her questions and assignments through ChatGPT or Google Bard to see what kind of responses students are likely to get when they ask GAI for help. If GAI can answer these questions with ease, then additional context or complexity needs to be added to the question. Alternatively, the format of the assignment can be changed (e.g. from an open-ended response to a visual diagram). 

Students can rephrase these specific questions as generic ones and submit them to ChatGPT. But generating a quality response will still require analyzing and accommodating the local context (and this is where most of the learning will occur) or putting the responses in a format that still entails some learning. The instructor can deduct points if the local context is not properly accounted for or the format instructions are not followed. A grading rubric can be created that evaluates students’ work based on the extent to which a solution is contextualized for a specific company or individual and the extent to which all the directions were followed. If necessary, an oral defense can be scheduled so that students can demonstrate their mastery of the material. 

Pros

    • Since GAI is likely to become a permanent fixture in business, this approach will make students better prepared for the era of GAI.
    • With this approach, the instructor can automate some of the basic tutoring tasks and focus on developing higher order cognitive skills among students via complex, contextualized, and innovative assessments. 

Cons

    • This approach can be time-consuming, since it requires quite a bit of thinking and curriculum revision.
    • Many educators may lack the time, expertise, or motivation to integrate GAI into their curriculum in a way that is ethical and conducive to student learning. 

In conclusion, Generative AI is very likely to become a permanent fixture in business education. Individual educators and business schools will have no choice but to adapt to this new, disruptive technology and find ways to accommodate it in an ethical and productive fashion. The list of strategies for accommodating GAI in the classroom provided here is neither perfect nor exhaustive. What’s important is that every educator and business school should have a strategy in relation to GAI, or they will quickly find themselves at a disadvantage. Having a strategy in relation to GAI is better than having no strategy at all. 

Student academic integrity and the admission process

By Dr. Vlad Krotov

Students must adhere to the highest academic integrity standards in order to receive a quality business education. All major accreditation agencies for business schools place a great deal of emphasis on ethics. For example, ethics and integrity is one of the main guiding principles of AACSB – the most prestigious accreditation for business schools worldwide. 

Despite that, some business schools treat ethics and academic integrity among students as an afterthought. Most business schools have student conduct rules and academic integrity policies, but these important instruments are not put into action until it’s too late. Academic integrity issues are often dealt with only when they arise (usually via a nerve-wracking investigation process and subsequent severe punishment). This punitive “inspection approach” is time-consuming, nerve-wracking, and often unfair to students who are not fully aware of all the rules and nuances of academic integrity. 

Instead of relying on this passive approach to enforcing academic integrity, business schools should focus on proactively preventing academic integrity issues from arising. This proactive “prevention approach” should start with the admission process. Quite often, business schools admit students who are not fully aware of ethical standards for academic work, are not prepared academically (and are thus likely to “cut corners”), or simply lack the personal integrity to make ethically sound judgements consistently. A business school should do its best to inform students about its standards of academic integrity and reject those applicants who are unlikely to adhere to these standards. 

A business school can incorporate the following approaches into its admission process to minimize the chances of academic integrity incidents among students:

Clearly communicate expectations during the admission process.  In application materials, websites, and admission communications, clearly articulate the school’s commitment to academic integrity. Include a code of ethics or integrity statement that applicants must acknowledge and agree to.

Conduct admission interviews. Interview all applicants, even if their application materials look perfect. Don’t over-rely on transcripts and standardized test scores to assess a candidate’s technical competency. Ask technical questions related to the courses the candidate has taken to make sure that the candidate’s GPA and standardized test scores accurately reflect his or her abilities. Ask ethical dilemma questions during interviews to gauge an applicant’s ethical decision-making skills.

Require integrity essays or statements. Ask applicants to write essays or statements about their understanding of academic integrity and their commitment to upholding it. Use these essays as a basis for evaluating an applicant’s character and values.

Check references. Contact references provided by applicants to inquire about their character and adherence to ethical standards. This can provide valuable insights into an applicant’s integrity. If the person who wrote a reference cannot say anything specific about the applicant, this is a big “red flag”. It could be that other application materials cannot be trusted either. 

Review personal statements. Scrutinize personal statements for any signs of academic misconduct or unethical behavior. Ask questions about personal statements during interviews. A personal statement that is plagiarized, inaccurate, or written by someone else should warrant rejection of the application. 

Orientation and training. Offer orientation sessions on academic integrity policies and procedures as part of the onboarding process for admitted students. Include training on proper citation, avoiding plagiarism, and ethical decision-making in an academic context.

Faculty involvement. Involve faculty members in the admission process, especially in interviews and reviewing essays or statements. Faculty can provide valuable input on an applicant’s potential for ethical behavior and spot “early warning signs” of a potentially problematic candidate.

By incorporating these practices into the admission process, business schools can set clear expectations for ethical behavior among students from the outset and send a strong message that academic integrity is a core value of the business school. Moreover, candidates who are likely to commit academic integrity violations due to lack of preparedness or poor personal choices can be eliminated from the program. This proactive and preventive approach helps create a culture of integrity that fosters quality of education and benefits all the stakeholders.