Altmetrics

By Dr. Vlad Krotov

What Are Altmetrics?

Altmetrics, short for “alternative metrics,” are non-traditional measures that assess the reach and influence of academic research by tracking the online attention and engagement the research generates. Unlike traditional scholarly metrics such as citation counts, the h-index, and journal impact factor, altmetrics capture the broader, real-time impact of research across various platforms, such as social media (e.g., Twitter, Facebook, LinkedIn), news outlets, blogs, policy documents, online repositories (e.g., GitHub, Figshare), and academic platforms (e.g., Mendeley, ResearchGate). 

Altmetrics provide a broader and more diverse perspective on the impact of research, particularly its societal, professional, and educational relevance. Moreover, altmetrics can help a business school align its research strategy with its mission and AACSB Standard 8, which deals with the impact produced by a business school’s portfolio of intellectual contributions. 

How Altmetrics Measure Research Impact

To measure research engagement and impact online, altmetrics draw on indicators such as the following:

    • Mentions in Social Media: Measuring how often a study is shared or discussed on platforms like Twitter or Reddit.
    • Policy Citations: Tracking references in government and organizational policy documents.
    • Media Coverage: Counting mentions in mainstream and specialized news outlets.
    • Public Usage: Analyzing usage and engagement in non-academic contexts, such as clinical practice guidelines, teaching resources, or public discussions.
    • Online Accessibility: Assessing the frequency with which research outputs are viewed, downloaded, or interacted with on various academic and non-academic platforms.

Altmetrics tools like Altmetric.com and PlumX aggregate and visualize these data to help researchers and institutions understand their research’s digital footprint and societal reach.
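As a rough illustration of how such aggregation works, the sketch below maps the raw per-source counts a tool might return for a single paper onto the metric categories listed above. The field names are modeled on those used by Altmetric.com’s public details API, but the numbers shown are hypothetical sample data, not a real API response.

```python
# Hypothetical sample of per-source mention counts for one paper.
# Field names mirror the style of Altmetric.com's details API
# (api.altmetric.com/v1/doi/<doi>); values are invented for illustration.
SAMPLE_RESPONSE = {
    "title": "A Hypothetical Study",
    "cited_by_tweeters_count": 42,   # social media mentions
    "cited_by_msm_count": 3,         # mainstream media coverage
    "cited_by_policies_count": 1,    # policy-document citations
    "cited_by_feeds_count": 5,       # blog mentions
    "readers_count": 118,            # reader/library saves (e.g., Mendeley)
}

# Map raw fields to the metric categories discussed above.
CATEGORY_FIELDS = {
    "Social media mentions": "cited_by_tweeters_count",
    "Media coverage": "cited_by_msm_count",
    "Policy citations": "cited_by_policies_count",
    "Blog mentions": "cited_by_feeds_count",
    "Reader/library saves": "readers_count",
}

def summarize(response: dict) -> dict:
    """Collapse a raw per-source response into a per-category summary."""
    return {label: response.get(field, 0)
            for label, field in CATEGORY_FIELDS.items()}

summary = summarize(SAMPLE_RESPONSE)
for label, count in summary.items():
    print(f"{label}: {count}")
print(f"Total tracked engagements: {sum(summary.values())}")
```

A real integration would fetch the response over HTTP for a given DOI and visualize the summary, but the category-mapping step shown here is the core of what aggregation dashboards do.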

Relation to AACSB Standard 8 – Impact of Scholarship

AACSB Standard 8 emphasizes that the impact of scholarship is a key criterion for assessing the academic excellence of business schools. The standard requires schools to demonstrate that their faculty’s research and intellectual contributions are relevant, impactful, and aligned with the school’s mission. Altmetrics align with AACSB Standard 8 in several ways:

    • Broadening Impact Assessment: Altmetrics capture the societal and practical impact of research, showcasing its value beyond traditional academic measures like citations. This broader scope supports AACSB’s emphasis on demonstrating tangible benefits to businesses, communities, and broader society.
    • Real-Time Feedback: Unlike traditional metrics that take years to materialize, altmetrics can provide real-time data on how research is received, discussed, and applied. This helps schools quickly assess the relevance and effectiveness of their scholarly output.
    • Demonstrating Relevance: Altmetrics data can highlight how faculty research aligns with industry needs, public policy, or community issues, reinforcing AACSB’s focus on relevance to practice and societal engagement.
    • Strategic Insights for Schools: Schools can use altmetrics to align research strategies with their mission, identifying areas where faculty scholarship has (or should have) significant societal or economic impact.
    • Showcasing Stakeholder Engagement: By reflecting public, media, and policy engagement, altmetrics demonstrate how research contributes to broader conversations and decision-making processes, a key aspect of scholarship impact under AACSB standards.

Conclusion

Altmetrics provide a valuable tool for demonstrating and measuring the broader impact of scholarship in ways that align with AACSB Standard 8. By showcasing the societal, policy, and professional influence of research, altmetrics help institutions fulfill the AACSB’s requirement that accredited business schools produce scholarship that matters to the wide range of stakeholders they serve. 

Ethical and Legal Risks of Using Generative AI in Academic Research

By Dr. Vlad Krotov

Academic research has been transformed by the advent of Generative Artificial Intelligence (GAI). GAI continues to expand the limits of what is possible for scholars, who draw on its capabilities to enhance their intellectual work. 

Using GAI in academic research has its ethical and legal risks, however. There is still a large gray area when it comes to the legality and ethics of using GAI in academic research, which researchers, legal experts, and the general public need to clarify. To navigate these uncharted waters, the academic community must tread cautiously.

This article examines the ethical and legal risks associated with the use of GAI in academic research. Questions surrounding authorship, plagiarism, bias, transparency, and value of academic research dominate the ethical landscape. Data privacy, intellectual property rights, and copyright law are pressing legal concerns. This article explores these ethical and legal issues in more detail. 

This article is not intended to provide legal advice or to serve as a final judgment regarding what is illegal or unethical regarding GAI use. A licensed legal professional should be consulted if you need legal advice on GAI. This article merely aims to raise awareness about the legal and ethical risks associated with academics using GAI. 

Copyright

Some GAI tools train their models on copyrighted materials. Using GAI to create your text may therefore mean using copyrighted content without giving the original sources proper attribution. In addition, if a substantial portion of your text was written by GAI, that text is not your own original, creative work, and you may not be able to claim copyright over it. The situation becomes even more complicated when you submit your article to a journal. Most journals require authors to transfer copyright upon publication; if GAI use means you never held complete ownership of the text, you cannot transfer a copyright you do not possess. Additionally, some of the ideas in your text may have been borrowed from another author. Because of these legal implications, some journals and conference proceedings have decided not to review articles containing AI-generated text. 

Data Privacy

Research shows that Internet users trust search engines like Google and Bing with their most intimate thoughts and intentions (for example, when researching a disease or looking up information about someone they know). Similarly, academics may submit confidential or copyrighted data to GAI tools as part of their prompts. GAI tools generally outline how they use this data in their privacy policies, often stating that user information is stored for a certain period of time. If that is the case, then by using GAI tools you may be handing confidential or copyrighted data to a third party in an unauthorized manner.

Plagiarism

Some GAI tools rely on copyrighted text to train their models, and this problem is exacerbated by the fact that these tools are not very meticulous about citing their sources. GAI obtains most of its “knowledge” from various Web sources. When you use text generated by GAI in your own work, you may be using someone’s data or ideas without giving them proper attribution. In academia, this is considered a weak form of plagiarism. 

Authorship

Whether you are the author of your work can be questioned if substantial portions of it were written by GAI. Authorship attribution tools can compare your GAI-assisted writing samples with your previous work. If it can be proven that you did not write much of your dissertation, or of the academic articles in your tenure portfolio, your doctoral degree or tenure can be revoked. 

Quality

A number of GAI literature review tools are meticulous about citing sources. However, it is unclear how these sources are selected when generating text. For example, when “writing” a literature review, GAI tools often omit the seminal papers in a field; they probably rely on whatever papers happen to be available. Sometimes GAI produces misleading or incorrect text (e.g., citing papers that do not exist or providing incorrect factual information). A human journal reviewer who knows your topic can easily detect these issues, and such quality problems can damage your reputation as an expert in your field. 

Bias and Discrimination

Humans are still superior to AI tools when it comes to emotional intelligence, which makes them more cautious and responsible in handling sensitive topics. Some GAI tools may be incapable of addressing such topics appropriately. GAI tools do have built-in mechanisms intended to handle sensitive topics (for example, prefacing a response with a lengthy disclaimer that the question is highly controversial and may attract a range of strong opinions). In spite of this, GAI tools are not sufficiently intelligent to fully understand why their responses may upset some people. As a result, “insensitive” text containing implicit or explicit discrimination or bias can find its way into your papers and books. 

Academic Goodwill

In the era of GAI, school teachers and college instructors are increasingly edgy and suspicious of students’ work. GAI can generate an essay or a detailed case study in a few seconds, and AI-detection tools cannot detect such text once it has been paraphrased and edited. The academic community will also become less trusting when evaluating someone’s work. When reading an article, reviewers and editors will keep wondering: “Did this person write this text?” and “Am I reading creative, original ideas, or a useless article generated by AI for the purpose of getting another ‘hit’ required for tenure or promotion?” Contrary to what many people think, AI text detection tools are fairly reliable and valid. Despite this, these tools can still lead to false accusations against authors based on text analysis. All these issues are likely to strain relationships among academics, especially when a researcher is not transparent about his or her use of GAI.

Reputation of Academic Research

GAI can further damage the reputation of academic research among the public. In many fields, there is already criticism that academic research is “useless.” With GAI, academic papers that merely reshuffle existing ideas and offer nothing new are becoming increasingly easy to generate. The most popular academic journals, already “flooded” with submissions, are likely to receive even more in the future. This will strain their article processing capabilities and make it harder for researchers to find “gems” in the ocean of submissions. Moreover, AI-based referencing tools make it easy to generate long lists of references and citations – something some academics view as “citation spam.” This further undermines the reputation of academic research.

Conclusion

To mitigate these ethical and legal risks, researchers must be vigilant in upholding the ethical standards of academic research, complying with existing legal frameworks related to privacy and copyright, being transparent about their use of GAI, and critically evaluating the outputs of GAI models. Open dialogue among members of the academic community, together with clear guidelines, can help academics navigate the complex landscape of GAI-powered research while preserving the integrity of, and respect for, scholarly work.