Initial Guidance for Evaluating the Use of AI in Scholarship and Creativity
In response to community needs, the CCCC and MLA's AI and Writing Task Force has participated in conversations about the role of AI in writing, particularly in the scholarship and creative activities undertaken by faculty. These might include grant applications and proposals, research articles, conference presentations, creative works, and other kinds of professional writing.
As a first effort, the task force is sharing a set of preliminary guidelines and inviting community feedback. The aim is to offer provisional guidance for evaluating the use of AI in scholarship and creativity, including basic standards for the ethical use of these technologies. We have drafted these with two audiences in mind: scholars who are preparing materials subject to peer review by members of the scholarly community, and reviewers who are looking for guidance on how to approach submissions that have used AI tools as part of the process.
This work is necessarily evolving. Our goal is to use the language provided here as the foundation for an upcoming working paper that can offer a fuller treatment of this topic, and we welcome feedback from readers about the issues, questions, and concerns you are grappling with in relation to it, and that might productively move our work forward.
4C/MLA Task Force on AI and Writing
As AI systems are implemented in writing technologies and potentially become ubiquitous, their use will become increasingly integrated into the writing and research process and harder to detect or disclose. Similar technologies, such as spell and grammar checkers, autocorrect, autocomplete, and predictive text generation, have already been integrated into word processors and text editors, and their use has become so naturalized as to be considered non-substantive and therefore unnecessary to disclose. AI integration into scholarly writing, however, can be more substantive. It is therefore necessary to offer some guidelines for assessing scholarly and creative writing that uses AI in the writing and research process.
A creative or scholarly work that makes use of AI in purposeful, responsible ways has the following characteristics:
- Transparency – The author discloses the use of AI in their work, describing the specific way(s) in which AI was used and citing in their documentation the system(s) used, the dates of use, and, where relevant, how they were used (for example, the prompts employed).
- Accuracy – Because AI systems are prone to inaccuracy, fabrication, bias, and attribution problems, the author is responsible for the accuracy of the information and citations in the work. They revise or delete any fabrication or “hallucination” generated by an AI system before submitting the work for publication. They also mitigate any bias that may be introduced by the AI system.
- Source attribution – The author does due diligence to check for possible plagiarism in the AI output and to check for ideas that should be attributed to particular scholars.
- Responsibility – The author maintains responsibility for the published writing and the ideas circulated within that writing. If the resulting work is offensive, inaccurate, or otherwise problematic, the author is responsible for reviewing and addressing these concerns as part of their writing process. AI text generators are not (and cannot be) responsible for the writer’s revision process or for preventing problematic writing from occurring.
- Originality – The writing advances the author’s ideas and makes a contribution to the field. Even when AI is used for ideation, it is up to the author to recognize when such ideas make legitimate contributions to their argument or field.
- Quality – When submitting AI-assisted writing for publication, the author is advancing work that strives to meet the highest quality standards in the field.
It is important that, when evaluating work that has used AI in its writing process, reviewers assess the work on its own merits, regardless of what they may know about the use of AI. Prejudice and backlash against authors who use these technologies are unacceptable, especially when those authors follow the guidelines described above, largely because such reactions would discourage transparency about the use of AI in the writing process.
We invite you to use the “comment” function below on the blog to share your initial thoughts on this draft. We also provide links to initial guidance issued by a variety of agencies and universities, as consideration for how MLA and CCCC might move forward with guidance that reflects the values and principles of our related fields.
- Department of Defense – DOD Releases AI Adoption Strategy
- Department of Energy – National Labs AI for Science, Energy, and Security Report
- Department of Education – Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations
- Harvard University – Initial guidelines for using ChatGPT and other generative AI tools at Harvard
- Higher Learning Commission – Trend Alert: Artificial Intelligence Tools (July 2023)
- National Institutes of Health
- National Science Foundation – merit review and proposal preparation process
- Science – Change to policy on the use of generative AI and large language models