An Introduction to AI Policies for Everyone Starting From Below Level Zero
In April 2024, the MLA-CCCC Joint Task Force on AI and Writing published Working Paper 2, Generative AI and Policy Development: Guidance from the MLA-CCCC Task Force. The publication coincided nicely with the task force’s presentation on the paper at the Conference on College Composition and Communication Annual Convention in Spokane, Washington.
We had a packed room, so packed that we had to ask the conference organizers to move us to a larger one. Even then there was a lot of standing and floor-sitting. After we provided an overview of WP2, a lively Q&A gave us a sense of how colleagues were thinking about and grappling with GenAI policy at their institutions a year into the “GenAI Age.” Faculty were engaged and striving to make informed decisions (no heads in the sand at this presentation), but many were feeling overwhelmed. One memorable comment came from an esteemed and experienced colleague standing in the back: “All of this makes sense, and I appreciate the guidance you’re providing.” We could all sense the “but” coming. “But I feel like some of us aren’t at the level we need to be to make the best use of this [working paper]. What’s the level below zero? Some of us are there. What advice can you give us?”
In this blog post we summarize Working Paper 2’s key sections and then explain the underlying issues that animate the paper. The final section serves as an introduction to AI policies for faculty and administrators working below level zero who need to know “the why” behind our suggestions about AI and academic integrity policies in Working Paper 2. We also recognize that, unlike Working Paper 1, this second publication carries heavier content, which reflects how policy development requires careful thought and ongoing revision to prevent harm to students and to the mission of higher education.
Summary of Working Paper 2
Working Paper 2 describes important considerations for setting policies for GAI use in instructional settings across four sections.
Section 1: Adopting Tiered Policy Language
In Section One, we advocate for consistent policies on appropriate and inappropriate uses of GenAI across higher education’s hierarchy. University administrators, such as chancellors and deans, establish broad guidelines for GenAI use that departments must follow as they create policies for their classrooms. At the same time, individual instructors maintain autonomy by setting AI policies in their courses according to disciplinary knowledge and assignment types, in alignment with guidelines from the department and the administration. Students need a consistent message and instructional philosophy about AI that tells them which literacy practices they should demonstrate.
Section 2: Principles and Process Considerations for Implementing GAI Policies
In Section Two, we outline three essential principles that can serve as torchlights to assist institutional-level, program- or department-level, and individual-class-level approaches to developing a tiered policy language. The three principles include the following:
- Policies must keep academic integrity, learning outcomes, and the teacher-student relationship at their core.
- GAI policies should reduce harm (to students, to academic integrity, and to the educational mission) while keeping educational development at the center. Policies should support the development of critical AI literacy rather than resort to blanket restrictions on access.
- Tools for detection and authorship verification in GAI use should be used with caution and discernment or not at all.
While Section One argues for a collaborative approach to policy development, Section Two explains how to do this work.
Section 3: Policies for Faculty Use of GAI
Section Three shifts the focus from student use of generative AI to faculty use in research and teaching. This section also exemplifies a point made in Section One: students need consistent policies on GAI use, and faculty must model those policies themselves. Faculty, just like their students, must critically evaluate how their use of GAI affects their students’ growth and learning. We include guidance on policies around automated feedback and the use of GenAI tools to develop course materials.
Section 4: Considering GAI’s Impact on Key Policy Areas
Section Four outlines how GAI policies will impact key areas of higher education and the students within it, including multilingual students, conceptions of literacy, the writing process, surveillance capitalism, intellectual property rights, and what counts as “good” writing. This section further highlights that every policy decision signals an ideology about each of these areas.
Underlying Issues That Animate the Need for Carefully Aligned Policies
Working Paper 2 addresses multiple underlying issues that could arise from uncritical policy development.
- Soon after ChatGPT’s release, many educators eagerly grasped for so-called AI detectors from companies like Turnitin, Grammarly, and OpenAI (which has since shut down its AI detection tool). Studies show that AI detectors sometimes wrongly label human-crafted writing as AI-generated, especially writing by multilingual students. In one widely reported case, a student was placed on academic probation for a year after an AI detector flagged punctuation marks that Grammarly had corrected in the student’s paper.
- Some responses to inappropriate use of AI are ill-advised, such as refusing to allow, acknowledge, or teach about AI and assigning only handwritten work or in-class writing in blue books. We agree with Jonathan Alexander that students have several rights in education, especially “the right to be exposed to and learn about various tools, digital and otherwise, that enable writing and communication.” Students must develop critical literacies with and about AI platforms. While banning AI and resorting exclusively to handwritten assignments and blue books may assuage educators’ anxieties and fears about integrity, a refusal to engage with AI may prevent students from developing critical AI literacies. This approach leaves them unprepared for an AI-infused world.
- On the other end of the spectrum, policy should provide students clear parameters and principles for ethical GAI use. An “anything goes” approach could mean students miss out on essential learning experiences; it may be just as harmful as trying to shut down all use through policing.
- Students who do not get clear and principle-driven guidance on GAI use cannot develop the critical AI literacy skills they’ll need in school and beyond. While policy does not need to be identical from class to class or even assignment to assignment, students who aren’t provided definitions and models of ethical GAI use may develop uncritical, harmful habits on their own in response to work and academic pressures.
We welcome comments and questions about AI policy development below!