MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations

This working paper discusses the risks and benefits of generative AI for teachers and students in writing, literature, and language programs and makes principle-driven recommendations for how educators, administrators, and policy makers can work together to develop ethical, mission-driven policies and support broad development of critical AI literacy.

We invite you to comment on the working paper to help inform the task force’s ongoing activities. We intend this to be the first in a series of working papers, developed as the task force receives feedback and identifies priorities in the coming year.

46 Responses

  1. gb
    gb · July 10, 2023 at 17:23:03 · →

    I am disappointed that the task force has not endeavored to consider the POLITICAL (as opposed to “ethical”) dimensions of AI’s implications within the teaching and study of language, literature, and culture. Will this concern be foregrounded in a future working paper?

    1. Holly J Hassel
      Holly J Hassel · July 12, 2023 at 15:14:29 · →

      Thank you for this suggestion–as the TF manages competing priorities, this is certainly in the mix. Given the rapidly shifting landscape of AI/LLMs, we started by providing an introduction to the issues because so many educators are still trying to catch up with what is happening, even as the landscape changes ever faster. Our intention was to start with a resource that will help teachers and institutions a) understand the issues and the technology, b) think carefully through their own classroom practices and how they may find it appropriate to respond to LLMs as they evolve, and c) identify key issues for the policy development that is taking place (or starting to take place) at their home institutions.

  2. Dr Paul McAfee
    Dr Paul McAfee · July 11, 2023 at 11:57:03 · →

    Thank you for producing and sharing this white paper.

  3. Ann M Bomberger
    Ann M Bomberger · July 11, 2023 at 16:56:53 · →

    Thank you for publishing such a thoughtful work.

    Perhaps this is already in the works: might this paper be advertised on the front pages of the MLA and CCCC websites, as well as at the webinar on July 26th?
    Best wishes,
    Ann

    1. Holly J Hassel
      Holly J Hassel · July 12, 2023 at 15:08:08 · →

      I believe it has been circulated by both organizations and is listed under webinars on the MLA site.

  4. Diane Jefferson
    Diane Jefferson · July 11, 2023 at 19:18:02 · →

    Thank you! This paper provides some specific considerations for institutions and Writing departments as they work towards developing their own responses to AI.

    1. Holly J Hassel
      Holly J Hassel · July 12, 2023 at 15:08:35 · →

      Thanks! We hope it will be useful as a starting point for framing conversations at home institutions.

  5. Kim Brian Lovejoy
    Kim Brian Lovejoy · July 11, 2023 at 20:08:44 · →

    Thank you for this working draft, an important beginning to the many questions it raises for teachers, students, administrators, and others.

  6. Mark Minster
    Mark Minster · July 11, 2023 at 21:55:07 · →

    I have three thoughts. But first, thanks to the task force for reaching into a hornet’s nest.

    (1) I’m trying to reconcile three statements in this paper: (a) the factual statement that LLM technologies “mimic… writing” and “do not… ‘think’” (6); (b) the unsupported claim that these technologies “augment [note: not “may augment,” in a document full of “may” statements] the drafting and revising processes of writers for a variety of purposes” (8); and (c) the recommendation that all in the profession(s) should “center… the inherent value that writing has as a mode of learning, exploration, and literacy development” (10). I’m not certain writing has “inherent value,” though I cherish it so much I’m prepared to agree. But if it does, surely that value has more to do with thinking than with the mimicry of thought?

    1. Mark Minster
      Mark Minster · July 11, 2023 at 21:56:05 · →

      (2) There’s the recommendation that we in the profession “develop policy language around AI by promoting an ethic of transparency around any use of AI text that builds on our teaching about source citation. For example, most AI generators produce a transcript of the interaction with the user, which can be used as documentation” (10). I’m trying to reconcile how this “ethic of transparency” fits with the paper’s factual statement that an LLM “cannot reliably report on which sources in its training data contributed to any given output” (6). Do we mean transparency about the lack of transparency in sources that are actually the erasure of sources?

      1. Mark Minster
        Mark Minster · July 11, 2023 at 21:59:43 · →

        (3) Most importantly, I share gb’s hope (above) that future working papers will take up more of the political and economic implications of generative AI for the teaching of writing. This first working paper is clearly trying to present cons and pros here in an even-handed way, but I worry that that’s a dangerous game. It’s also not played as well as it might be (e.g., are “risks” and “benefits” equal opposites?). For example, the Risks section warns reasonably against possible “increased linguistic injustice” and “unequal access… which may replicate societal inequities,” risks that “could hurt marginalized groups disproportionately” (7). The Benefits section drops its modal verbs in a passage that sounds much more like cheerleading: generative AI “affords enormous potential benefits. It has the promise to democratize writing, allowing almost anyone… to participate in a wide range of discourse communities” (8). (What does participation in a discourse community MEAN? Mimicry?) Even when the Benefits section does use “may,” it sometimes sounds naïve: “writers who come from diverse and various linguistic and educational backgrounds may benefit… by receiving access to ‘the language of power’” (10). Even if this is true, who grants this access? How? To what ends?

        Not that I think this task force needs to have solved these questions. Just that the report (to me, I admit) demonstrates how unlevel this playing field is.

        1. Holly J Hassel
          Holly J Hassel · July 12, 2023 at 15:11:32 · →

          Thanks for your engagement with the working paper. We aimed to represent that so much about this topic right now is preliminary and shifting. We plan to gather feedback at the webinar later this month about the pressing needs facing educators, knowing that LLM technologies are changing so quickly that responses rapidly become outdated. The task force is emphasizing a careful and thoughtful approach to the issues underpinning LLMs so that the organizations can provide guidance that is big-picture and long-term (or that offers heuristics) rather than prescriptive.

      2. Holly J Hassel
        Holly J Hassel · July 12, 2023 at 15:09:19 · →

        Here we are referring to a process embedded within composing tools that would make transparent whether AI has been used to generate the text.

    2. Maisha
      Maisha · July 13, 2023 at 12:05:49 · →

      I agree with Mark’s points.

      I, too, am grateful the conversation has begun in earnest, but there does seem to be a “tone” to the paper that may strike some as unproductive.

  7. Kellard Townsend
    Kellard Townsend · July 12, 2023 at 18:44:16 · →

    Thanks to the committee for their initial framing of the problem set.

    As a classroom teacher who has monitored AP Lit Facebook posts over the last two years, as well as postings from US Army-related forums, I find that for high school teachers, items #1 (Supports) and #3 (Best Practices) on page 10 of the white paper are the ones we need most.

    As every district will approach exploration and training on AI in the classroom in different ways and at different paces, publishing the Supports and Best Practices material first, and soon (we start 1 AUG this year), would be the most beneficial to the largest audience.

    Again, thank you for all your work.

    Regards,

    Kellard Townsend

    1. Kellard Townsend
      Kellard Townsend · July 12, 2023 at 18:54:03 · →

      Task Force –

      I just scanned through the Quick Guide on your site and saw a number of linked articles about Supports & Best Practices. This is a good start!

      Might I suggest a table with the questions across the top row and links in each column, to allow a viewer to see more quickly the categories you have created and are researching? New rows could then be added so that a running list of resources is available.

      Again, thanks for all your work! Much appreciated.

      1. Sarah Z Johnson
        Sarah Z Johnson · July 13, 2023 at 13:03:58 · →

        Thank you! This is a helpful suggestion. We’ll be reviewing and updating the Quick Start Guide later this summer and we’ll consider how to make the guide formatting more reader-friendly.

  8. Beth Eyres
    Beth Eyres · July 12, 2023 at 19:54:29 · →

    Thanks to the task force for their work so far. Our department (GCC in AZ, English Dept) is planning to use it as a jumping off point for some discussions this fall.

  9. Megan C. Brown
    Megan C. Brown · July 13, 2023 at 18:55:52 · →

    I really appreciate the task force’s report, particularly the “Principles and Recommendations” section. At my institution and at others where friends of mine work, many faculty and administrators seem flustered: “How can we keep up with this? What can we do? Is the sky falling?” The report’s recommendations will be helpful in reassuring folks that thoughtful, measured responses to generative AI are (a) possible and (b) likely to improve our teaching practices in the long run.

  10. Matthew Kirschenbaum (task force member)
    Matthew Kirschenbaum (task force member) · July 15, 2023 at 13:59:06 · →

    Wrt gb’s comment about the omission of a political dimension, I think I have a sense of what they have in mind—the report, for example, discusses only obliquely the ties between these models and various corporate and commercial interests—but I would be interested to hear more particulars, keeping in mind too that part of the remit for this committee is to expend energy and resources on what its two parent orgs are uniquely positioned to contribute to the critical conversation. Thank you.

  11. Ted Underwood
    Ted Underwood · July 15, 2023 at 15:05:11 · →

    I want to thank the task force for tackling this difficult problem and providing recommendations that are generally wise and balanced. This report is a credit to the MLA.

    A couple of minor issues in your description of LLMs. You say that they spell words letter by letter, predicting the next “character.” I think this mostly isn’t true, except for rare proper names. Mostly they predict words and word-pieces (character sequences shorter than a word, but usually not just one character).
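
    For example, here’s a minimal sketch using OpenAI’s open-source tiktoken library (the cl100k_base vocabulary is an assumption on my part; different models ship different vocabularies, but the behavior is representative):

    ```python
    # Show the word-pieces a GPT-style tokenizer actually predicts over.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    token_ids = enc.encode("Tokenization splits text into words and word-pieces.")

    # Decode each token id individually to see the pieces.
    print([enc.decode([tid]) for tid in token_ids])
    # Illustrative output: common words come through whole (" splits",
    # " text", " into"), while rarer words break into multi-character
    # pieces -- almost never into single characters.
    ```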

    At the end of the section you also say flatly that LLMs don’t “think” as we understand thinking. I don’t believe there’s consensus on that. By Alan Turing’s pragmatic definition of thinking, we are at least getting into a gray area. The flat assertion that (these) machines do not think forecloses many interesting philosophical questions in order to achieve—what? It will make some nervous readers feel better, but I’m not sure that’s worth the loss.

    But these are minor issues. It’s a good report.

  12. Emma Stamm
    Emma Stamm · July 15, 2023 at 17:04:36 · →

    I’m not sure if this is what Ted Underwood meant, but for what it’s worth, Alan Turing did not believe that machines were capable of thinking. This is a common misconception, just as the Turing Test is commonly mistaken for a test of machine intelligence.

    To quote Turing’s seminal 1950 essay “Computing Machinery and Intelligence”: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

    On another (perhaps more important) note, this working paper has a strong streak of techno-optimism that isn’t adequately tempered by scholarly skepticism. To state that “LLMs can detect layouts, summarize text, extract metadata labels from unstructured text, and group similar text together to enable search” is to present particular interpretive approaches as settled. Within and across disciplines, especially in the humanities, there’s debate about what makes text “similar” and whether inductive analysis (e.g., to “extract” categorical/metadata labels) is a broadly useful hermeneutic approach. Of course, it’s perfectly acceptable to take these approaches. But we have to remember that tech corporations have a vested interest in the delegitimization of other ways of thinking.
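
    To make that concrete, here is a minimal sketch of what “grouping similar text” typically means operationally (scikit-learn’s TF-IDF is used purely for illustration; LLM-based search substitutes learned embeddings, but the same point applies): “similarity” is one specific, contestable mathematical choice, not a neutral fact about the texts.

    ```python
    # "Similar" here means cosine similarity over TF-IDF vectors --
    # one particular operationalization of likeness among many.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "The whale swam through the dark ocean.",
        "A leviathan moved beneath the midnight sea.",
        "Quarterly earnings exceeded analyst expectations.",
    ]

    vectors = TfidfVectorizer().fit_transform(docs)
    print(cosine_similarity(vectors))
    # A human reader sees the first two sentences as near-paraphrases,
    # yet they share almost no vocabulary, so this metric scores them
    # as barely more similar than either is to the earnings sentence.
    ```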

    Last but not least, I want to point out Janelle Shane’s recent commentary on AI-detection tools — of the sort that institutions are increasingly adopting to catch plagiarism via AI. The false positive rate of these tools is alarmingly high, especially on text written in a writer’s non-native language. This has clear ethical and political implications:

    https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/

  13. The Air Whisperer
    The Air Whisperer · July 16, 2023 at 10:28:00 · →

    But WHY call it ‘AI’ …?

    A significant problem with generative text models is their bogus cultural association with ‘intelligence’, both in the general sense and as a task-centred attribute.

    Supporting the use of ‘AI’ as a label perpetuates that mythology; other labels are available, and almost all of them offer a more ‘realistic’ insight into the way (for example) ‘Text Synthesizers’ work.

    After chess-playing computers began to succeed (the traditional ‘marker’ is Deep Blue v Kasparov in 1997, when the machine won their rematch after the human grandmaster had won the first match the previous year), headlines celebrated the emergence of thinking machines, but the scientific community instead re-evaluated HOW CHESS IS UNDERSTOOD. Rather than machines achieving a task requiring intelligence of a high quality, as previously believed, it became clear that what had been exhibited was the human’s need, in order to play chess well, to deliberately and carefully SUPPRESS intelligence in favour of a more mathematically valid ‘world view’; in other words, a more mechanical operation than ‘thinking’.

    An education system based on text processing (Papers, Reports, Textbooks, Libraries, etc.) could benefit significantly from the emergence of techniques for ‘parsing’ ways in which those resources may be used or misused to convey knowledge effectively or poorly. Right now, MISINFORMATION (perhaps the OPPOSITE of ‘effective education’) is a major concern in several areas of life; continuing to call unintelligent programs ‘AI’ may seem like an insignificant example of the genre, but it’s a habit that may (like the proverbial finger-sized hole in a dam) ‘leverage’ a disproportionate amount of consequent misunderstanding.

    Reports like yours can plug that hole.

    Explicitly rejecting the label ‘AI’ applies an equally ‘leveraged’ POSITIVE influence on how the DIFFERENCE between intelligent and mechanical text-construction may be investigated – a field of study that must surely benefit the future work, and effectiveness, of academics in every discipline.

  14. Matthew Kirschenbaum (task force member)
    Matthew Kirschenbaum (task force member) · July 16, 2023 at 21:21:52 · →

    We called it AI because that is the parlance of the global societal conversation. The opening section distinguishes between generative AI and AGI and bluntly states that these models do not “think,” a claim which itself (as Ted Underwood notes above) is in fact subject to some question.

    There is no perfect vocabulary. We chose to write using the terms that would be recognizable to our readers, glossing and contextualizing them as necessary. Thank you for commenting.

  15. Nick Potkalitsky
    Nick Potkalitsky · July 18, 2023 at 00:21:52 · →

    The language at the end of paragraph three in the “History, Nomenclature, and Key Concepts” section is very difficult to parse. I am not even sure the last sentence is grammatical. What is “they” specifying? In other words, what is “unavoidable”? I press this point because the content here seems crucial to the whole.

  16. Mark Richardson
    Mark Richardson · July 18, 2023 at 11:12:48 · →

    I had ChatGPT edit the opening paragraph of this white paper for clarity. The result: “With the increasing availability of generative artificial intelligence (AI) technologies as writing aids, the Modern Language Association and the Conference on College Composition and Communication, a chartered conference of the National Council of Teachers of English, jointly affirm our shared values as organizations dedicated to serving professional educators. We firmly believe that writing plays a vital role in the learning process, enabling the analysis and synthesis of information, the retention of knowledge, cognitive development, social connection, and active engagement in public life. From the earliest marks etched on clay to the latest word processors equipped with autocorrect, research citation, and other helpful features, writing has always been regarded as a technology and, as such, has remained open to embracing new technological advancements. Nevertheless, we harbor concerns about the potential vulnerability of writing and language learning programs, which are essential components of humanities education and education as a whole.”

  17. Mark Richardson
    Mark Richardson · July 18, 2023 at 16:07:17 · →

    Surely I’m not alone in finding the prose style here inadvertently humorous: “Some of the immediate risks posed by LLMs are to the interactional and human components of teaching, research, and student engagement. Student reading practices can become uncritically automated in ways that do not aid critical writing instruction or faculty assessment approaches. The increased use and circulation of unverified information and the lack of source transparency complicates and undermines the ethics of academic research and trust in the research process. Additionally, although using LLMs to collect and synthesize preexisting information may provide students with models of writing and analysis, such models reproduce biases through the flattening of distinctive linguistic, literary, and analytical approaches.”

  18. Ryan Siemers
    Ryan Siemers · July 19, 2023 at 22:36:37 · →

    Thank you to the members of the task force for producing this working paper. I would caution against the general tendency in writing in this area to tell readers how we should feel about the enormous disruption that LLMs represent. For example, the authors write in their recommendations section, “we urge educators to respond out of a sense of our own strengths rather than operating out of fear.” I resent the disruption caused by LLMs for the reasons that the paper mentions in “risks to teachers,” and indeed I fear the unintended consequences of this technology. In my view, our tech-bro overlords recklessly released LLMs into the world without any consideration for those consequences. To their own financial gain, they created an enormous amount of uncompensated labor for those of us in higher education. I will do this work–I have no choice–but it’s unreasonable to expect me to be happy about it.

  19. Julia Keefer
    Julia Keefer · July 20, 2023 at 15:44:51 · →

    Thank you for your impeccably written, thoughtful overview of the pros and cons of generative AI for writing and literature. In your introduction, you left out how language can be funny, beautiful, sensuous, painful, frustrating, or exhilarating, considerations that can lead to better teaching and learning. For example, students could generate screenplays with AI, write their own stories longhand in a natural setting, read aloud and improvise their scenes, and have a good laugh comparing the two! Students could also use AI in medical humanities to learn more about diseases and disabilities, analyze case studies, and, with pen and paper, write a personal illness narrative with tears and laughter, injecting some blood into the bot. Once students are engaged, rhetorical modes and critical thinking can generate a mastery of knowledge and skills potentially superior to that of previous eras, if there is enough focus and perseverance to follow through. I wrote and published four novels this year without AI but may use it to quickly generate screenplays before Hollywood recycles my most original ideas!

  20. Dr. Beatrice McKinsey
    Dr. Beatrice McKinsey · July 20, 2023 at 18:45:35 · →

    Thank you for addressing the AI issue in teaching writing and literature. I look forward to your webinar.

  21. Stacey Sheriff
    Stacey Sheriff · July 20, 2023 at 19:01:17 · →

    I really appreciate that MLA and CCCCs have come together to weigh in on this topic, and I will share this first report with my colleagues. But I also agree with those above who have pointed out that the Benefits section needs more hedging and less techno-optimism.

    Some of the “benefits to writing instruction” listed are claims offered without explanation, caution, or exemplification, which obscures the gray areas, risks, and intellectual labor inherent in trying to reap such benefits. For example, the first benefit to writing instruction listed states:

    “Students can use LLMs to help stimulate thought and develop drafts that are still the student’s own work and to overcome psychological obstacles to tackling invention and revision. When used in these ways, LLMs have the potential to act as literacy sponsors to emerging academic writers.”

    Does “develop drafts” mean copying the prompt/directions for a writing assignment into a generative tool like ChatGPT or ParagraphAI and asking it to write several hundred words in an instant? Given the many ways these tools also allow (even prompt) users to then quickly modify the result (to, for instance, make the text more or less formal, include an example in paragraph three, make a reference to XYZ text in the introduction, etc.), would most faculty still consider such drafts “the student’s own work”? If not, how much more revision would make it so? Faculty who haven’t yet had much chance to play with these tools also might not realize that these simple moves are easily possible and can be supplemented by Google and TikTok searches for how to do it better.
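
    For anyone who hasn’t had a chance to try this, here is a hypothetical sketch of how little effort it takes (written against the OpenAI chat-completions API as it stood in mid-2023; the model name, prompts, and assignment text are all illustrative assumptions, not a documented workflow):

    ```python
    # Hypothetical sketch: generating and then re-tuning a "draft" in
    # two API calls. Requires openai<1.0, the interface current in 2023.
    import openai

    def chat(prompt):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    draft = chat("Write a 300-word response to this assignment: ...")
    revision = chat("Make this less formal and add an example in "
                    "paragraph three:\n\n" + draft)
    print(revision)
    ```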

    Or, what does it mean for teachers to “integrate LLMs into the writing process” when some of these tools (especially in their commercialized forms) claim to automate a simulated version of the writing process, with features that offer to “adjust the tone,” wording, length, genre, etc., of their outputs?

    Even if instructors are comfortable with all this and view the resulting AI-generated text as something productive to work with, it’s only through teachers’ thoughtful, critical labor, questions, and discussion of rhetorical options that this scenario becomes potentially rich writing education. Of course, there are individual students who will be willing to invest the time in playing with these tools to compare, contrast, and prompt-engineer, but these tools are not designed to be literacy sponsors or to promote critical thinking. Teachers are.

  22. Sandra M. Leonard
    Sandra M. Leonard · July 26, 2023 at 18:34:47 · →

    Comment on the risks section and the need to mention NTT faculty: in addition to lacking training and being expected to carry out labor-intensive policies, more vulnerable faculty (adjuncts, NTT) face risks that depend on administrators’ views of AI and academic honesty. Admin may, for instance, expect faculty to somehow “prevent” students from using LLMs, making the reporting of students for academic-honesty violations a risk to their careers. Similarly, admin may expect profs to “embrace” AI even when doing so may be at odds with the course objectives. I suggest adding a statement on academic freedom in the use of AI, and on the application of academic-honesty policies to AI use under varying classroom policies, particularly for vulnerable faculty members.

  23. Kyle Garton-Gundling
    Kyle Garton-Gundling · July 26, 2023 at 18:49:29 · →

    The working paper advises against “[making] the conditions of writing for class radically different from writing conditions students will encounter in other classes, work environments, and their personal lives” (11).
    But this suggestion seems to be in tension with an earlier point that “generative AI cannot simply be used in colleges and universities as it might be in other organizations for efficiency or other purposes” (4). How should we think about the roles of AI in the classroom compared to the possible roles of AI in workplaces or other contexts?

  24. Sandra M. Leonard
    Sandra M. Leonard · July 26, 2023 at 18:53:06 · →

    I suggest a supplemental statement that more clearly provides guidance/suggestions for what to do when unsanctioned and inappropriate use of LLMs is suspected or outright witnessed.

    You put very reasonable emphasis on avoiding surveillance, due to accuracy concerns as well as the danger of promoting a hostile environment, but this does not mean that AI use won’t be suspected simply from knowing a student’s usual writing patterns. Some students–likely many students–will try to take advantage of the technology because of its ease and simple human nature (e.g., Ariely’s research on the ubiquity of cheating a little bit by all students, particularly if they see peers doing it and if there is little surveillance), even to the detriment of their own learning.

    We ought to have some discussion of what a “teachable moment” looks like with regard to the unsanctioned use of AI, beyond just when to report a student and when not to.

  25. Maureen Cahill
    Maureen Cahill · July 27, 2023 at 09:14:01 · →

    Sandra M. Leonard, I agree. We need to have discussions about the unsanctioned uses of AI and how to use them as teachable moments. Our current teaching practices, policies, and syllabi will need to change immediately. For years, students have focused on grades more than learning, and AI has quickly become the go-to shortcut for my students. I’m glad this discussion has started, and much of this information will inform changes in my teaching and class policies, but I still need some practical ideas on how to handle unsanctioned AI use.

  26. Stefan Bauschard
    Stefan Bauschard · July 28, 2023 at 02:58:09 · →

    Hi, I don’t teach writing but I know a lot about some of these issues and just want to make a few points.

    (1) As someone noted, you can’t say as a matter of fact that the models are incapable of reasoning, and I think it’s fair to say that many AI “experts” now believe they have very basic reasoning abilities (at least GPT-4 does).

    While the models were trained only on next-word prediction, what has been demonstrated is that the better a model gets at next-word prediction, the more it exhibits unexpected “emergent properties.” For example, it codes REALLY WELL, but it was not trained to do that in *any* way, and coding is not necessarily something that follows from next-word prediction. So you see the models start to do things that have some ingredients of intelligence, even if they weren’t expected to.

    Moreover, these models were also trained with RLHF, and some on images and games, not solely with next-word prediction.

    Finally, I’ll point out that this is not the end. “World models” are being developed that many believe will demonstrate much higher levels of intelligence.

    Anyhow, you can’t claim as a fact that the models are not in any way intelligent. That is very hotly debated by the most influential AI scientists in the world. And I don’t think any of them believe that the models won’t eventually be intelligent.

  27. Stefan Bauschard
    Stefan Bauschard · July 28, 2023 at 03:02:58 · →

    (2) I have a variety of thoughts on the ethical issues, but I’ll simply point out here that there is no stopping this technology. Even if using AI were never taught in high school or university, everyone will use it when they go to work. I think the best stance is to use it as ethically as possible and try to shape it. Discriminatory output, for example, can be substantially reduced with prompting that accounts for it.

  28. Stefan Bauschard
    Stefan Bauschard · July 28, 2023 at 03:07:56 · →

    (3) I think there has largely been way too much focus on the current abilities of these technologies. None of these is more than a beta product to test how people use it. These tools are basically little toys compared to what will be available in a couple of years, and to the applications that build on and improve them (conch.ai, for example, writes student papers).

    Generally, think about how you’d change your teaching if

    (a) The tools could write in the students’ own voices;
    (b) Hallucination rates were below, say, 2% (which is actually way lower than human hallucination rates);
    (c) All the tools were available to students at the cost of a few textbooks.

    All of that will happen within two years (maybe a bit more), and we may get a chunk of it by fall 2024.

    When that happens, how will you change your teaching?

  29. Salena Anderson
    Salena Anderson · July 28, 2023 at 04:44:35 · →

    Thank you for this paper and the related webinar _What AI Means for Teaching_. I wanted to lift up Antonio Byrd’s comments on AI as posing “a danger to our democracy” (in the last two minutes of the webinar). As others note, this risk is not yet explored in the present working paper; but its consideration is part of developing critical digital literacy. And I hope to see more discussion of the political dimensions of AI in future working papers.

    If we wish to integrate AI applications in the writing process, we must account for the fact that AI isn’t neutral (as Byrd points out, AI can perpetuate bias and spread misinformation). Without transparency on sources, we are especially vulnerable to the motives of the large corporations and governments that influence the creation and design of LLMs. Even with source transparency, models reflect the biases of their training data; and the inclusion of such bias may even be purposeful. For instance, Senator Chris Coons, Democrat of Delaware, is quoted by the New York Times suggesting the following: “The Chinese are creating A.I. that ‘reinforce the core values of the Chinese Communist Party and the Chinese system [….] And I’m concerned about how we promote A.I. that reinforces and strengthens open markets, open societies and democracy’” (www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html).

    If we endorse technologies with their own political agenda, whether it be a Communist agenda or one that “reinforces and strengthens open markets…,” consideration of this point is essential to building critical digital literacy. While we might say that “talking” to ChatGPT is no different from talking with a peer or writing consultant or from reviewing other sources, in all of these other cases, we are engaging with real people, and we know where the ideas are coming from. Not so with current AI. This can be dangerous if content is coming from models built by corporations and governments with no transparency. We know that social media can create echo chambers reflecting the user’s own ideologies. What about AI? How does its bias impact our idea generation and thoughts? I write about this more in an article I published with _Computers and Composition_ this spring: https://doi.org/10.1016/j.compcom.2023.102778

  30. Deborah Van Damme
    Deborah Van Damme · July 29, 2023 at 01:13:27 · →

    Thank you very much for producing and sharing this informative and helpful information!

  31. Bonnie Youngs
    Bonnie Youngs · August 9, 2023 at 12:15:17 · →

    Unfortunately, to add to the list by Stefan Bauschard, much of this is already (at least partly) here:
    (a) The tools could write in the students’ own voices;
    - creativity settings can be adjusted by students so that the tools write closer to a ‘human voice’
    (b) Hallucination rates were below, say, 2% (which is actually way lower than human hallucination rates);
    - some tools hallucinate more than others (compare Bard, ChatGPT, Llama 2, Bing…)
    (c) All the tools were available to students at the cost of a few textbooks.
    - actually less: ChatGPT currently costs $20 per month

  32. Tracy Rauschkolb
    Tracy Rauschkolb · August 14, 2023 at 00:00:45 · →

    Has MLA developed a format for students to document their use of AI?

    I think this might help with the issue of plagiarism somewhat.

  33. Jody Millward
    Jody Millward · August 16, 2023 at 16:23:43 · →

    I, too, am grateful for this thoughtful presentation and thank the task force for its work. I also appreciate the thoughtfulness of the responses. While I agree that we cannot know the future of AI’s development and potential (only the inevitability of development), it seems to me that its current iterations highlight and complicate issues that have been central to our discipline for decades: the risks of promoting disinformation when sources aren’t transparent, of widening the gap between the haves and the have-nots (the access to, cost, and quality of the various programs—lessons learned during the pandemic), and of imposing models that privilege the culture of power at the expense of the marginalized. I’m troubled by the phrase “language of power,” as that language has a history of bias and exclusion (and has served to rationalize injustice and more)—and is, by inference and prior examples, the language these programs employ. Will the not quite fully formed thoughts of students be shaped, blunted, or skewed by the authoritative “support” AI offers? The notion that it may mimic (create?) voice is troubling, as this suggests teachers have a standardized notion of what constitutes “the voice” of an apprentice writer based on age, place, ethnicity, and culture. For me, these difficulties far outweigh the instructor’s burden of parsing and responding to plagiarism as it has been traditionally defined. Like others, I’d like to see these risks made more explicit, and to see further recommendations for department policy makers and faculty for addressing these very real challenges.

  34. Jeffrey Klausman
    Jeffrey Klausman · August 21, 2023 at 14:17:13 · →

    Thanks, everyone on the drafting committee, for your hard work on this. While I have quibbles with many statements (but only quibbles), my main concern is one of emphasis: by starting with “risks,” I think we play into the fear-based response we first saw when ChatGPT came out.

    Instead, why not start from a position of strength? Comp-rhet is still the most widely represented discipline on most college campuses (based on the number of classes offered). What if this paper highlighted how we will leverage the power of AI to make our students better writers, thinkers, and doers/agents in the world they are in and will soon be facing? You have a wonderful list of potential benefits here (though I might go more boldly and talk about how AI can, and likely will, revolutionize the entire reading-writing-thinking-knowing and teaching game). How about starting with the benefits and acknowledging the risks afterward? (The risks, by the way, may only truly be risks if we allow ourselves and our students to remain stuck in a 20th-century mindset of what literacy means. That discussion is for another venue, though.)

    Okay, thanks, though–it’s easy for me to sit back and make comments like this. I know the hard work necessary to manage all the political forces at play with a membership as large and diffuse as the MLA and CCCC.

  35. Stephen Calatrello
    Stephen Calatrello · August 22, 2023 at 23:09:09 · →

    All I can say, especially after having read through this paper, is that I am grateful to be in the twilight of my career and to have had the opportunity to teach writing and literature when I did.

  36. Theodoric E. Stier
    Theodoric E. Stier · September 11, 2023 at 07:45:39 · →

    A reasonable definition of AI for the purposes of education is “any form of intelligence that a person demonstrates only with the aid of artificial tools.” This covers a lot of conventional sources of cheating in addition to the use of LLMs and knowledge graphs. A student is artificially intelligent if they demonstrate poorer domain intelligence when the artificial tools are removed.

    Now we can discuss the educational value of AI in terms of a question like “Is a student who is artificially intelligent learning what we intend?”
