Restricting ChatGPT Use in Classes with Writing Assignments: Policies, Syllabus Statements, and Recommendations

One of the immediate concerns that emerged with ChatGPT and other Large Language Models relates to consequences for students who are assigned writing tasks. Whether writing is used to assist thinking and learning, to measure student understanding, or to reflect on student learning, AI-generated text can perform reasonably well on some writing tasks assigned to university students. Colleagues at the University of Minnesota Law School demonstrated that ChatGPT-generated responses to exam questions would meet minimum standards for student performance. Although generative AI technologies are rapidly integrating with popular writing platforms (including Google and Microsoft), instructors may wish to exclude their use for submitting academic work in their courses.

Policy Language for Restricting the Use of ChatGPT in Undergraduate Courses

The Office of the Provost provides the following syllabus statement for faculty members and instructors who wish to prohibit the use of generative AI in student work (see all three recommendations here).

The Board of Regents Student Conduct Code states the following in Section IV, Subd.1: Scholastic Dishonesty:

"Scholastic dishonesty means plagiarism; cheating on assignments or examinations, including the unauthorized use of online learning support and testing platforms; engaging in unauthorized collaboration on academic work, including the posting of student-generated coursework on online learning support and testing platforms not approved for the specific course in question; taking, acquiring, or using course materials without faculty permission, including the posting of faculty-provided course materials on online learning and testing platforms; ..."

Artificial intelligence (AI) language models, such as ChatGPT, and online assignment help tools, such as Chegg®, are examples of online learning support platforms: they cannot be used for course assignments except as explicitly authorized by the instructor. The following actions are prohibited in this course:

  • Submitting all or any part of an assignment statement to an online learning support platform;
  • Incorporating any part of an AI-generated response in an assignment;
  • Using AI to brainstorm, formulate arguments, or template ideas for assignments;
  • Using AI to summarize or contextualize source materials;
  • Submitting your own work for this class to an online learning support platform for iteration or improvement.

If you are in doubt as to whether you are using an online learning support platform appropriately in this course, I encourage you to discuss your situation with me.

Any assignment content composed by any resource other than you, regardless of whether that resource is human or digital, must be attributed to the source through proper citation. Unattributed use of online learning support platforms and unauthorized sharing of instructional property are forms of scholastic dishonesty and will be treated as such.

Essential considerations for prohibitive policies

AI platforms and tools come in various forms: 

While many of us are becoming familiar with ChatGPT from OpenAI, other large language models (Google’s Bard, Facebook/Meta’s Llama, Jasper, Rytr) will also generate written content. The variety of these platforms makes detection more complicated.

AI detection platforms generate false positives and false negatives: 

OpenAI has recently discontinued its proprietary detection tool after determining that its false positive and false negative rates were unacceptably high. Freely available tools like ZeroGPT acknowledge that false positives and negatives remain common and that their results are not always reliable indicators of AI use. The Office of Information Technology can answer questions about the suitability of Turnitin as an AI detection tool.

AI tools are already incorporated in common writing and editing platforms: 

The newest versions of Microsoft’s 365 suite (including Word, Excel, and Access) and Grammarly, a popular cloud-based proofreading tool, have AI built in. Students may need clarification on whether these built-in tools (Microsoft’s Copilot and Microsoft Editor, and GrammarlyGO) are allowable or restricted. Because these technologies are emergent and unfamiliar, students may incorporate an AI tool without knowing it.

Many multilingual writers incorporate translation and editing tools into their writing processes:

Many writers fluent in multiple languages may use translation tools, online dictionaries, and other technologies to address surface and sentence-level errors in their English writing. While these tools may not use artificial intelligence or large language models, some users have reported that the text generated by these tools can be flagged as AI-generated.

Students who use assistive technologies may already use AI-assisted tools: 

Synthetic speech technologies, automated transcription tools, descriptive audio, and other assistive technologies rely on the same natural language processing tools as generative AI. At a minimum, assistive technologies covered by academic accommodation should be exempted from AI-prohibitive policies, and students should be encouraged to use adaptive and assistive technologies that guarantee equitable participation in course activities.

Course and assignment design strategies to support academic integrity

Include an academic integrity statement in your syllabus:

Instructors can include references to policy and specific explanations for why generative AI use is prohibited in a course. For instance, if a course goal is to help students recognize a discipline's genre conventions or academic vocabulary, allowing a tool to make language and writing choices would interfere with students’ learning. Similarly, AI-generated submissions in courses requiring creative expression may defeat a course's learning objectives.

Design assignments that require selected course materials and media: 

Because AI tools scrape large swaths of writing from across the publicly available internet, they tend to perform better on general prompts than on narrowly constructed, focused assignments that require interaction with and interpretation of particular readings or materials. Inaccurate or hallucinated citations and references can often reveal generative AI output (for now).

Design assignments that draw from unique classroom experiences: 

Tasks that draw from class discussions and activities are harder for large language models to emulate. Requiring specific details or references to shared classroom experiences makes assignments more difficult to complete with AI-generated text.

Incorporate opportunities for drafting and revision in writing assignments: 

For larger or more important writing assignments, ask students to submit early evidence of their writing processes and promote using drafts. Topic proposals, initial hypotheses, preliminary methods sections, and other scaffolded assignments help students build the competence and confidence to do their academic work. Similarly, open-ended questions in response to early drafts (particularly those that probe for greater detail, specificity, or omitted information) can engage students in writing and learning processes.

Gather early samples of student writing: 

Brief, informal, and in-class writing exercises can support student learning and help establish students’ tone and style. In small-enrollment courses or courses with multiple instructors, familiarity with students' writing choices and habits can make it easier to detect something generated by an AI tool. One notable but perhaps not readily observable feature of generative AI text production is sentence and paragraph length: Heather Desaire and colleagues (2023) noted that human writers often produce longer paragraphs and are significantly more likely to use both long sentences (over 35 words) and short sentences (under 10 words).

Require students to describe their writing processes in addition to producing written documents:

Because generative AI output relies on statistical processes and inferences, the choices of what, how, and why particular terms, examples, and information are selected and included are not always clear. Asking students to explain their writing choices and the resources used to generate appropriate answers can help reveal the human agents behind well-written documents. Our Teaching with Writing blog describes revision plans and memos as two forms of this process-focused writing.