Refining GenAI Policies: Three Questions to Initiate Conversations

Author
Daniel Emery

The Provost’s Office and the Senate Committee on Educational Policy at the University of Minnesota provided strong early guidance for instructors on student use of ChatGPT and other GenAI technologies in classrooms (“embrace,” “allow,” and “prohibit”). Since the publication of those original policies, the landscape of GenAI tools and the quality of GenAI outputs have changed dramatically. GenAI is pervasive in personal, professional, and academic contexts, and educators and technologists reluctantly admit that GenAI use is challenging to detect. Nevertheless, instructors have several options for allowable uses of GenAI tools and can invite meaningful conversations with students about writing in their fields by exploring the benefits and risks of these technologies. This blog post introduces three next-step questions to consider regarding the use of GenAI in your courses.

What does it mean to “allow limited usage of GenAI”?

Most faculty and instructors have settled on an “allow” policy as a compromise between the potential free-for-all of an “embrace” policy and the potentially heavy-handed, difficult-to-enforce choice of prohibition. Nevertheless, important questions follow about when and how GenAI use is encouraged, permitted, or restricted under an “allow” policy.


For example, it may be tempting to allow GenAI use on low-credit, small-scale assignments but restrict it on high-stakes, high-value assignments and assessments. Such a policy concentrates human effort where it counts in the gradebook and permits shortcuts when the stakes are lower. Perhaps surprisingly, however, this policy may work against a course’s learning goals and students’ learning processes when considered from a writing-to-learn perspective.

Small-scale assignments may count less in the gradebook, but they may matter more as opportunities for learning. Discussion posts, in-class writing, reading journals, and other small assignments provide space for authentic student thinking, partly because low-stakes assignments don’t penalize risk-taking, speculation, or honest confusion. Additionally, short student responses take less time to read and grade, leaving more time to offer formative feedback or to consult directly with students and groups. Meta-analyses of pedagogical research show that early feedback produces the most significant changes in conceptual knowledge and procedural understanding and allows space for growth through repetition and revision (Shute, 2008; Wiliam, 2011). For these reasons, instructors might consider preserving small assignments as spaces for person-to-person communication between guide and novice, without the intrusion of GenAI.

By contrast, large assignments often involve multiple steps of production, and high-stakes assignments often demand the most attention to surface features of writing (grammar, usage, spelling, punctuation, citation). These circumstances might be ideal for GenAI augmentation. Early in the process, a student could consult a GenAI tool to identify relevant keywords or to narrow a research focus. They might use AI research tools to locate appropriate sources and to summarize complex research articles in support of their reading comprehension. Finally, students might use generative AI for sentence-level feedback or to encourage conciseness. Ultimately, an assignment’s learning goals and purposes should guide instructors in deciding where, when, and how AI tools might be used.

What counts as “Incorporating any part of an AI-generated response in an assignment”?

Many instructors oppose GenAI use in their courses because they see little value in commenting on AI-generated finished writing. After all, what is the value of providing feedback unless a human author is behind the writing? At the same time, it is critically important to distinguish between using GenAI in the writing process and “incorporating any AI-generated response” in an assignment, as described in the “prohibit” context.

For example, permitting AI use early in the writing process may shortcut learning dramatically even if no AI-generated writing appears in the finished document. Although many instructors recommend AI for brainstorming and prewriting, using generative AI to create article summaries can impede students’ development as writers in several ways. GenAI tools regularly produce hallucinations and confabulations, so any AI-generated summary is a potential minefield of random word associations or obscure connections based on idiosyncratic lexical features. Despite GenAI’s well-advertised proclivity for fabrication, students’ misplaced trust in their AI tools might lead them to forgo the additional labor of reading in favor of the easy shortcut of a weak AI summary. Finally, the cognitive work of unpacking a complex text helps students recognize how writing works in their particular academic fields. Missing that chance to read good writing can make finding a personal and professional voice much more challenging. Paradoxically, a student’s fully human writing about AI-created summaries could be inferior to an AI-assisted response built on the student’s own outline and overview of a complex research paper.

Conversely, using AI near the end of the writing process could offer multiple opportunities for rich learning. AI tools like GrammarlyGO can help identify surface-level errors and gently introduce students to features of academic voice and register. This assistance may be invaluable for multilingual writers or students whose home dialects are undervalued in formal educational contexts. GenAI tools can also help students identify features of effective writing or conventions particular to a writing context (e.g., when to use a serial comma: often, per the APA style guide, but never, according to the Associated Press). Supplying the correct qualifier, an appropriate article for a non-count noun, or a missing comma could make AI a valuable sidekick for writers completing their work.

How do we understand “with appropriate citation”?

Even in the “embrace” case, the opportunity to incorporate AI in writing rests on a caveat regarding citation. Unfortunately, accurate attribution of AI content is much more complex than adding a parenthetical citation. Unlike citations of sentences and paragraphs from sources, AI attribution could cover assistance in reading, research, outlining, drafting, and editing, down to sentence-level edits and word-choice recommendations. Metaphorically, traditional citational attribution resembles breadcrumbs on a trail or chips in a cookie: generally recognizable, uniform, and distinguishable from their context. AI attribution in the same form would be like tagging individual bits of gravel in a highway, with some on the surface, some providing structure, and others in a substrate invisible to most users. A general attribution statement often replaces this cumbersome and impractical individual tagging, but even that comes with unintended risks.

Concerningly, AI attribution also introduces biases into grading. Recently published research in Applied Linguistics shows that the mere disclosure of AI use in the academic writing of multilingual students depresses scores on their written work (Tan et al., 2025). Disclosing AI use for translation and editing may suggest to instructors that the intellectual merit of a piece of writing belongs to the AI rather than to the student writer. The cruel irony may be obvious: well-intentioned students who disclose their AI use may receive lower scores than others, including those who use GenAI tools but violate institutional policy by failing to disclose.

A hopeful coda: Conversations about values are better than policy

The challenges posed by these questions might lead a well-intentioned instructor to anxiety, despair, or frustration: "Now you tell me? Where were you in December? I’m stuck with a policy with unintended consequences.” As the three questions above illustrate, every policy choice, whether embracing, allowing, or prohibiting, creates complications and invites challenging edge cases.

The very good news is that whatever the formal policies of your syllabus, you have a great deal of room to influence your students’ choices. Just because students are allowed to use AI on small assignments doesn’t mean you can’t illustrate why it may be a bad idea. Just because your AI policy doesn’t discuss AI for reading (or translation) doesn’t preclude you from talking about these practices and reiterating the purpose and value of writing in your course. You can reach shared agreements by thinking with your students and making your values transparent. When students are comfortable being transparent about their use of AI, and instructors are open to students’ good-faith efforts to use these tools, we all gain greater clarity on how writing is changing in the AI era.

References

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

Tan, X., Wang, C., & Xu, W. (2025). To disclose or not to disclose: Exploring the risk of being transparent about GenAI use in second language writing. Applied Linguistics.

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14.

The Writing Across the Curriculum team is happy to consult with you on ways to craft assignments, assessments, and course policies. We also eagerly invite your comments below regarding your solutions to AI conundrums or your strategies for maintaining academic integrity in the age of AI.