Is Peer Response Still Relevant in the Era of Generative AI Feedback?

Authors
Daniel Emery
Jessa Wood
Estimated Reading Time
7 minutes

Recently, many of us have seen Gemini integrations pop up on University Google platforms. Tech companies rolling out consequential changes to GenAI tools, access, and capabilities seem to pay little attention to the semester calendar or our existing assignments, assessments, and course policies. These integrations vastly reduce the friction of using GenAI tools, making their use easier and more automatic for writers. As these tools are integrated directly into word processing and media creation programs, avoiding AI—whether out of personal conviction or in deference to instructor policy—requires specific effort and attention.

A side-by-side image of students working on writing.
This AI-generated image contrasts two writing scenes: on the left, a student interacts with her laptop while seated across from another student in a Minnesota sweatshirt, also using a laptop. Both students have serious expressions and keep their eyes on their laptops. The right side shows two students interacting at a table, with a printed document between them, one of whom appears to be offering feedback to the other. Both students are smiling.

As instructors who promote learning through writing, we benefit from considering for ourselves, and discussing with students, what might be gained and lost with the ubiquity of new technologies. This blog post addresses a small example of a widely available/nearly unavoidable tool: Google’s Writing Editor Gem in Gemini. Like spelling and grammar tools built into earlier word processors, Writing Editor is only a click away for most students. Unlike these previous tools, however, Generative AI tools are dramatically more likely to recommend significant changes to sentence construction, or even insert themselves into students' writing processes (like an even more aggressive Microsoft Clippy).

Given our growing access to near-instant feedback, students and even instructors may wonder: why incorporate familiar writing feedback activities like peer review when AI tools appear faster and more efficient?

In this blog post, we’ll explore some potential benefits of the Gemini Writing Editor tool and contrast it with the research-backed classroom feedback practice of peer response. For those open to AI tools, Google’s Writing Editor can provide a helpful supplement to peer review—but can’t simulate or replace its core benefits. Talking openly with students about the benefits, drawbacks, and differences among these sources of feedback can provide a fruitful opportunity to help students make thoughtful choices about their writing process and continue to engage productively with their peers’ writing.

Google’s Writing Editor: Quick, Conventional, and Customizable

While many users are most familiar with free versions of AI chatbots, experienced users are much more likely to employ specialized and paid models—like those available to UMN users through Google Gemini’s suite of add-on Gems—that often produce far more accurate and targeted responses. The Gemini Writing Editor is one such tool. Available to those who opt in to the Enterprise Version of Google Gemini, the Writing Editor is accessible under the Gems heading in the left column (one of several “standard” bots, including a chess tutor and coding assistant). Unlike the spelling and grammar checks in the Microsoft suite, the Writing Editor’s feedback can be guided and shaped by student users through prompts. As with most prompting activities, the more detailed and specific the prompt, the more effective and focused the output. (Additional advice on prompting is available from a range of sources at UMN, other institutions, and Google).
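For example (the wording here is our own illustration, not a prompt recommended by Google), a student seeking sentence-level feedback might write: “This is a draft introduction for an undergraduate biology lab report. Point out sentences that are unclear or wordy and explain why, but do not rewrite them for me or comment on my evidence.” A prompt like this narrows the tool’s focus and keeps revision decisions in the student’s hands.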

Feedback from the Writing Editor is almost immediate, and its recommendations often conform to familiar expectations of publicly available, published writing. The size of Google’s training set and the sophistication of its algorithmic parameters mean its GenAI tools can be very effective for sentence-level editing. Instructors can also model customizations to the editor based on their expert knowledge of the conventions of writing in their fields.
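Similarly (again, a hypothetical illustration rather than a built-in feature), a statistics instructor might share a course-specific prompt such as: “In this course, ‘significant’ refers only to statistical significance; flag any place where my draft uses the word loosely.” Modeling prompts like this shows students how disciplinary conventions can shape the feedback they request.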

As with any AI output, advice from the Writing Editor is subject to the platform's limitations and biases and may inadvertently be generic, idiosyncratic, or inappropriate. In addition, AI tools may struggle with highly specific, disciplinary uses of otherwise conventional terms (e.g., effect, significance, uncertainty, random). Some users also note that the manufactured cheeriness of AI feedback, designed to keep us online and engaged, can create an inappropriate impression of success.

Despite these limitations, Writing Editor’s low barriers to student access, potential for useful suggestions, and opportunities for students to spend more “time on task” with their writing make it a useful tool in some courses.

Peer Response Feedback: Contextual, Engaged, and Reciprocal

Fellow students are not as easily available or as quick as Generative AI, and lack the terabytes of comparison documents that power Large Language Models. At the same time, peers offer several advantages that GenAI tools find difficult to simulate.

First and foremost, peer feedback involves interactions among students (real humans!) who share a common course context, experiences, and materials. Peers in a course base their comments on shared and developing understanding, and this shared engagement makes their feedback and investment authentic and motivating to writers. Recent research from MIT compared students' revision activities when they were told feedback came from a human reader versus from an LLM. When they believed the feedback was written by human readers, students spent more time on task, revised more deeply, and reported greater satisfaction with their finished drafts.

The interactions students have with peers differ from those they’re socialized to have with digital tools. Without very careful prompting on tone and style, GenAI output will typically be evaluative and corrective, while good response activities encourage peers to respond descriptively and comparatively. The near-instantaneous direct-correction approach exemplified by GenAI—combined with the tendency of users to view powerful technologies as near-omniscient—means many students might not challenge the prescriptions of an AI tool in the same collaborative and dialogic ways they are likely to engage with peers in a well-designed peer response activity.

Finally, research on peer response (Hart-Davidson and Graham Meeks, 2017) has shown that students learn as much from reviewing other students’ drafts as from the feedback they receive. The opportunity to review additional in-progress writing that addresses the same prompt can help students recognize features of success or opportunities for improvement in their own texts.

In short, well-designed peer review experiences provide a kind of interaction with writing that can’t be replicated with GenAI tools. Sharing these differences between good peer reviews and GenAI outputs with students can help focus their peer response energy on high-level features of writing to which they can most fruitfully respond and may discourage them from attempting to outsource their own peer review work to GenAI tools.

Augmentation, not Automation: Consider Different Modes for Different Goals

Just as peer response confers benefits distinct from instructor feedback, GenAI feedback can provide additional information to student writers and, when scaffolded appropriately, encourage intentional choices. In each case, the value of feedback emerges in how student writers engage with what they learn. Regardless of the mode of feedback, students benefit from opportunities to reflect upon and react to input on their writing in progress. To make any feedback more effective, consider inviting students to complete these ‘closing-the-loop’ tasks to make their thinking visible:

  • Identify features of valuable, actionable feedback to share with human reviewers
  • Provide a summary of the key insights from the feedback (in writing or a short video) to the instructor, who can compare students’ interpretations to the feedback they received
  • Identify their priorities for revision and construct a revision plan
  • Include a cover letter or revision memo along with the final submission

If you’d like to talk more about GenAI in your course contexts, consider attending our upcoming workshops, stopping in for one of our teaching hunkers, or booking a consultation with a WAC team member.

We love learning from our dedicated WAC Update and blog readers (and thank you for being here!). We invite you to share below: how are you fostering critical conversations and approaches to GenAI tools with your students?


Image Credit: This image was created using Google Gemini (Enterprise Edition) on 2/26/2026, prompted by the author.

Comment

moses004

An ability to provide valuable feedback remains an essential form of cognitive apprenticeship. When we teach students how to provide targeted, relevant feedback based on learning objectives and assessment criteria, they build competence in critical thinking, problem-solving, teamwork, leadership, and communication. And the better we become at teaching students how to give feedback, the better we become at training AI to engage writers in timely, thought-provoking dialogue about their work.