GenAI and Student Writing: Additional Considerations for Multilingual Writers

Author
Daniel Emery

Generative AI has created challenging new realities for instructors. While initial concerns focused on academic integrity and the risk of GenAI replacing student work, concerns are now emerging about how these advances may affect students differently. Last fall, Indiana University faced media scrutiny and a lawsuit after Turnitin, its AI-detection vendor, systematically produced false positive results for work submitted by multilingual students. In this final blog post of the semester, we’ll consider some critical differences in GenAI use between multilingual and monolingual students, the challenges posed by the worldwide proliferation of AI tools, and some strategies for treating students equitably and fairly in GenAI course policies.

How Multilingual Writers Use GenAI

Image: a large grey replica of the Rosetta Stone.

Many writers are experimenting with generative AI as a writing tool, but how they use it may differ based on their linguistic backgrounds and prior educational experiences. While domestic students most often report using GenAI to generate ideas and brainstorm, multilingual students report proofreading, translation, and rewriting for conciseness as their most common uses of AI tools. Students who receive frequent instructor comments on grammar and editing are likelier to use generative AI tools to detect less conventional usage, errors, and omissions. Indeed, some faculty members encourage using these tools to “clean up” distracting mechanical errors.

Because these student populations use these tools for different purposes, restrictions on GenAI use carry different consequences. Consider a course policy that permits generative AI in prewriting (brainstorming, outlining, etc.) but restricts AI-generated text in final submissions. Such a policy may inadvertently advantage students with strong English-language experience and disadvantage those without it.

Warschauer et al. (2023) note that multilingual academics writing in English often face additional challenges in presenting and publishing research, requiring extra time, resources, and money to secure assistance with copyediting and English conventions. For multilingual student writers, AI tools promise an effective and efficient way to mitigate some of those challenges. At the same time, their status as novice academic writers may leave them less adept at revising GenAI output and more likely to be fooled by the authoritative-sounding hallucinations common in AI-generated text.

GenAI Detection and the Problem of Technical Solutions

Most faculty know that the technologies used to detect AI-generated text can produce both false positive and false negative results. A group of Stanford researchers has demonstrated a more significant and troubling finding: AI detection software systematically reports more false positives on writing by non-native English speakers (NNS). Liang et al. (2023) tested seven detection software packages against sample sets of student-authored writing. Detectors misidentified at least 20% of responses written by NNS as AI-generated, with the worst platform flagging almost 98% as AI-composed.

While the detection algorithms are often proprietary and unavailable for scrutiny, likely causes of this elevated flagging rate include word choice and sentence length. Students with a more limited English vocabulary may make fewer unexpected or novel word choices. Similarly, multilingual students may systematically use shorter sentences with simpler syntactic structures, or rely on conventional sentence stems to express themselves in English. Consequently, students with less experience writing in English (or who are perceived to be less fluent) are the most vulnerable to misidentification.
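Although no detector’s internals are public, the surface features just described can be approximated with simple text statistics. The short Python sketch below uses invented example passages (not data from the Liang et al. study, and not any actual detector’s algorithm) to show how plainer, more repetitive prose scores lower on vocabulary variety, the kind of signal a detector might over-weight:

```python
# Illustration only: AI detectors are proprietary. This sketch measures
# two surface features discussed above (vocabulary variety and sentence
# length) on invented sample passages.
import re

def surface_stats(text):
    """Return (type-token ratio, mean sentence length in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    ttr = len(set(words)) / len(words)      # vocabulary variety (0-1)
    mean_len = len(words) / len(sentences)  # average words per sentence
    return ttr, mean_len

# Two hypothetical passages: one with varied diction, one with the
# shorter sentences and repeated sentence stems described above.
varied = ("The committee scrutinized the proposal meticulously. "
          "Its unconventional framing provoked spirited debate.")
plain = ("The committee looked at the plan. The plan was new. "
         "The committee talked about the plan.")

for label, text in [("varied", varied), ("plain", plain)]:
    ttr, mean_len = surface_stats(text)
    print(f"{label}: type-token ratio={ttr:.2f}, words/sentence={mean_len:.1f}")
```

The plainer passage scores markedly lower on lexical variety and uses shorter sentences; a detector trained to treat "predictable" text as machine-generated could penalize exactly this kind of writing, regardless of who produced it.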

Recommendation: Make AI Use a Topic of Conversation

Policy recommendations for generative AI put instructors in the driver’s seat to determine its appropriate use. At the same time, facilitating a discussion of AI use in your classroom may help students understand the reasons behind a course policy and its importance to their academic performance. In courses where grammar and editing are less emphasized, it can be valuable to remind students that they need not “clean up” their writing with a generative AI tool (although spell check is still a good idea). In courses where these issues are paramount, explain why a strict standard is professionally appropriate or expected (such as the AP style rules of journalism or the need for precise labeling in the medical laboratory sciences).

In courses where GenAI use is permissible, ask students to discuss which tools they are using and what they hope each tool might provide. As students note the stages in their writing processes where they use GenAI tools, you can identify the affordances and drawbacks of that use. It may be acceptable for a bioinformatics student to use Microsoft Copilot to generate Python code for a particular project, since the coding work is secondary to other learning outcomes. On the other hand, if a student uses Google Bard to identify peer-reviewed research in their topic area, you might direct them to a subject librarian as a more effective alternative. Engaging students in a dialogue about their writing processes makes it easier to highlight the important cognitive work of an assignment and the learning goals of the course.

References

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4, 100779. https://doi.org/10.1016/j.patter.2023.100779

Sebastian, F., & Baron, R. (2024). Artificial intelligence: What the future holds for multilingual authors and editing professionals. Science Editor, 47. https://doi.org/10.36591/SE-D-4702-01

Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., & Tate, T. (2023). The affordances and contradictions of AI-generated text for writers of English as a second or foreign language. Journal of Second Language Writing, 62. https://doi.org/10.1016/j.jslw.2023.101071

Further Support

Check out our Teaching Resources on the TWW website for even more information about teaching with writing. Our WAC program hosts the popular Teaching with Writing event series, which will return in the fall of 2024. In August, we will offer the Faculty Teaching with Writing Seminar and two workshops for graduate students working with student writing.

The WAC program is open 12 months a year! Contact us to schedule a phone, email, or face-to-face teaching consultation.