Here There Be Dragons

Re-scripting Responses to Plagiarism After AI

by Emma King

Things have changed.

AI-generated work haunts classrooms, a force intricately or clumsily woven into essays and assignments. As an educator, I’ve been grappling with the novel necessity of learning to identify the subtle fingerprints of artificial intelligence, a skill that demands attention to the idiosyncrasies of language and style. Plagiarism, once a more-or-less straightforward act, has become more elusive, camouflaged by the intricate algorithms of machine-generated prose.

In response, I’ve chosen to re-affirm my extant philosophy of instruction, in which draconian responses have no place. The first offense is met not with punitive measures but with dialogue on both individual and classroom-wide levels. I start with a conversation that attempts to unravel the motives behind the use of AI, understanding the struggles that lead students to seek the aid of artificial minds. I envision these interactions as a bridge over the chasm of misunderstanding, a moment of connection rather than accusation.

Within this dialogue lies the key to unlocking the potential for growth. 

AI, after all, is not a forbidden fruit but a tool—an extension of our minds and bodies that we can use. I also suspect that prohibiting its use risks making students adept at circumventing such bans, veiling their actions in increasingly undetectable ways. The goal of many of these classes is to improve writing and critical thought, so why not use AI to help students?

So, I embrace its presence as a catalyst for critical thought, a prompt for students to dissect the rhetorical choices made by the algorithm, to question its assumptions, and to engage in the age-old practice of literary interpretation.

Consider a student deciding to use AI to generate summaries or compile lists of themes. The challenge (assuming that I have either intentionally requested the use of AI or that this list has been identified as AI generated content) is not in dismissing the machine’s output outright but in prodding students to scrutinize its choices. Reflecting on the rhetorical nuances embedded in AI-generated work becomes a crucial exercise. Can they construct a compelling argument against its conclusions? The goal is not to reject the machine but to push beyond agreement, leveraging its output as a springboard for deeper analysis and nuanced understanding.

This way, mistakes, far from being stumbling blocks, are opportunities. Writing can feel like a formidable task, and becoming a proficient writer is hard. In recognition of this, I always try to offer tools and strategies—not as crutches but as supports to assist students in finding their initial foothold. AI is another such opportunity. In this context, the machine becomes a collaborator, analyzing and responding to student ideas, providing insights that serve as guideposts in their literary exploration.

Yet, I also know this: AI is not a substitute for genuine thought. It does not imbue us with the ability to think critically or construct compelling arguments. It is a tool, a means to an end, but never the end itself. The emphasis must be on instilling in students the understanding that technology can aid, but it cannot replace human intellect and creativity.

The heart of my approach lies in acknowledging the desires of my students; most want to be taken seriously, recognized as autonomous adults navigating the complexities of academia and careers. Responses to plagiarism or cheating are crafted with a delicate touch, mindful of these needs—especially since so many students lack context for what, exactly, plagiarism is. So, mistakes are met not with punitive measures but with the opportunity to learn what errors were made, to learn how to repair those errors—to rectify and grow.

One interaction stands out. I had a student who was so overwhelmed with the adjustment to college work that instead of seeking help, asking for an extension, or starting early, they resorted to AI to analyze a passage from N.K. Jemisin’s steampunk alternative history, “The Effluent Engine.” Daunted by close reading, clouded by insecurity, they wanted (maybe needed) a quick solution. The result was a clearly generated list:

This paragraph is a complex exchange that delves into themes of race, power, and manipulation. These are the most important components:

Interrogation and Challenge:

  • The opening line, “And then what?” sets the tone of skepticism and doubt. It suggests that the speaker is not easily convinced and demands further explanation or justification.

Concerns about Profit and Ownership:

  • The speaker expresses concerns about the patent and profit associated with a certain process. This indicates a focus on economic interests and a desire to understand who stands to gain from the discussed process.

Racial Undertones and Stereotypes:

  • The scowl and subsequent statements reveal underlying racial prejudices. The mention of a “comely face” and the reference to “dusky women” evoke historical stereotypes and colonialist attitudes, portraying the speaker as someone who views others through a lens of racial superiority.

Historical Reference and Guilt:

  • The speaker brings up the seduction of forefathers by “dusky women,” suggesting a historical guilt or perceived wrongs committed against the speaker’s ancestors. This adds a layer of complexity to the current situation, as if the speaker sees a connection between past actions and the present interaction.

Cultural Identity and Purity:

  • The speaker asserts the purity of their French blood, emphasizing a sense of cultural and racial superiority. This assertion, however, is juxtaposed with the derogatory reference to the speaker’s own Creole identity, indicating a conflicted relationship with their heritage.

Manipulation and Prejudice:

  • The speaker accuses the protagonist, Jessaline Dumonde, of being a tool for manipulation, suggesting that she could be sent to tempt someone for profit. This implies a lack of trust and reinforces racial stereotypes, portraying Jessaline as a potential seductress with ulterior motives.

Dehumanization and Generalization:

  • The concluding remark about coming “straight from the jungles of Africa” is particularly dehumanizing, reducing Jessaline to a stereotype and erasing the complexity of her individual identity. It reflects a broader tendency to generalize and stereotype people based on their racial or ethnic background.

In summary, this paragraph is rich in its exploration of racial dynamics, power struggles, and historical baggage. It brings attention to the characters’ identities, prejudices, and the complexities of their interactions, creating a tense and charged atmosphere within the narrative.

The structure and prose clearly indicated the presence of AI. Worse, it solidified my suspicions about three other students, who had submitted nearly identical attempts reworked out of list form. These students had all leaned heavily on the same AI-generated prompt. The ideas in the analysis were relevant, but the lack of creativity, originality, and contextual insight reinforced to me that a tool like AI is only as good as its user; I would never give a novice a chisel and expect them to carve David.

Engaging in a candid conversation, I steered away from punitive measures. Instead, I underscored the pitfalls of relying on AI—not just its detectability but its impact on intellectual and creative development. Relying so heavily on something that other people coded (and which has limited capacity for higher-level or novel analysis) not only undermines our students’ intelligence and creativity but also limits their capacity to improve. That is, using AI carries the real possibility of eroding their nascent ability to question—to think, analyze, and elaborate.

Our dialogue shifted from self-reproach (they were terrified of being punished, embarrassed to have been caught out, and upset by their apparent failure) to introspection. Why did they rely on ChatGPT? What about their own minds felt insufficient?

Then, I encouraged my students to interrogate the AI response. Why did the machine produce these words, in particular? What about those ideas seemed superior to their own? What evidence did it draw upon, and where did it fall short? What parts were effective for them and what left them feeling unfulfilled? How would they change it? What outside knowledge could they use to reinforce their ideas? What historical notions were played with and abandoned? How could they focus on one idea and explore it to fulfillment? It was an exercise in reclaiming agency, urging them to think critically, analyze, and elaborate. The emphasis was on questioning, prompting exploration of the intricacies of each line and the motivations behind it.

Then, I asked them (as an exercise) to make a second attempt at the assignment by asking similar questions of themselves. What patterns did they see in the original text? Why might those patterns exist? What ideas is the author playing with? How do they know? Guiding them through these inquiries, we scrutinized the original text, identifying patterns and exploring concepts hinted at by Jemisin. The exercise went beyond the superficial, fostering an awareness of the layers that AI might overlook. The goal was not to undermine their confidence but to reinforce that AI is a valuable tool, not a surrogate for their intellectual prowess. The result was that these students gained not only a clearer understanding of the limits of AI but also an awareness of how they, themselves, had ideas to offer.

In essence, I sought to dismantle any sense of shame associated with resorting to AI and replace it with a foundation of confidence. The message was clear—AI is a resource, a means to an end, but not a replacement for the unique perspectives each student brings. The lesson extended beyond AI’s limitations, underscoring the importance of nurturing their capacity to question, think independently, and contribute original thoughts to the discourse.

While AI-generated content is a challenge for educators, I’m of the mind that it is a usable challenge, a path forward. At minimum, I want students to be able to use it better, more efficiently, and with more intention. If our students can improve on what the AI generates, that’s a huge start! This kind of interaction forces critical inquiry and engagement at a high level—even if it isn’t completely original. That is, revising and re-envisioning AI-generated ideas and writing is an exciting new pathway to helping our students gain confidence in their own writing.

This incident with AI-generated prompts serves as a reminder that, as educators, our role is not to stifle progress but to guide its meaningful integration. My approach is rooted in understanding and mentorship, fostering an environment where students recognize the value of AI as a tool while embracing their own intellectual autonomy.

Still, the rapid tidal shifts have left me feeling a bit like I’ve stumbled into a classroom straight out of an Asimovian daydream. Our students, more cyborg thinkers than metal-clad androids, are now navigating the complexities of academia with AI flair. I imagine our students as the brainy, novice precursors of Asimov’s androids—no three laws to adhere to, but certainly some unwritten academic codes: think critically, question fearlessly, and remain open to novel modes of thought.

I hope to retain this openness myself, to become a better mentor as I navigate the pitfalls of educating alongside the rise of AI with a sense of humor and empathy. As I encounter our unsure cyborg-scholars, I intend to welcome the melding of human intellect and artificial intelligence. AI isn’t the villain in this story; it’s the trusty sidekick, the Watson to our Sherlock, the insightful quip or prompt that adds a layer of reflexive insight to our own recursive attempts to learn.