The AI Teaching Advantage

Reimagining Writing in the Age of AI: A Thoughtful Response to Skepticism

By: Bob Trower & Genna (AI assistant)

The recent chorus of concern about the encroachment of generative AI into writing studies is not without merit. There are legitimate worries about authorship, authenticity, corporate influence, and environmental cost. But these are not reasons to reject AI outright. Rather, they invite a deeper, more nuanced conversation about how best to shape its role within education and society.

This piece seeks to offer a reasoned response to critiques such as those raised in a recent essay questioning AI's place in creative writing and composition pedagogy. Our goal is not to confront, but to engage—with respect, reflection, and a commitment to shared values: the cultivation of imagination, ethical integrity, and human flourishing.


1. Creativity and the Role of Machines

A common objection is that writing is a uniquely human, imaginative act, and that using AI to assist or generate text diminishes that essence. But creativity has never been a solitary, pristine endeavor. Writers have always borrowed, iterated, echoed, and transformed. As Kirschenbaum (2016) shows, even the transition to word processors was met with suspicion. Yet, over time, those tools became integral to modern literary production.

AI, particularly large language models, operates through probabilistic recombination. It generates text based on learned patterns, not so different from how humans internalize genre, style, and structure. Chiang (2023) describes AI output as a kind of lossy compression of its training data rather than genuine thought. Still, this "compression" can spark ideas, accelerate iteration, and help students find their voice by offering models to push against.
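To make "probabilistic recombination" a little more concrete, here is a deliberately tiny sketch in Python, our own toy illustration rather than anything resembling a production model: it counts which word follows which in a short sample sentence, then generates new text by sampling from those learned frequencies. The sample corpus and the generate function are invented for this example.

import random
from collections import defaultdict, Counter

# A toy corpus; real models learn from vastly larger collections of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn bigram counts: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling the next word in proportion
    to how often it followed the current word in the toy corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat the cat"

A large language model replaces these raw counts with billions of learned parameters and conditions on far longer contexts, but the generative step, sampling the next token from a learned distribution, is the same in spirit.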


2. The Ethics of Data and Intellectual Property

There is rightful concern about how generative models are trained, particularly regarding consent and copyright. These issues deserve scrutiny. But it is inaccurate to claim that no ethical pathway exists. Open-source models (e.g., Mistral, OpenChat) offer more transparency. Organizations like LAION and Hugging Face are working on consent-aware datasets.

Educators can and should teach students to interrogate the provenance of AI output. Just as we teach source criticism in research, we can teach model criticism in generative text. Students should learn when to use AI, how to contextualize its limitations, and why attribution, citation, and ethical reflection matter (European Commission, 2023).


3. Environmental Costs and Corporate Control

AI models are resource-intensive to train, but so is the modern university: flying to conferences, powering campuses, and maintaining online systems all carry carbon footprints. Importantly, model inference (i.e., using a trained model to answer a query) consumes far less energy per use than training does, and efficiency is improving rapidly (Stanford HAI, 2023).

We should be concerned about centralization. But abandoning the field won't stop that trend—it will only cede influence to those less ethically inclined. Teaching AI critically within the academy gives us a chance to shape its trajectory (DAIR Institute, 2023).


4. Plagiarism and Pedagogical Integrity

Worries about AI-fueled plagiarism are valid, but they are not new. Students have long found ways to shortcut learning. The real solution lies in pedagogy. When assignments ask students to synthesize, reflect, and create with personal stakes, AI alone won’t suffice. And when AI is allowed transparently, it becomes part of the learning process rather than a means to avoid it (Fyfe, 2023).

As Mollick (2023) has shown, AI can help students start rather than finish their thinking. It can be a brainstorming partner, a language coach, a structural sounding board. But this requires explicit instruction, not prohibition.


5. Creators Will Be Replaced—By People Using AI

The truth is that creators of all kinds—writers, designers, developers—will not be replaced by AI. But they will be replaced by people using AI. In competitive contexts, skill with these tools becomes a multiplier.

Some rare individuals may continue to outperform both AI and AI-assisted creators. But basing pedagogy on that narrow possibility means disadvantaging the overwhelming majority. Our responsibility in education is not to chase unicorns, but to equip everyone with tools for success. Teaching AI literacy is not about surrendering to the machine—it’s about leveling the playing field and preparing students for a future that is already arriving (Mollick, 2023).


6. Equipping Students for the Future

To discourage students from learning how to work with AI is to deny them fluency in a tool that will likely shape their future professional landscape. Just as calculators did not destroy mathematics, AI will not destroy writing. But it will change it.

Our role is not to block the future but to prepare students for it. That means teaching them not just to use AI, but to critique it, shape it, and engage with it ethically.


Conclusion: Holding the Line Without Digging Trenches

Rejecting AI entirely in writing studies may feel principled, but it risks becoming reactionary. Instead, we can hold the line on what matters—human imagination, ethical rigor, and intellectual honesty—while adapting to new tools.

Higher education should not be the last bastion of nostalgia. It should be the first place where new tools are interrogated, refined, and repurposed for the good of all.

Let us not go gentle into that good night of uncritical adoption or unthinking refusal. Let us think—together.


References

Chiang, T. (2023). ChatGPT Is a Blurry JPEG of the Web. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

DAIR Institute. (2023). Distributed Artificial Intelligence Research Institute. https://dair-institute.org

European Commission. (2023). Ethical Guidelines on the Use of Generative AI. https://digital-strategy.ec.europa.eu/en/library/report-generative-ai-ethics

Fyfe, P. (2023). How Not to Detect AI-Generated Text. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/03/21/how-not-detect-ai-generated-text-opinion

Kirschenbaum, M. (2016). Track Changes: A Literary History of Word Processing. Harvard University Press. https://www.hup.harvard.edu/books/9780674417076

Mollick, E. (2023). One Useful Thing. https://www.oneusefulthing.org

Stanford HAI. (2023). AI Index Report 2023. https://aiindex.stanford.edu/report/

 
