Amid all the hype and doom-mongering over AI text generators (ChatGPT and its ilk), there is a blunt reality: these products and the profit-seeking corporations that market them are not our friends, and they have no place in education at this time.
Let’s be clear: the purpose of these products (for that’s what they are, not tools) is to extract data, and therefore value, from users. Will they produce plausible, amusing, alarming, and sometimes accidentally useful texts along the way? I don’t care, but your mileage may vary.
Here’s the only thing you need to know about today’s generative AI: it is agnostic to truth. The texts it generates are not true, false, fake, misleading, or even “bullshit” in the philosophical sense, because they are produced without intent, without any concern for telling, revealing, hiding, or hindering the truth. Sometimes they appear to be accurate, sometimes not, but fundamentally the algorithm isn’t programmed to know or care. This is a hard stop for me. In Aleksandr Tiulkanov’s model, I simply can’t get past the first question: Does it matter if the output is true? Yes or no?

[Flowchart of Aleksandr Tiulkanov’s decision model. Source: @shadbush on Twitter]
One metaphor that’s bouncing around Twitter this week is a cake. We have a choice, right? Buy a Twinkie, use a cake mix, or bake a cake from scratch. In this metaphor, AI is supposedly the cake mix we can take and bake with little effort, resulting in a mediocre, if not outstanding, cake. But it’s not. It’s a cake mix delivered in a black box with no label, no list of ingredients, no instructions for use, and no responsibility for whether it tastes good or gives you food poisoning. It could be a box of iron filings for all you know and for all its vendors care. And here’s the kicker: you only find out by signing up with your private data, possibly paying a premium, and tasting it. No thanks.
So, here’s what I’m not doing (yet):
- Encouraging my students to use ChatGPT to compare its output with their own writing, to brainstorm, to write first drafts, or to solicit feedback. Asking them to sign up for these products violates my sense of ethics. There’s no shortcut to doing the work yourself.
- Surveilling, detecting, or punishing AI use. I discourage the use of generative AI and ask students to declare if and how they used it. But my assignments are scaffolded, relevant, self-reflective, and flexible, and I respond with feedback, not grades, so they incentivize human writing.
- Giving students feedback with AI. They paid for human feedback, and that’s what they’re getting.
- Generating classroom or professional materials with AI. I don’t need to: these products add no functionality I don’t already have, and it’s not worth the hassle of checking their work.
- Buying into the hype about time-saving. I’ve heard that tune before. Efficiency is less interesting to me than effectiveness. I am not running a factory line: I’m teaching.
The rush to incorporate AI into everything is ill-considered. The harm these products could do (to education, to democracy, to communication) is massive. The benefits are untested. And there’s simply no need for education to leap onto the bandwagon just because it’s there. If we value process over product, thinking over generating, effectiveness over efficiency, and humanity over automation, then we can and should politely say “no thanks.”