Three pieces this week to counter the dangerous, headlong, blindfolded rush into accepting commercial generative AI products as if they were inevitable (they’re not):
- Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers – no, obviously, ChatGPT doesn’t know whether it wrote your students’ papers. It’s not intelligent. It’s a parlor trick that predicts text on the fly. Think chattering monkeys with a short working memory. It’s hard to imagine a case like this won’t end up in court, but hats off to Rolling Stone for the best attributive phrase of the year: “a campus rodeo instructor who also teaches agricultural classes.”
- Moving slowly and fixing things – We should not rush headlong into using generative AI in classrooms – from the always excellent LSE Impact blog. It debunks the “but what about calculators?” false analogy (“It’s like comparing an abacus and a quantum computer”) and raises ethical questions about bias, plagiarism (OpenAI’s, not students’), and equity.
- Different field, but: TV writer David Simon weighs in on the Writers Guild of America strike – Simon wrote “The Wire”, so maybe he knows a thing or two about his craft, and certainly more than Ari Shapiro, who for some reason decides to play the role of naive AI-bro:

  > SHAPIRO: So would you ever agree to a contract that saw any role for AI at all?
  >
  > SIMON: No. I would not.
  >
  > SHAPIRO: Huh.
  >
  > SIMON: If that’s where this industry is going, it’s going to infantilize itself. We’re all going to be watching stuff we’ve watched before, only worse.

  (Substitute classroom materials for TV scripts in this exchange and think about whether we want our labor to be devalued and our teaching to stagnate like this.)
Just because a technology exists does not mean (a) we have to use it; (b) we have to use it in education; or (c) it will inevitably become part of daily life. Instead, we need to educate ourselves and our students about the risks of this technology and the benefits of doing the actual work.