Welcome to the Prose Slushie Machine

Amy Bruckman
Dec 18, 2022

Thoughts on ChatGPT

The democratization of access to calculators must have been scary for teachers. “Are my students cheating? I guess I’ll have to do more in-class testing to make sure they know what they’re doing. Do I really have to teach long division with decimals anymore? Maybe it’s OK if the students understand the underlying principles, and know when to use a calculator? How do I know that they understand the base principles well enough, or if they’re just blindly relying on the machine?”

In the end, the academy adapted. Today the SAT math exam has two parts: with calculator and without. Both parts are significant — what you can do with the tool is as important as what you can do without it. In the not-distant future, writing exams will have to be the same. What you can do with help from generative AI is as important as what you can do without it.

There’s an important difference, though. Answers to math problems are provably right or wrong. Errors can be detected and corrected, and the final answer should be the same whether generated by human or machine. In writing, on the other hand, the quality of the answer is open to interpretation. Judging what answer is superior (for what audience) is typically subjective.

AI writing is built on the content that it ingests. The output is only as good as its training data. If our training data has biased ideas, so will the output. If you ask ChatGPT about a controversial issue, the results are typically a bland exercise in “both-sidesism.” It reflects what we already believe, and therefore reinforces commonly held views.

We can view the worldwide swell of AI-generated prose as a giant slushie machine that mixes the same base content and squirts out a sample of common and accepted opinion. A human contribution can have its own texture and flavor. A human contribution can also transcend both-sidesism and make a compelling argument for something important.

As a reader, I’m sad about how many low-quality slushies I am personally going to end up ingesting. They have existed for years — for example, those mad-libs-like articles about stock prices and sports scores. They are becoming more pervasive and harder to avoid.

As a teacher, I know we eventually need to offer “with and without AI” writing instruction. But I don’t yet know how. The first homework assignment for my computing and society class is to compare the failures of the Boeing 737 Max to Nancy Leveson’s analysis of failures of the Therac-25 radiation therapy machine. It’s a nice assignment because the parallels are striking and the comparison brings out the significance of the issues. On its first try at the assignment, ChatGPT got a B+. My pedagogical goal for this homework is for the students to think about failures of technology in their sociotechnical context, and also to help them practice writing. Will the assignment still meet either of those goals? What’s my new approach? If you have an answer, leave me a comment.


Amy Bruckman

I do research on social media, including online collaboration, social movements, and online moderation and harassment.