Generative AI is Unclean

Amy Bruckman
3 min read · Jun 28, 2024


“Interesting — the news just said if you’re dehydrated, the best thing to drink is skim milk,” I told my scientist friends, moderators of the subreddit r/science. They laughed at me. One said, “You mean big butter and their seven-man study?” Another followed up, “Apparently it is fairly weak evidence. One study with 11 people, one with 7, and one with 72.” I felt silly — I should know better than to believe things on TV news without verification.

A key moral duty of responsible people is to consider the source of every piece of information. Regrettably, you can't always trust even major "reputable" journalistic outlets, like the one that told me to drink skim milk. We lack adequate tools to alert people that a piece of information is questionable. I don't know how people who don't spend all day around scientist friends even begin to navigate it.

It’s increasingly hard to discern that advice like “drink milk in the heat” may be unsupported or false. Thoughtful people need to take steps to question sources, and remove the most unreliable ones from the information they take in. In our increasingly polluted information space, “consider your sources, and make sure not to believe crazy nonsense” is the prime directive of modern life.

Which brings me to the role of generative AI. Our key ethical responsibility as intelligent beings is to be critical of the source of each piece of information, and generative AI makes this harder. The output of ChatGPT, or the AI-generated summary that now precedes the list of links on some search engines, just gives you an answer with no provenance. With a list of links, you can at least try to decide which sources seem reputable. The AI-generated answer seems to be saying, "Don't worry your pretty little head about it, just accept this as true." It was already hard to decide what to believe, and generative AI makes it nearly impossible.

Being responsible about what you believe is an ethical issue — moral people are careful. The theory of “virtue ethics” argues that we all need to decide what sort of person we want to be. We need to choose which virtues we aspire to embody, and living up to our own standards is a lifelong journey. The theory of “virtue epistemology” says that being a responsible knower means embracing “epistemic virtues” — like checking your sources. We need to identify what practices are needed to be a responsible knower, and work to follow them as carefully as we can.

Our information space was already festering before the advent of generative AI. It's hard to recognize that advice like "drink milk when it's hot out" might be unsupported. It's hard to know what to believe about anything lately. Generative AI makes an already dire situation worse.

Generative AI is increasingly providing us with unsourced information, and I would like to move our information space to change in the other direction: to provide sources on everything. And sources for the sources. If we could redirect a tenth of the brainpower and labor directed towards generative AI to creating a “smart search engine” that names and rates all sources for credibility, then people might have a shot at being responsible knowers.

Some religions of the world declare certain foods to be “unclean” — eating them disturbs your spiritual status. Being a responsible knower is a moral duty, and using generative AI disturbs your ethical status. For that reason, I declare generative AI to be unclean.


Written by Amy Bruckman

I do research on social media, including online collaboration, social movements, and online moderation and harassment.
