A Paradigm Shift: From Generative AI to Smart Research Tools

Amy Bruckman
May 29, 2024


“Why is his dose the same as mine?” I asked the pediatrician. My seven-year-old son had just been prescribed his first asthma inhaler, and I was surprised it was the same medication, strength, and number of puffs as mine. He’s so tiny — how could it be the same?

The doctor reassured me the dose was right. He was running late and needed to go. I persisted — yes, but why is it the same? He paused for a second and said, “His lungs are smaller, so the amount of medication he gets is less.” He rushed to the next patient. I grinned — that’s such a cool answer! The dose is always just right — self-adjusting by lung volume!

A critical duty of thoughtful people is to ask why, and how do you know that? The more we are surrounded by streams of information of dubious quality, the more we need to ask for justification — how do you know? The theory of “virtue ethics” says you should choose values that are important to you and strive to embody them. Extending that to how we deal with information, the theory of “virtue epistemology” says we should learn good practices surrounding knowledge and strive to follow them. Insisting on reliable sources is a central epistemic virtue.

Every time an AI gives me an answer with no explanation, I think of that doctor — “the dose is right,” he said, starting to leave the room. Can you please slow down and explain? I can almost hear the AI saying, “Don’t you worry your pretty little head about it, ma’am — this is right.” I don’t want to take your word for it — I want to know how you know. I want references, and assessments of the quality of those references. That is a critical problem with generative AI today.

AI products like OpenAI’s ChatGPT and Google’s Search Generative Experience (SGE) hand you answers without support. In the kaleidoscopic storm of information and misinformation we dwell in today, being a critical consumer of information is not only prudent — it’s morally required.

That’s not to say that AI isn’t useful — it could be. But I want to suggest a paradigm shift. What responsible people need is not a patronizing “this is the answer” AI expert, but rather the world’s most amazing research tool. Here’s my challenge: create a “smart” research tool that summarizes what we know and what we don’t, with visible markers of level of confidence for each point. With references, and references for the references. And credibility markers for each source. And user testing to learn which designs lead people to choose better-supported sources.
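
To make those requirements a little more concrete, here is one minimal sketch of how the underlying data might be structured. This is purely illustrative, not a description of any existing system: the Claim, Source, and Confidence names, the credibility labels, and the example values are all placeholders I made up.

```python
from dataclasses import dataclass, field
from enum import Enum


class Confidence(Enum):
    # Coarse, visible marker of how well-supported a point is.
    WELL_SUPPORTED = "well supported"
    CONTESTED = "contested"
    OPEN_QUESTION = "open question"


@dataclass
class Source:
    # A citable source plus a credibility marker the interface can surface.
    title: str
    url: str
    credibility: str  # e.g. "peer-reviewed", "preprint", "news", "blog"
    references: list["Source"] = field(default_factory=list)  # references for the references


@dataclass
class Claim:
    # One point in a research summary: what we know (or don't), and why we think so.
    statement: str
    confidence: Confidence
    sources: list[Source] = field(default_factory=list)


# Hypothetical usage: a summary is a list of claims, each carrying its own support.
summary = [
    Claim(
        statement="(a hypothetical claim the tool has synthesized)",
        confidence=Confidence.CONTESTED,
        sources=[
            Source(
                title="(a hypothetical peer-reviewed article)",
                url="https://example.org/article",
                credibility="peer-reviewed",
            )
        ],
    )
]
```

The point of the sketch is simply that every claim carries its confidence and its sources with it, so the interface can always answer “how do you know?”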

This is a hard technical problem. Current large language models can’t easily be adapted to do it. On the other hand, no one is even really trying. Can you imagine if all that research and development labor, or even just a slice of it, went to developing something useful to epistemically responsible people?

What would an AI-powered “smart” research tool look like? Can you envision how we could make it work, technically? Leave me a comment!

Postscript: What I’m suggesting is related to the concept of “explainable AI,” but not the same. Explainable AI asks an AI to provide an answer and be able to show its work. A smart research tool assumes the human is undertaking an investigation and provides support for that process.

Written by Amy Bruckman

I do research on social media, including online collaboration, social movements, and online moderation and harassment.
