Why Let Corporations Decide What Is True or Important?
It’s not good news that Facebook is turning off fact checking — but the system was already fundamentally flawed. Commercial platforms make decisions based on business imperatives — what will make more money? It was never smart to trust them with important decisions like, “What content should I consume in my limited available time?” What if you could hire an independent organization that understands your values to annotate and filter what you see?
Platforms need to remove speech that is illegal (and deciding what is legal often isn’t easy). Further, governments need to continually evolve laws about what content is legal to adapt them to new technologies. For example, we all agree that child sexual abuse material produced using an actual child should be illegal. But what about AI-generated sexual imagery involving minors? I’m glad I don’t have to make those decisions. Policy makers do need to decide, and to continually revisit those decisions.
Once we have a set of content that is legal, who should decide what you see? Do you really trust a company trying to make money from advertising to do the right thing? That was an abstract idea in the past, but it’s increasingly concrete and painful as we watch people like Mark Zuckerberg and Elon Musk change the shape of our information landscape based on their personal whims and agendas.
We need independent fact-checking and filtering organizations, and I want to send everything through them before I see it. Until recently, Facebook hired independent fact-checking organizations; however, the corporation set the fact checkers’ parameters for what is acceptable. I want to hire the fact checker and give them my parameters. We need multiple, competing organizations (both for-profit and not-for-profit) that can tailor decisions to different worldviews and values. I envision an “information concierge” that can tell me the provenance of what I see and what is known about its credibility. Additionally, my new service needs to help me find the most relevant things to pay attention to, given my interests and priorities and the time I have available.
Lastly, we all need to be willing to pay for information concierges. These services are not going to magically appear without funding. We need inexpensive versions and free services funded by donations. I hope the free versions are high quality, or we’ll end up with quality information being a privilege of the rich. But we also need rich people and organizations willing to pay for information that isn’t garbage: paying both for content generation (journalism, research) and for filtering (finding the good stuff in a sewage stream of AI-generated nonsense and deliberate partisan manipulation).
Personal information feeds can make the problem of echo chambers worse, but I hope you could ask a well-designed service to keep your perspective broad: please show me “the best of the other side,” content different from my normal fare that is worth seeing and thinking about.
We often think of the problems of blocking offensive content and selecting desired content as separate, but to me they are inextricable: both come down to what I want to see and not see. Of course, you’ll need to teach your concierge what you like and don’t like, and that process is complex. After I ask my concierge to block hate speech, the next thing I want to ask is that it remove anything AI-generated, along with short videos about “person helps trapped animal.”
It is a step backwards that Facebook removed fact checking, but the underlying model was never fixable. I hope we are moving closer to having the motivation to fix the system for real.
(Edited for clarity — thanks to Phil Resnick for feedback.)