After a chemical attack was reported in Douma, Syria in April 2018, many of us weren’t sure how much to trust the narrative we were hearing from the New York Times and US government officials. They reported that Assad’s government was behind the attack, while Assad and his allies in Russia and Iran claimed it was a hoax. Multiple narratives circulated that seemed credible, and I found myself feeling disoriented in my search for what really happened. It wasn’t important enough to me to spend hours investigating, so I shrugged it off.
If you’d asked me then or now, I’d defer to the official US view. I do this because of my identity as a good American citizen, because I generally trust New York Times reporting, and because the third-party, open-source intelligence service Bellingcat confirmed it.
But was I certain? No. Having witnessed government and media lies about other matters like WMDs, my level of confidence wasn’t high. And there was no other easy way to confirm what actually happened. I had to trust someone. I consciously chose the sources to trust. Was I a fool? Maybe. Or perhaps, like everyone, I’m just trying to navigate today’s epistemological challenges.
How do we know what is true?
When I try to convey what I mean by today’s epistemic challenges, I often point to the cereal box pictured below: How would you spell the name of the cereal?
Most of us defer to the way it is spelled on the cereal box, which we can all read: FROOT LOOPS. What if I told you that this is wrong? That it’s supposed to be FRUIT LOOPS with “fruit” spelled with an ‘i’. My proof: Look at the dictionary. Clearly the cereal is referring to fruit, so that makes sense, right?
If we agree on the same point of reference, it is easy to gain consensus on the cereal’s spelling. But if I insist on an entirely different but plausible point of reference, our shared sense of truth is shattered. We no longer have a shared epistemology. Look at the cereal box. No, look at the dictionary. No, look at the cereal box.
The way we make sense of the world is in flux. It’s awkward and messy.
If you’re like me, it’s been at least twenty years since you’ve used an encyclopedia. And you’ve probably used Wikipedia or something like it within the last few days. Either you went to the site directly or it showed up in search.
Social media is creating an epistemic shift. We are moving from an Encyclopedia Britannica world to a Wikipedia world. We are moving from expert-driven, analog modes of understanding to digitally driven, crowdsourced collective intelligence.
Many people call this a post-truth era. Maybe it’s more accurate to call it a post-trust era.
To navigate today’s epistemic crisis, I believe we should solve for trust before attempting to solve for truth. And we should do so while pushing for new systems of collective intelligence.
Our most scarce resource is trust.
Trust in media and political institutions is at an all-time low. Everyone knows this, but no one seems to take it seriously at a policy level. We’re in the midst of a full-fledged “trust crisis” and it is devastating for civil society.
Our media and political establishments are still living in an Encyclopedia Britannica world. They demand the public trust them because, well, they are the anointed experts. Yet the flattening of the information environment has exposed their flaws, mistakes, and lies. And past mistakes, like lies about WMDs that led to a disastrous and expensive war, have never been fully accounted for. The same experts who pushed WMDs are still given airtime.
Think about it: The entire country used to get the majority of its news from one or two trusted news anchors and three or four TV channels. Then we moved to an era of cable TV and 24-hour news channels. Then the Internet and social media came along, and now we have an endless menu of information options.
Gen Z depends more on YouTube, Facebook, and Instagram for news than on traditional sources. Only 8 percent read a national newspaper and 10 percent watch cable news.
Shattering trust is easy. Building trust is hard.
Sowing chaos and discord is easy compared to building trust and confidence. Our adversaries know this. They see how easy it is to exploit our trust crisis to cause chaos and destabilization.
Given today’s disorienting information environment, I can understand why it is tempting to surrender to chaos, or even worship it. You can see this among some bad actors. For example, here is Alexandr Dugin on the metaphysics of chaos:
The architecture of the Post-Modern world is completely fragmented, perverse and confused. It is a kind of the labyrinth without exit, folded and twisted as the Moebius trip. The Logos that was the guarantee of strictness of the order serves here to grant the curvature and crookedness, being used to preserve the impassability of the ontologically border with nothing from the eventual trespassers. So the only way to save us, to save humanity and culture from this snare is to make a step beyond the logocentric culture, addressing to the Chaos. We could not restore the Logos and the order addressing to them because they bear in themselves the reason of their eternal destruction. In other words, to save exclusive Logos we should make an appeal to the alternative inclusive instance that is Chaos…. Logos needs a savior for itself, it couldn’t save itself, it needs something opposite to itself to be restored in the critical situation of Post-Modernity.
I’m not a fan of Dugin. I find many of his views dangerous and kooky, but there’s something interesting in the relationship between chaos, complexity, and knowledge.
Can we use the obliteration of trust to lead us to new systems of truth?
Here is where I differ from Dugin: I believe we should embrace complexity rather than chaos. I believe we can use complexity to our advantage through collective intelligence.
We need ways to develop trust and build reputation between people and entities in a decentralized way.
There may or may not be absolute truths (I believe there are), but as a practical matter: Truth is what people trust is true or what they are forced to believe is true.
On a practical level, there’s only trust, force, or nihilism. I share Hayek’s view:
“Probably it is true enough that the great majority are rarely capable of thinking independently, that on most questions they accept views which they find ready-made, and that they will be equally content if born or coaxed into one set of beliefs or another. In any society freedom of thought will probably be of direct significance only for a small minority. But this does not mean that anyone is competent, or ought to have power, to select those to whom this freedom is to be reserved. It certainly does not justify the presumption of any group of people to claim the right to determine what people ought to think or believe.”
I don’t want force or nihilism. I want trust, and the only way I see to advance beyond our current systems for truth is to develop new, decentralized forms of collective intelligence. This requires solving for trust. And solving for trust requires solving for reputation, feedback, and market mechanisms.
It’s analogous to what Holochain is trying to accomplish, as it “inverts the ontological dependency between agents and data”:
There is no absolute, nor objective truth — as in true outside of the context of an agent claiming or believing or denying the validity of an expression. Objectivity could be approached by inter-subjectivity — when and where needed — not as a central aspect of the architecture like with blockchains. Every agent is free to act as they deem useful and is sovereign to make sense of the world (=everything everybody else is saying and doing) as it makes sense for their unique perspective and use-case. If that entails trusting other parties’ output in the process it is a conscious decision and not implied by the architecture.
Can’t say I understand all of this, but the logic seems similar.
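To make the agent-centric idea above concrete, here is a toy conceptual sketch in Python (my own illustration, not Holochain’s actual API or data model): each agent keeps its own append-only, hash-linked chain of claims, and deciding whether to accept another agent’s entry is that agent’s conscious choice rather than a property of the architecture.

```python
import hashlib
import json


class Agent:
    """Toy agent-centric store. Each agent holds its own append-only,
    hash-linked chain of entries and decides for itself which peers'
    entries to accept. A conceptual illustration only."""

    def __init__(self, name):
        self.name = name
        self.chain = []       # this agent's own history of claims
        self.trusted = set()  # peers this agent consciously trusts

    def record(self, data):
        """Append a new entry, linked to the previous one by hash."""
        prev = self.chain[-1]["hash"] if self.chain else ""
        entry = {"author": self.name, "data": data, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self.chain.append(entry)
        return entry

    def accepts(self, entry):
        # Trusting another party's output is a per-agent decision,
        # not a global consensus imposed by the system.
        return entry["author"] in self.trusted
```

In this sketch, if alice records “saw the event,” bob rejects the entry until he explicitly adds alice to his trusted set: trust is opt-in and local, which is the inversion the quote describes.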
We can chip away at this by seeking simple ways to build trust through collective intelligence.
All of this may seem galaxy-brained, but I believe there are ways to chip away at the problem of trust. We don’t need to boil the ocean to advance solutions.
We could brainstorm dozens of simple product ideas that would advance this. Most would fail, but some might stick. Here are three such ideas, offhand:
- A rating and review site for individual journalists (This would have all kinds of flaws, but one can imagine how it might serve to provide an element of trust, reputation, and feedback to the marketplace of information).
- A site where people can compare news coverage of events to actual, crowdsourced footage of an event (as happened organically during the 2016 election).
- A marketplace app where you can fund informational tasks to people all over the world. Pay the guy who lives in the town next to Douma to go take pictures, for example.
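To sketch how the first idea might work mechanically, here is a minimal, hypothetical reputation aggregator in Python: each rating of a journalist is weighted by the rater’s own reputation, a simplified one-step version of EigenTrust-style weighting. All names, scores, and the weighting scheme are assumptions for illustration, not a real product or dataset.

```python
from collections import defaultdict


def reputation_scores(ratings, rater_weight, default_weight=1.0):
    """Aggregate ratings of journalists, weighting each rating by the
    rater's own reputation (a one-step, EigenTrust-style weighting).

    ratings: iterable of (rater, journalist, score) tuples
    rater_weight: dict mapping rater -> reputation weight
    Returns a dict mapping journalist -> weighted average score.
    """
    totals = defaultdict(float)
    weights = defaultdict(float)
    for rater, journalist, score in ratings:
        w = rater_weight.get(rater, default_weight)
        totals[journalist] += w * score
        weights[journalist] += w
    return {j: totals[j] / weights[j] for j in totals}


# Hypothetical example: a high-reputation rater pulls the average up.
ratings = [("alice", "journalist_1", 5), ("bob", "journalist_1", 1)]
scores = reputation_scores(ratings, {"alice": 3.0, "bob": 1.0})
# journalist_1: (3.0*5 + 1.0*1) / (3.0 + 1.0) = 4.0
```

The design choice worth noting: because raters are themselves weighted, the system bootstraps exactly the trust-feedback loop described above, and gaming it requires first earning reputation.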
These are random ideas I haven’t fully thought through. My point is, there are ways to chip away at these problems. We can approach them practically, as product managers and entrepreneurs, not just in abstract philosophical terms.
Hayek was right.
Trust is the atomic unit of any collective intelligence. And right now, we really need both trust and collective intelligence.
As Hayek put it:
“It is because every individual knows so little and, in particular, because we rarely know which of us knows best that we trust the independent and competitive efforts of many to induce the emergence of what we shall want when we see it.”
It’s time we look at collective intelligence as a way to rebuild trust and advance human knowledge, accuracy, and understanding.