The past few years have seen a steep rise in calls for censorship of free speech under the pretext of protecting us from misinformation. There is no viable alternative to letting citizens judge the truth for themselves, but better platforms could provide them with the tools to do so.
Calls for government-enforced censorship have come from both sides of the aisle and all parts of the globe. The EU started enforcing social media censorship with its Digital Services Act, both Brazil and the EU have threatened X for failing to suppress unfavorable political voices, a United States Supreme Court ruling allowed the government to push tech companies to take down misinformation, Mark Zuckerberg expressed regret for giving in to exactly such pressure from the White House during the pandemic, and Tim Walz claimed “there’s no guarantee to free speech on misinformation.”
Misinformation online is a real problem, but misinformation itself is not new, and it’s not clear that people are any more susceptible to falsehoods than they used to be. Twenty years ago, the Iraq War was justified by claims of weapons of mass destruction that are now widely discredited. During the “Satanic Panic” of the 1980s, an investigation of over 12,000 reports failed to substantiate a single satanic cult abusing children. In the 1950s, McCarthy launched a Red Scare by claiming there were hundreds of known communists in the State Department, with no evidence to support his charges. Not too long ago, we were hanging witches, a practice that persists to this day.
Much of what’s newly dangerous about misinformation today is not the spread of false information itself. It’s the ability of bad actors — empowered by AI and pretending to be ordinary human users — to deliberately promote misinformation. Hordes of coordinated fake or incentivized accounts create the illusion of consensus and make fringe ideas appear mainstream. Popular social media platforms today are closed ecosystems, making it difficult to assess the reputation of sources or the provenance of claims. We’re limited to the information the platforms choose to measure and expose — followers, likes, and “verified” status. As AI becomes increasingly capable, hyper-realistic synthetic media undermine our ability to trust any raw content, be it audio, video, images, screenshots, documents or whatever else we’d typically rely on as evidence for claims.
Politicians themselves are no more trustworthy than the information they seek to censor. Public trust in government is near historic lows. Many of the most aggressive censorship efforts have targeted information that later proved to be true, while government-backed narratives have repeatedly been discredited. The same intelligence apparatus proactively warning us about this election’s disinformation suppressed and mislabelled the Hunter Biden laptop story as “Russian disinformation” the last time around. During the pandemic, legitimate scientific debate about COVID’s origins and public health measures was silenced, while officials promoted claims about masks, transmission and vaccines they later had to reverse. Both Elon Musk’s “Twitter Files” and Mark Zuckerberg’s recent admissions of regret exposed the scale of government pressure on social platforms to suppress specific voices and viewpoints — often targeting legitimate speech rather than actual misinformation. Our leaders have proven themselves dangerously unfit to be the arbiters of truth.
The problem we face is a lack of trust. Citizens have lost faith in institutions, traditional media and politicians. Content platforms — Google, Facebook, YouTube, TikTok, X and more — are constantly accused of political bias in one direction or another. Even if such platforms managed to moderate content with complete impartiality, it wouldn’t matter — their opacity would always breed conspiracy and invite claims of bias and shadowbanning.
Fortunately, blockchains are trustless. Instead of requiring faith in centralized authorities, they provide open, verifiable systems that anyone can inspect. Every account has a transparent history and quantifiable reputation, every piece of content can be traced to its source, every edit is permanently recorded, and no central authority can be pressured to manipulate results or selectively enforce rules. In the run-up to the US election, it’s no coincidence that Polymarket — a blockchain-based, transparent and verifiable prediction market — emerged as a go-to election forecast while the electorate is losing faith in pollsters. Transparency and verifiability enable a shared ground of truth from which we can attempt to rebuild social trust.
Blockchain enables powerful new forms of verification. Tools like WorldCoin demonstrate how users can prove they’re unique humans, and similar technology can verify concrete attributes like residence, citizenship or professional credentials. Zero-knowledge proofs might allow us to verify these attributes without revealing the underlying personal data. Such technologies could reveal meaningful information about the individuals and crowds participating in online discourse — whether they’re human, where they’re from and what credentials they hold — while preserving users’ privacy.
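To make that concrete, here is a minimal Python sketch of selective disclosure built from salted hash commitments. It is a simplification, not a real zero-knowledge proof, and the attribute names and issuer are hypothetical; it only illustrates how one attribute can be verified while the rest stay private:

```python
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    """Salted hash commitment to a single attribute value."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

# Issuance: a credential issuer checks the user's documents once and
# certifies one commitment per attribute. (A real issuer would sign the
# commitments or anchor them on-chain, and a real system would use actual
# zero-knowledge proofs instead of reveal-the-salt disclosure.)
salts = {"citizenship": os.urandom(16), "profession": os.urandom(16)}
attributes = {"citizenship": "US", "profession": "MD"}
certified = {name: commit(value, salts[name]) for name, value in attributes.items()}

# Disclosure: the user reveals only "profession" and its salt. The verifier
# checks it against the certified commitment and learns nothing else.
value, salt = attributes["profession"], salts["profession"]
assert commit(value, salt) == certified["profession"]
print("Verified profession:", value)  # citizenship is never revealed
```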
For example, users seeking medical advice might filter to verified MDs, or ignore non-citizens in domestic policy debates. Wartime disinformation might be filtered out by limiting results to verified members of the armed forces involved. Politicians might focus their feeds and surveys on verified constituents to avoid being swayed by the illusion of outrage manufactured by well-organized fringes or foreign actors. AI-powered analysis could uncover authentic patterns across verifiable groups, revealing how perspectives vary between experts and the public, citizens and global observers, or any other meaningful segments.
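A minimal sketch of what such filtering could look like on the client side, in Python, with hypothetical attestation fields standing in for verified credentials:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestations:
    # Hypothetical flags; in practice each would be backed by a verifiable credential.
    is_unique_human: bool = False
    is_verified_md: bool = False
    is_citizen: bool = False

@dataclass(frozen=True)
class Post:
    author: str
    text: str
    attestations: Attestations

def filter_feed(posts, predicate):
    """Keep only posts whose author's attestations satisfy the predicate."""
    return [post for post in posts if predicate(post.attestations)]

feed = [
    Post("alice", "Try this miracle cure!", Attestations(is_unique_human=True)),
    Post("bob", "The trial data say otherwise.",
         Attestations(is_unique_human=True, is_verified_md=True, is_citizen=True)),
]

medical_advice = filter_feed(feed, lambda a: a.is_verified_md)  # MDs only
policy_debate = filter_feed(feed, lambda a: a.is_citizen)       # citizens only

for post in medical_advice:
    print(f"{post.author}: {post.text}")
```

Because the predicate is supplied by the reader rather than the platform, the same verified data can serve very different filtering policies.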
Cryptographic verification extends beyond blockchain transactions. The Content Authenticity Initiative — a coalition of over 200 members founded by Adobe, The New York Times and Twitter — is developing protocols that act like a digital notary for cameras and content creation. These protocols cryptographically sign digital content at the moment of capture, embedding secure metadata about who created it, what device captured it and how it’s been modified. This combination of cryptographic signatures and provenance metadata enables verifiable authenticity that anyone can inspect. A video, for example, might contain cryptographic proof that it was taken on a given user’s device, in a specific location and at a specific time.
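As an illustration of the signing-at-capture idea (a toy sketch, not the actual C2PA specification), the following Python example uses the widely available cryptography library to bind provenance metadata to a content hash:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A device keypair. In a real scheme the private key would live in the
# camera's secure hardware and the public key would carry a certificate.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_capture(content: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to the content hash and sign both."""
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": device_key.sign(message)}

def verify_capture(content: bytes, claim: dict) -> bool:
    """Anyone holding the device's public key can check the claim."""
    payload = claim["payload"]
    if hashlib.sha256(content).hexdigest() != payload["content_sha256"]:
        return False  # content was altered after signing
    message = json.dumps(payload, sort_keys=True).encode()
    try:
        device_pub.verify(claim["signature"], message)
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
claim = sign_capture(video, {"device": "cam-123", "time": "2024-11-01T12:00Z"})
print(verify_capture(video, claim))         # True
print(verify_capture(video + b"x", claim))  # False: tampered content fails
```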
Finally, open protocols enable third parties to build the tools users need to evaluate truth and control their online experience. Protocols like Farcaster already allow users to choose their preferred interfaces and moderation approaches. Third parties can build reputation systems, fact-checking services, content filters and analysis tools — all operating on the same verified data. Rather than being locked into black-box algorithms and centralized moderation, users get real tools to assess information and real choices in how they do so.
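As a toy example of such third-party tooling, the Python sketch below replays a hypothetical public event log into per-account reputation scores; the event names and weights are invented for illustration:

```python
from collections import defaultdict

# Hypothetical event weights. Because the log itself is open, a competing
# service could replay the same data with entirely different weights.
EVENT_WEIGHTS = {"accurate_claim": 2, "endorsed_by_verified": 1, "retracted_claim": -3}

def reputation(log):
    """Replay a public (account, event) log into reputation scores."""
    scores = defaultdict(int)
    for account, event in log:
        scores[account] += EVENT_WEIGHTS.get(event, 0)
    return dict(scores)

log = [
    ("alice", "accurate_claim"),
    ("alice", "endorsed_by_verified"),
    ("bob", "retracted_claim"),
]
print(reputation(log))  # {'alice': 3, 'bob': -3}
```

Users could subscribe to whichever scorer they trust, and anyone can audit a scorer by replaying the same public log.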
Trust is an increasingly scarce asset. As faith in our institutions erodes, as AI-generated content floods our feeds and as centralized platforms become increasingly suspect, users will demand verifiability and transparency from their content. New systems will be built on cryptographic proof rather than institutional authority — where content authenticity can be verified, participant identity established and a thriving ecosystem of third-party tooling and analysis supports our search for the truth. The technology for this trustless future already exists — adoption will follow necessity.
Disclaimer. This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.