The US Election and Social Media's Reckoning
The 2020 US election exposed the fundamental tension between platform scale and platform responsibility, and there are no easy answers
The 2020 US presidential election happened two days ago, and as I write this, several states are still counting ballots. The outcome is uncertain. What is not uncertain is that this election cycle has been the most significant stress test that social media platforms have ever faced, and the results of that test are deeply troubling.
I work in technology. I build platforms. I understand the engineering challenges of running systems at scale. And I am watching platforms whose internals I understand grapple with problems that are fundamentally not technical, using tools and frameworks designed for an entirely different class of challenge.
The Platform Problem
The core tension is straightforward to state and nearly impossible to resolve: social media platforms are simultaneously communications infrastructure and editorial systems, and they refuse to fully commit to being either one.
If they are infrastructure, like a telephone network or an email system, then they carry content without judgment. They do not label, fact-check, or suppress. They provide the pipes and let users decide what flows through them. This is the posture that Section 230's liability shield makes possible, and it worked reasonably well when social media was a place where people shared photos of their lunch.
If they are editorial systems, like newspapers or television networks, then they have a responsibility to curate, verify, and contextualize the content they distribute. They make judgments about what is true, what is misleading, and what is harmful. This requires editorial expertise, clear standards, and accountability.
The platforms have tried to occupy a middle ground, and the election has shown how untenable that position is. Twitter is appending labels to tweets from the President that contain disputed claims about election integrity. Facebook is adding context banners to posts about the election. YouTube is surfacing authoritative sources in search results. But these interventions are inconsistent, reactive, and easily circumvented.
The Technical Architecture of Misinformation
From a systems perspective, social media platforms are optimized for engagement. The recommendation algorithms, the notification systems, the news feed ranking: every technical decision is designed to maximize time-on-platform and interaction frequency. Content that generates strong emotional responses (outrage, fear, tribal identification) performs well by these metrics.
This is not a bug. It is the core product mechanic. And it creates a structural incentive to amplify exactly the kind of content that undermines informed democratic participation: sensationalized claims, conspiracy theories, and partisan outrage.
The algorithms do not understand truth. They understand engagement. A false claim that generates a thousand angry shares is, from the algorithm's perspective, more valuable than a carefully sourced analysis that generates a hundred thoughtful reads. The system is working as designed. It is just that the design optimizes for the wrong objective function.
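To make the "wrong objective function" concrete, here is a minimal sketch of engagement-weighted ranking. The signal names and weights are invented for illustration; production ranking systems are vastly more elaborate, but the shape of the objective is the point: nothing in it measures whether a post is true.

```python
# A minimal sketch of engagement-driven feed ranking. Signal names
# and weights are hypothetical, not any platform's actual system.

from dataclasses import dataclass

@dataclass
class PostSignals:
    p_click: float    # predicted probability the user clicks
    p_share: float    # predicted probability the user shares
    p_comment: float  # predicted probability the user comments
    p_dwell: float    # predicted probability of a long, attentive read

# Hypothetical weights: shares and comments are weighted heavily
# because they generate downstream impressions for other users.
WEIGHTS = {"p_click": 1.0, "p_share": 5.0, "p_comment": 4.0, "p_dwell": 2.0}

def rank_score(post: PostSignals) -> float:
    """Score a post for feed ranking. Note what is absent:
    nothing in this objective measures accuracy."""
    return (WEIGHTS["p_click"] * post.p_click
            + WEIGHTS["p_share"] * post.p_share
            + WEIGHTS["p_comment"] * post.p_comment
            + WEIGHTS["p_dwell"] * post.p_dwell)

# An outrage-bait post with high predicted shares outscores a careful
# analysis with high dwell time but little sharing.
outrage = PostSignals(p_click=0.30, p_share=0.20, p_comment=0.15, p_dwell=0.05)
analysis = PostSignals(p_click=0.10, p_share=0.02, p_comment=0.03, p_dwell=0.40)
assert rank_score(outrage) > rank_score(analysis)
```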
Changing this requires changing the fundamental business model, not just adding labels to problematic content. And no publicly traded company whose revenue depends on advertising is going to voluntarily reduce engagement, because engagement is what advertisers pay for.
The Content Moderation Challenge
I have been thinking about content moderation as an engineering problem, and the scale is staggering.
Facebook has roughly 2.7 billion monthly active users. Twitter has around 330 million. YouTube receives roughly 500 hours of video uploads every minute. The volume of content flowing through these systems is beyond the capacity of any human review team to monitor, even with tens of thousands of moderators.
This means content moderation is, by necessity, primarily automated. Machine learning classifiers scan text, images, and video for policy violations. The classifiers are imperfect. They miss context, satire, nuance, and cultural specificity. They flag content incorrectly. They fail at novel forms of deception.
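To see why context is so hard, consider a toy bag-of-words classifier (a hypothetical sketch using scikit-learn, not any platform's actual stack). A post spreading a false claim and a post debunking it share most of their surface features, so the model struggles to tell them apart:

```python
# A toy illustration of automated policy classification. The training
# examples are invented; real systems use large neural models, but the
# failure mode shown here carries over.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labeled examples: 1 = policy violation, 0 = benign.
train_texts = [
    "the election was stolen, ballots were shredded",     # violating claim
    "officials confirm ballots are still being counted",  # benign
    "destroy the ballots before they count them",         # violating claim
    "here is how to check your voter registration",       # benign
]
train_labels = [1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
clf.fit(train_texts, train_labels)

# The classifier only sees surface features. A violation, a satire of
# it, and a debunking of it reuse the same words, so all three look
# alike to the model.
print(clf.predict_proba([
    "BREAKING: they shredded the ballots!!!",         # spreading the claim
    'claims that "ballots were shredded" are false',  # debunking the claim
])[:, 1])
```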
The classifiers also face an adversarial problem: users who want to spread misinformation actively adapt to evade detection. They use coded language, misspellings, images instead of text, private groups, and ephemeral content. It is a cat-and-mouse game where the mouse is motivated, creative, and distributed across millions of users.
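Here is a sketch of one round of that game, with an illustrative and deliberately incomplete substitution table: fold Unicode look-alikes to ASCII and undo simple character swaps before classification. Each such fix buys a little time before adversaries move to images, private groups, and coded phrases that no character-level normalization can reach.

```python
# A sketch of normalizing common evasion tactics before classification.
# The substitution table is illustrative, not exhaustive.

import unicodedata

LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    # Fold Unicode look-alikes (accented or decorated letters) toward
    # ASCII, then undo simple leet-speak substitutions.
    folded = unicodedata.normalize("NFKD", text)
    ascii_only = folded.encode("ascii", "ignore").decode("ascii")
    return ascii_only.lower().translate(LEET)

print(normalize("B4LL0TS were ŠHR3DD3D"))  # -> "ballots were shredded"
```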
The engineering parallels to security are instructive. In security, we learned decades ago that perimeter defense is insufficient against motivated attackers. You need defense in depth, monitoring, incident response, and an acceptance that some attacks will succeed. Content moderation is arriving at the same conclusion, but the "attacks" are not exploitation of technical vulnerabilities. They are exploitation of human psychology at scale.
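A defense-in-depth moderation pipeline might look like the following sketch, with invented layer names and thresholds: automated removal only for the highest-confidence violations, human review for the ambiguous middle, and rate limiting as incident response when something suddenly goes viral.

```python
# A minimal sketch of layered moderation, assuming a risk score from an
# upstream classifier. Thresholds and actions are hypothetical; the
# point is that no single layer is trusted to be perfect.

def moderate(risk_score: float, shares_per_hour: int) -> str:
    """Route a post through successive defensive layers."""
    if risk_score > 0.95:
        return "remove"                  # perimeter: high-confidence automation
    if risk_score > 0.60:
        return "queue_for_human_review"  # monitoring: humans take ambiguous cases
    if shares_per_hour > 10_000:
        return "rate_limit_and_review"   # incident response: virality is a signal
    return "allow"                       # acceptance: some bad content gets through

print(moderate(0.72, shares_per_hour=150))     # -> queue_for_human_review
print(moderate(0.20, shares_per_hour=50_000))  # -> rate_limit_and_review
```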
The Epistemic Crisis
The deeper problem is not the platforms themselves but what they have done to the shared information environment. We no longer share a common baseline of facts. The fragmentation of media, accelerated by social media's filter bubbles and recommendation engines, has created parallel information ecosystems where people consume entirely different versions of reality.
This is not a problem that Twitter labels or Facebook context banners can solve. It is a structural consequence of how digital information ecosystems work. When every person's feed is algorithmically personalized to maximize their engagement, and engagement correlates with emotional intensity rather than accuracy, the result is epistemic fragmentation at a civilizational scale.
I do not have a solution to this. I am not sure anyone does. But I think it is worth being honest about the magnitude of the problem. We built global communication platforms without thinking carefully about what happens when global communication platforms are used to undermine the shared epistemic foundations that democratic governance requires.
What the Platforms Are Doing
To their credit, the major platforms prepared for this election with more rigor than any previous cycle.
Twitter implemented a policy of labeling tweets with premature claims of election victory and tweets with unsubstantiated claims about election integrity. They reduced the virality of labeled tweets by disabling one-click retweets and requiring users to add their own commentary. They applied these policies to tweets from the President.
Facebook imposed a moratorium on new political ads in the final week before the election and is showing notifications to users who interact with election-related content, directing them to authoritative information sources. They also committed to reducing the distribution of content that their fact-checking partners flag as false.
YouTube updated its recommendation algorithm to prioritize authoritative news sources in election-related searches and added information panels with official election results from the Associated Press.
These are meaningful steps. They are also insufficient. The interventions happen downstream of the core dynamic: the algorithmic amplification of emotionally charged content. Labeling a tweet after it has been seen by millions of people is damage mitigation, not prevention.
Where This Goes
I think we are at the beginning of a reckoning that will take years to play out. Several trajectories seem plausible:
Regulatory intervention is coming, regardless of who wins this election. Both parties have grievances with the platforms, albeit different ones. Some form of platform regulation, whether it involves modifications to Section 230, antitrust action, or new transparency requirements for algorithmic recommendation, seems inevitable.
Platform fragmentation is already happening. Users who feel that mainstream platforms are too restrictive are migrating to alternative platforms with weaker moderation. This fragments the information ecosystem further but may reduce the concentration of power in a handful of companies.
Business model evolution may eventually happen. Subscription-based social media, where the user is the customer rather than the product, would change the incentive structure around content amplification. But the advertising model is enormously profitable, and the network effects that keep users on existing platforms are powerful.
An Engineer's Discomfort
I build systems for a living. I believe in the power of technology to solve problems. But the problems we are seeing with social media and democratic governance are not engineering problems. They are social, political, and philosophical problems that manifest through technology.
No algorithm will solve the tension between free expression and preventing the spread of harmful disinformation. No content moderation system will scale to the volume of human communication while respecting context and nuance. No platform design will simultaneously maximize engagement and promote civic health.
These are tradeoffs, not optimizations. And tradeoffs require values-based decisions that are outside the scope of what engineering teams are equipped or authorized to make.
The election will be decided, eventually. The ballot counting will finish. A president will be inaugurated. But the information environment that shaped this election will persist, and the platforms that built it will continue to operate on the same fundamental logic that produced the chaos we are witnessing.
We built these systems. We need to reckon with what they have become.