Watching American politicians and judges grapple with the social and political dangers of the Internet is a bit like watching my cats chase a laser pointer. They are very enthusiastic about the hunt and follow every zig and zag with ostentatious ferocity, but whenever they approach the target it becomes painfully clear that they have misunderstood the essence of the thing they are chasing.
The left and right disagree about what, exactly, the dangers of social media are: the left generally argues that companies like Facebook and Twitter are not doing enough to stamp out misinformation, extremism and hatred on their platforms, while the right insists that tech companies go so far overboard in their moderation decisions that they suppress conservative political views.
Both sides have put forward — and in some cases passed — state and federal rules that would force companies to change their practices. But the guiding ideas of lawmakers on both sides are variously unworkable, unconstitutional, irrelevant and unserious, and many betray a profound ignorance of how the Internet actually works. To see why, look no further than the ugly digital trail left by the man accused of killing 10 people in a racist mass shooting in Buffalo, New York, last week. As I’ll explain, the suspect’s online activity, and what platforms did about it, complicates both Republican and Democratic theories about fixing the Internet.
Let’s start with the Republicans. Last year, the governors of Florida and Texas signed laws prohibiting social media companies from “censoring” users, and Republican lawmakers in several other states are pushing similar ideas. The Texas and Florida laws were put on hold by federal district judges who said they were likely unconstitutional, but this month a federal appeals court reinstated the Texas law without explanation; tech industry trade associations have asked the Supreme Court to overturn that decision.
I could spend this entire column cataloging all the ways these laws are terrible. As the district courts ruled, they appear to violate the tech companies’ own First Amendment right to host, or not host, certain content. The laws could invite a flood of frivolous lawsuits from people who feel they have been treated unfairly by tech companies. Both laws are arbitrary, applying only to sites that reach a certain threshold of users – 100 million in Florida, 50 million in Texas. Florida’s law even includes an exemption for companies that operate a theme park. (The law was signed when Florida Republicans were on friendly terms with Disney; now they are trying to undo the Disney exemption.)
And the laws are dangerously broad. While Florida and Texas governors Ron DeSantis and Greg Abbott say they want to protect conservative views from liberal tech executives, the text of the laws seems to prohibit tech companies from removing or downranking all kinds of content that has nothing to do with electoral politics. Groups that oppose the laws say that under them, tech companies could not remove posts promoting suicide, animal cruelty, non-obscene nudity and other content that most users simply don’t want to see when they open Facebook in the morning.
Hate speech, too. The Buffalo suspect reportedly used Google Docs to write a lengthy manifesto promoting his ideology and explaining his rationale for the attack. Over a period of months, he posted thousands of comments to a Discord log documenting his preparations for the shooting.
The Texas law allows tech companies to remove content that “directly incites criminal activity or consists of specific threats of violence.” Over the past few days, I’ve been combing through much of the shooter’s digital record, and it’s clear that some of it met this threshold – but much of it was ugly without directly inciting violence. Under the Republican laws, would platforms have the right to remove, or refuse to amplify, those ramblings? As unhinged as the “great replacement” theory is, could Facebook delete posts about it, or would it be required to treat it like any other political position?
In recent days, Democrats could have hammered home the message that Republican legislation would force hate speech to spread online. Instead, some treated the Buffalo attack as an opening to push their own misguided attempts to legislate online speech.
Rep. Debbie Wasserman Schultz, D-Fla., called for a review of Section 230 of the Communications Decency Act, the law that shields technology companies from liability for content posted by their users. It’s an example of what I mean by unserious and irrelevant: As I’ve argued before, Republicans and Democrats alike invoke repealing this law as if it were a panacea for fixing the Internet.
It isn’t, partly because many moderation decisions made by tech companies are protected by the First Amendment. And many legal scholars say a repeal of Section 230 would have serious chilling effects, scaring platforms into removing any remotely controversial content just to avoid lawsuits.
Meanwhile, Senator Tim Kaine, D-Va., tweeted blaming Big Tech for the spread of racist ideas such as the great replacement theory.
“Who has filled his head with this poison?” asked Kaine.
Big Tech wouldn’t be my first answer. In fact, it is unclear what role “Big Tech” played in the shooter’s radicalization; his manifesto suggests he is a product of smaller tech platforms, not big ones – specifically the freewheeling message board 4chan, where he says he picked up his racist ideology and from which he lifted many of his memes. And why should Big Tech take the blame when the most popular host on cable news and several Republican lawmakers openly flirt with great replacement ideology?
While Democrats and Republicans have opposing goals for moderating online content — one side wants more rules, the other wants fewer — both sides are advocating the same basic mechanism for fixing the Internet: They want to empower judges, government agencies and other officials to decide what tech companies and their users can and cannot do online.
There are far less drastic legislative ideas to try first – for example, requiring more transparency from social media companies so we can better understand how, and to what extent, they shape the culture. Legislation introduced last year by Sens. Chris Coons, Amy Klobuchar and Rob Portman would require social networks to provide certain researchers with data that could shed light on the networks’ content decisions and their effects. That could allow outside researchers to determine, for example, whether platforms apply their rules consistently across the political spectrum, or how the companies’ algorithms promote or downplay disinformation, extremist content and other toxic material online.
Of course, I don’t like that a handful of tech companies have so much control over what happens in society. But handing states or the federal government control over online discourse is a far worse fate, and one we should try hard to avoid.