What Buffalo’s Shooting Video Teaches Us About Online ‘Free Speech’
Last weekend, Twitter and other major platforms once again scrambled to remove posts and videos that were legal under the First Amendment but violated their policies. In this case, the videos showed a gunman, allegedly an 18-year-old white supremacist, massacring 10 people at a grocery store in a predominantly Black neighborhood of Buffalo. And the posts included the suspect's racist screed, which he apparently intended the massacre to publicize.
The Buffalo shooting video highlights the issues at stake in what too often looks like an abstract debate about online discourse and freedom of expression.
Only 22 people watched the Buffalo shooting live. Millions have seen it since.
Musk’s past statements would seem to imply that, if he were in charge, Twitter would have let the videos and the manifesto circulate, at least in the United States. After all, hate speech and depictions of graphic violence are not against the law here.
But Musk has remained silent on the matter, even as he continued to tweet prolifically about other Twitter-related topics. Asked by The Washington Post via email whether he thought Twitter was wrong to remove the videos of the shooting, he did not respond.
The role of social media in the Buffalo shooting was not trivial. While the attack took place in the physical world, it was planned online, influenced by ideas that spread online, broadcast live online, and motivated in part by the shooter’s apparent belief that his words and his actions would eventually be shared by millions of people online. In this regard, it was modeled after the 2019 massacre in Christchurch, New Zealand, which the perpetrator streamed live on Facebook.
In Buffalo, the shooter apparently chose to livestream his attack on Twitch rather than Facebook, in part because he knew Facebook had responded to Christchurch by improving its ability to quickly detect and stop violent livestreams. It turns out that Twitch also moved quickly to remove his video, but not fast enough to stop someone from saving it, uploading it elsewhere, and then sharing links to it on Facebook, Twitter, and many other sites. (Twitch is owned by Amazon, whose founder Jeff Bezos owns The Washington Post.)
Video of the Buffalo shooting and the suspect’s writings remained viewable online despite efforts by Facebook, Twitter and other major platforms to remove them, in part thanks to smaller niche sites with looser content moderation. But those efforts have dramatically reduced the number of people encountering graphic violence and bigoted propaganda in their feeds.
In their early years, Facebook, YouTube, and especially Twitter idealistically portrayed themselves as guardians of free speech around the world. This idealism seemed to dovetail nicely with their business model, allowing a relatively small cadre of engineers and designers to build systems capable of hosting large amounts of content without also requiring large numbers of humans to review what users were posting.
Over the years, however, Facebook, Twitter, YouTube and others have learned the hard way that without rules or enforcement, their products would not only harbor the worst in humanity but elevate it systematically, thanks to algorithms and human social networks, a dynamic that tends to prioritize the most shocking and eye-catching ideas and images.
The danger is not just moral: without moderation, users’ feeds would constantly expose them to messages they find offensive, insulting or just plain rude, and many would eventually leave. So the need for tech companies to dedicate both artificial intelligence software and teams of human reviewers to detecting and removing everything from pornography to scams to graphic violence has become obvious.
In the view of Musk and a growing number of conservatives, however, the platforms have gone too far. They see a liberal bias in both the rules tech companies have established and how they enforce them. While these critics tend to support certain categories of content moderation, including efforts to prevent spam and bots, they resent those that appear to have a political dimension, such as policies against misinformation and hate speech.
One response has been for conservatives to create their own social networks. Upstarts such as Rumble, Parler, Gab and former President Donald Trump’s Truth Social have emerged as alternatives to the big platforms, promising “free speech” to users. In practice, all soon found the lack of moderation to be disastrous, and many adopted rules much like the ones they set out to rebel against. So far, none has displaced the mainstream platforms.
Today, conservatives and libertarians are pushing to force their visions of unfettered speech on established platforms, whether by regulating them or, in Musk’s case, trying to buy them outright.
A law that took effect in Texas last week prohibits the biggest social platforms from discriminating based on a user’s “point of view”, and other states are considering similar laws. The Texas attorney general’s office did not respond to a request for comment on whether Texans who posted the Buffalo shooter’s propaganda could sue tech companies under the law for removing it.
Meanwhile, Musk said he believes “freedom of speech” on social media is “what is within the law” and that moderating legal speech would be “against the will of the people”.
Of course, the law is different in each country. In Russia, complying with the law would amount to prohibiting users from calling the war in Ukraine a war — a policy far more restrictive than Twitter’s current stance. In fact, Twitter has been widely blocked in Russia for refusing to comply with government censorship demands.
In the United States, however, the First Amendment protects a wide range of speech from government censorship. Constitutional scholars say this not only includes many types of spam, pornography and disinformation, but also hate speech and depictions of graphic violence. Which means it’s almost certainly legal to upload the grisly video of the Buffalo shooter, and likely his violently racist manifesto as well, depending on the context.
Whether one should post it is a different matter: “an ethical question, not a legal one,” said Jameel Jaffer, director of Columbia University’s Knight First Amendment Institute. The same goes for whether the platforms, which are private corporations with their own First Amendment rights, should allow it on their services.
For tech companies, one ethical argument against circulating the shooter’s video and manifesto is that many users will no doubt find them upsetting or offensive. A stronger argument might be that, as the shooter himself acknowledged, the ability to spread his message far and wide was part of the motivation for the attack in the first place. So platforms that host it risk not only magnifying the damage done in Buffalo, but also tacitly inciting the next mass shooter.
It’s unclear whether Musk himself has fully considered the implications of his own philosophy. He seemed adamant in his opinion that Twitter should allow most speech unless it violates the law. But soon after, criticizing the site’s permanent suspension of Trump, Musk said “wrong or bad” tweets should be “removed or made invisible.” He did not specify how this would square with his free speech absolutism.
The reality is that Big Tech companies, liberals, Musk, and conservatives all generally support free speech. They just disagree on where to draw the lines of what is acceptable on large public forums.