Can We Finally Acknowledge That Facebook Is Bad?
DISLIKE IF YOU AGREE

Today, Facebook COO Sheryl Sandberg sat in front of the Senate Intelligence Committee and assured them that the website was doing everything in its power to prevent foreign interference in the upcoming midterm elections. But it's hard to classify Sandberg's testimony as a pivotal moment in the ongoing story of how information is shared online.

It's a saga that seemingly has no beginning and no end. Donald Trump won the 2016 presidential election, and somehow fake news on Facebook had a part in it? Then Trump-associated political research firm Cambridge Analytica improperly harvested Facebook data on some 50 million voters? And what about the Russians the FBI has accused of creating fake accounts to influence the 2016 election, something Facebook is only now rectifying? And Facebook banning Alex Jones: where does that fit in?

For a platform that hosts some 2 billion users, it's not surprising that it's hard to keep track of just how Facebook is messing up these days. But a few pieces published over the past week have examined the website with a depth and scale that, together, finally offer something approaching moral clarity on the nature of Facebook. Here's an honest attempt to piece them together.

The issue that Facebook faces is a fundamental one: if you hope to become — and arguably are very close to becoming! — a single global platform for communication, how do you prevent that platform from being exploited? Last week, Motherboard's Jason Koebler and Joseph Cox published an exclusive and exhaustive look into Facebook's moderation practices: "The Impossible Job: Inside Facebook's Struggle to Moderate Two Billion People." In it, Koebler and Cox present a refreshingly broad look at just how Facebook is failing — Facebook believes that it can successfully moderate its platform; experts, even some within Facebook itself, believe otherwise:

Facebook's constant churn of content moderation-related problems come in many different flavors: There are failures of policy, failures of messaging, and failures to predict the darkest impulses of human nature. Compromises are made to accommodate Facebook's business model. There are technological shortcomings, there are honest mistakes that are endlessly magnified and never forgotten, and there are also bad-faith attacks by sensationalist politicians and partisan media.

In the US, the implications of Facebook becoming what it wants to be — an all-encompassing platform for virtually all human communication — are still seen as just that: potential. Mark Zuckerberg will accept responsibility, and Sheryl Sandberg will go before Congress to assure lawmakers that the company is doing everything it can, but these are largely seen as preventative measures. America is aware that Facebook could present a problem for how information is distributed in the near future. The reality, however, is that it's already happening elsewhere.

On Tuesday, BuzzFeed's Davey Alba published a lengthy look into how Facebook has rotted notions of truth in the Philippines, and how President Rodrigo Duterte continues to exploit a nation that gets its information almost exclusively through Facebook.

If you want to know what happens to a country that has opened itself entirely to Facebook, look to the Philippines. What happened there — what continues to happen there — is both an origin story for the weaponization of social media and a peek at its dystopian future. It's a society where, increasingly, the truth no longer matters, propaganda is ubiquitous, and lives are wrecked and people die as a result — half a world away from the Silicon Valley engineers who'd promised to connect their world.

It, of course, doesn't begin and end in the Philippines. Another BuzzFeed report from last week ties Facebook to state-sponsored violence against the Muslim Rohingya in Myanmar, and a week before that, a major study found a direct link between Facebook usage and rates of violence against immigrants.

In the year 2018, it's no longer a question of if Facebook is influencing and promoting campaigns of hate, misinformation and violence, but rather where and to what extent. So what can be done?

In her testimony today, Sandberg promised that Facebook has recognized the problem and is doing everything it can to prevent interference in US elections. Lawmakers, meanwhile, are hinting at some form of regulation for the tech industry.

Government regulation is one method, but the experts Koebler spoke to for his piece on Facebook moderation plainly see moderation at that scale as an impossible task. The problem isn't that Facebook doesn't recognize the harms it's creating, nor that it isn't trying to correct them; it's that Facebook believes these problems can ultimately be solved.

Government regulation of how internet companies function doesn't escape this trap. Like Facebook's tireless attempts to come up with foolproof rules and workflows to stem the tide of misinformation, hate and violence, a constantly updated set of government-imposed rules and regulations only enshrines the notion that a handful of websites should house the entirety of human discourse. That notion is the root of the problem.

There is an alternative. On Tuesday, Tim Wu, an internet policy expert most famous for coining and championing "net neutrality," spoke with The Verge's Nilay Patel about a radical way to approach Facebook's moderation problem: break it up. That sure sounds a lot like government regulation, but Wu's argument places a lot of faith in the free market to correct itself. Broadly speaking, the tech industry is dominated by a handful of big companies — Facebook, Google, Apple, Twitter, Amazon — and potential competitors are merely trying to figure out how to fit in around these existing giants.

"Google and Facebook didn't start that way. There's a really profound difference in the kind of innovation you see when people are afraid of disturbing the mothership versus what you do when you sense a real opportunity. No one's willing to fund [profound innovation] because you're not going to displace Facebook or Google. So we go around the edges somewhere and try and find some cute little thing that doesn't bother anybody too much and get bought out."

Breaking up Facebook wouldn't solve the inherent difficulty of moderating a user base of 2 billion people, but it would create space for competitors to finally attempt to pull users away from the platform toward greener pastures.

And maybe there's no better time for trust-busting the tech giants, especially Facebook. A study released today by the Pew Research Center found that 42 percent of US Facebook users have taken extended breaks from the site, and that nearly three quarters have taken some action — leaving, adjusting their privacy settings or deleting the mobile app — to reduce the impact the website has on their lives.

Talk to anyone and you'll likely encounter a vague sense that Facebook is bad. This recent Pew study is perhaps the first real confirmation of that shift in public perception, and, in a media narrative that at times seems everywhere yet unknowable, an early sign of the beginning of the end.

Steve Rousseau is the Features Editor at Digg.
