
In defence of Facebook There are problems with Zuckerberg's monopolistic intentions, but breaking up his company could do more harm than good

Facebook founder and CEO Mark Zuckerberg. Credit: Saul Loeb / AFP / Getty


January 14, 2020   7 mins

I was at Harvard with Mark Zuckerberg while he created Facebook. We lived in neighbouring dorms, and probably passed each other on the sidewalk and ate in the same dining hall. But I never met him.

I did know Divya Narendra, who sued Zuckerberg for allegedly stealing his social network idea. And I’m pretty sure the Winklevoss twins, the other two partners who lost out to The Facebook, cut in front of me in the photo booth line at our 10-year college reunion. I also met the actor who played Mark Zuckerberg in The Social Network while he was shooting across the street from my grad school apartment. I came home one day to find him — Jesse Eisenberg — sitting on my front step. We exchanged awkward nods as I went inside.

So my connections with Zuckerberg are indirect. But even so, I feel sympathy for him and for the social media revolution he started. I was Facebook user number 2795, and have stuck with the site as it has ballooned and evolved over the last 15 years. It has amassed immense power and influence, which have led it to make clear errors with major consequences. But its success has also made it a target for anyone looking for a symbol to attack in the name of battling corporate greed and malfeasance.

Facebook and Zuckerberg have, as a result, come under increasing scrutiny. EU regulators, acting on various antitrust and data collection concerns, have launched a number of investigations. These have been stepped up by Margrethe Vestager, the EU’s “digital czar”, who has been given expanded powers to aggressively go after big tech companies.

But the greatest pressure against Facebook has come from its home country. After admitting to various violations of its users’ privacy, the company reached a key settlement with the United States Federal Trade Commission, which included a record $5 billion fine. Calls to dismantle the tech giants continually ring out — expect them to get louder over the course of this election year. The most prominent voice is that of Massachusetts Senator Elizabeth Warren, whose plan to “break up big tech” includes undoing Facebook’s acquisitions of the mega-popular photo-sharing site Instagram and messaging service WhatsApp.

Zuckerberg told employees in a closed-door meeting (later leaked to The Verge) that it would “suck” for Warren to win the presidency. Warren responded on Twitter.

With approximately 2.4 billion active users, or roughly one third of the world’s population, Facebook is a massively powerful entity. Just under 8% of the world’s web traffic goes through Facebook, second only to the 12% that goes through Google. It’s no wonder she’s targeting it. But while quite large, these figures are significantly smaller than the 70% of all web traffic that the Senator’s campaign page incorrectly attributed to the two websites.

It has also snapped up potential rivals, such as Instagram and WhatsApp, and is actively seeking ways to stop the growth of competitors such as the Chinese short-form video site TikTok (a virtual meme-creating machine that has become immensely popular around the world, especially among teenagers). Facebook even attempted to launch its own cryptocurrency, the Libra, sparking regulatory concerns across Europe and the US that have largely sidelined that project for now.

Monopolistic intentions aside, of particular concern is the impact Facebook had on the 2016 US presidential election, and what might happen this year. Although Donald Trump and his supporters are loath to admit it, Facebook’s sway was probably large enough to account for Trump’s razor-thin electoral victory over Hillary Clinton.

The New York Times recently published a story based on an internal post written by Facebook executive Andrew Bosworth that extensively addressed the impact the website had on Trump’s election (Bosworth later made his post public).

While admitting that “we were late on data security, misinformation, and foreign interference”, Bosworth argues that Russian Facebook ads and Cambridge Analytica’s data-mining had a negligible effect on the actual election results — a questionable conclusion, given that a shift of only 80,000 votes in key swing states would have swung the overall election. It has become clear, however, that the much larger impact Facebook had on the election was not the result of nefarious methods but simply of the fact that the Trump campaign used Facebook more effectively than the Democrats did.

In Bosworth’s own words:

“So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.”

In deciding on the response to Facebook’s role in the 2016 election, we must separate the fact that Facebook allowed unethical practices that undermined a free and fair vote, from the fact that Facebook’s influence contributed to a victory for Donald Trump. Faulting the company for doing damage to democracy is legitimate; faulting it for helping a candidate that you oppose win is not. The latter is the type of thinking that dictators use to justify internet censorship and social media blackouts, and should not factor into serious debates about how to best address the real concerns posed by Facebook or big tech.

Bosworth’s post says that, despite being a Hillary Clinton supporter in 2016 and having a strong personal incentive to change the rules of the website so that the Trump campaign would not be as successful in 2020, he and the other decision-makers at Facebook put principles ahead of politics and did not change the rules to engineer a preferable political outcome. Government regulators in the United States and Europe should exercise similar restraint.

A company as large and powerful as Facebook will always have enormous potential to do good or harm to entire nations, and even shape the world economically, politically and socially. And with great power, every Spider-Man fan knows, comes great responsibility. The problem with placing that responsibility in the hands of a corporation is that Zuckerberg, his executives and Facebook’s shareholders are in the business of making money, not serving society. The solution to this potential problem, however, is not attack or demonisation, but good old-fashioned regulation.

Zuckerberg himself has called for greater government regulation of Facebook and other internet companies, and while this is obviously a ploy to prevent some more radical solution from being imposed on the company, it’s also the right move. Our governments’ job is to serve the public interest. To the extent that they can adopt regulations that protect users and customers, while still allowing entrepreneurs and innovators like Zuckerberg and the people at Facebook to enjoy the fruits of their labour and the freedom to continue innovating, this should be the first option.

Those regulations may be difficult to manage, and may require extensive cooperation between governments. But there are existing models, such as the United States’ Federal Communications Commission, that can be adapted to the new challenges of digital media. Any new regulations must also have teeth in order to be effective. For example, the recent FTC settlement with Facebook not only fined the company billions but, more importantly, also set up a system of oversight that exposes Zuckerberg and other executives to personal liability should they further violate privacy regulations.

Oversight, though, is only as powerful as the political will and technical capacity to enforce it, which means that regulators must remain vigilant about holding tech giants to account. If the Zuckerbergs of the tech industry display an unyielding pattern of cheating or evading the rules, then more drastic measures should be taken. But treating the break-up of the tech giants as a foregone conclusion, rather than as one option on the table, seems to serve politics more than the public interest.

For example, undoing Facebook’s acquisitions of Instagram and WhatsApp might well decrease some of the company’s influence over users and over the social media market, but it could also prevent the company from implementing “end-to-end encryption” that could improve user privacy.

This benefit to users is not, of course, Facebook’s motivation here – the company sees platform integration as an opportunity to increase users’ time on its apps and sharpen its targeted-ad strategy. But if the company’s profit motive leads it to make changes that benefit consumers, then regulators (who are currently contemplating a move to block this integration, in case they later decide to break apart the companies) would better serve the public by guiding the process in a way that maximises consumer benefit and minimises potential harm. Limits, for example, could be placed on the extent to which Facebook can internally share user data across the platforms, or on what the company and its third-party clients are allowed to do with that data.

Again, such vigilance would be technically complicated, but the benefits to user privacy would be a real social good, particularly as internet privacy remains a growing concern in the face of governments (both democratic and authoritarian) that are becoming increasingly skilled at surveillance and cyber-policing of their populations.

Facebook’s detractors have blamed the site for being co-opted by authoritarian governments and other anti-democratic forces. Its experiment in providing free internet services in the Philippines most probably facilitated the winning campaign of Rodrigo Duterte. The platform has been used by a number of dictatorial regimes to spread pro-government propaganda and even hate speech in countries like Myanmar. All the more reason to break up Facebook and limit its activities, critics argue.

But any vacuum left in the digital universe is likely to be filled with alternatives that only exacerbate the anti-democratic aspects of social media. It was recently revealed, for instance, that ToTok, a popular new WhatsApp-like messaging application recently launched in the Middle East, is literally a spy tool for the United Arab Emirates government.

The government of Vietnam, meanwhile, has had a hand in launching hundreds of local social media sites to compete with Facebook, figuring that these alternatives would be easier to control. Most notably, China has developed several alternatives to the American-based social media sites – Renren, Weibo, WeChat and more – that are required to conform to strict censorship laws and sometimes actively participate in mass surveillance on behalf of the Chinese government.

There’s a reason why Facebook has faced long-term blocks in China and a number of the world’s other ultra-oppressive regimes, such as Iran and North Korea. Though the site can be manipulated, Facebook’s size and independence also grant it a level of autonomy and influence that makes it resistant to being fully co-opted by authoritarians, and threatening to dictators. Since 2011, when Facebook and other social media sites were instrumental in the Arab Spring and subsequent mass movements, the site has been temporarily banned during periods of protest in countries such as Egypt, Iran, Pakistan, Sri Lanka, Sudan, Syria, Vietnam and Zimbabwe. If authoritarians fear the power of a website this much, that should give us pause to think about the social benefit it provides, and incentivise our appointed and elected officials to find solutions that do not throw out the baby with the bathwater.

Facebook was created to satisfy a student demand for across-campus connection — a demand that was not being met by our university’s administration. After it launched in 2004, I waited a couple of weeks before signing up. At the time, I remembered Zuckerberg’s previous project, a “who’s hotter?” student photo comparison site called Facemash that had been shut down by the university for using students’ photos without their permission (and being kind of creepy). By the time I finally set up a Facebook account, I figured that it was not a scam or an illicit site, and if it was, at least plenty of people would be scammed alongside me.

Nearly 16 years later, both the appeal and the concerns I had back then over Facebook have been magnified to a global scale. And just as Harvard’s administration reined in the worst of undergraduate Mark Zuckerberg’s impulses, in turn setting the stage for him to launch his revolutionary social media site, it’s now up to our governments to responsibly monitor the ever-evolving landscape of Facebook and big tech.

They must do so in a way that eliminates the excesses of the tech giants but allows their social benefits — which range from confronting authoritarian regimes to simply connecting friends — to flourish. Let’s hope they’re up to the challenge.


Dr Christopher Rhodes is a Lecturer at Boston University’s College of General Studies.