Bosworth’s post says that, despite being a Hillary Clinton supporter in 2016 and having a strong personal incentive to change the rules of the website so that the Trump campaign would not be as successful in 2020, he and the other decision-makers at Facebook put principles ahead of politics and did not change the rules to try to achieve a preferable political outcome. Government regulators in the United States and Europe should exercise similar restraint.
A company as large and powerful as Facebook will always have enormous potential to do good or harm to entire nations, and even shape the world economically, politically and socially. And with great power, every Spider-Man fan knows, comes great responsibility. The problem with placing that responsibility in the hands of a corporation is that Zuckerberg, his executives and Facebook’s shareholders are in the business of making money, not serving society. The solution to this potential problem, however, is not attack or demonisation, but good old-fashioned regulation.
Zuckerberg himself has called for greater government regulation of Facebook and other internet companies, and while this is obviously a ploy to prevent some more radical solution from being imposed on the company, it’s also the right move. Our governments’ job is to serve the public interest. To the extent that governments can adopt regulations that protect and serve users and customers, while still allowing entrepreneurs and innovators like Zuckerberg and the people at Facebook both to enjoy the fruits of their labour and to keep innovating, this should be the first option.
Those regulations may be difficult to manage, and may require extensive cooperation between governments. But there are existing models, such as the United States’ Federal Communications Commission, that can be adapted to the new challenges of digital media. Any new regulations must also have teeth in order to be effective. For example, the recent FTC settlement with Facebook not only fined the company billions but, more importantly, also set up a system of oversight that exposes Zuckerberg and other executives to personal liability should they further violate privacy regulations.
Now, oversight is only as powerful as the political will and technical capacity to enforce it, which means that regulators must remain vigilant about holding tech giants to account. If the Zuckerbergs of the tech industry display an unyielding pattern of cheating or evading the rules, then more drastic measures should be taken. But treating the breakup of the tech giants as a foregone conclusion, rather than as one option on the table, seems to serve politics more than the public interest.
For example, undoing Facebook’s acquisitions of Instagram and WhatsApp might well decrease some of the company’s influence over users and over the social media market, but it could also prevent the company from implementing “end-to-end encryption” that could improve user privacy.
This benefit to users is not, of course, Facebook’s motivation here – the company sees this platform integration as an opportunity to increase users’ time on its apps and sharpen its targeted ad strategy. But if the company’s profit motive leads it to make changes that benefit consumers, then regulators (who are currently contemplating a move to block this integration, in case they later decide to break apart the companies) would better serve the public by guiding the process to maximise consumer benefit and minimise potential harm. Limits, for example, could be placed on the extent to which Facebook may internally share user data across the platforms, or on what the company and its third-party clients are allowed to do with that data.
Again, such vigilance would be technically complicated, but the gains to user privacy would be a real social good, particularly as internet privacy remains a growing concern in the face of governments (both democratic and authoritarian) that are becoming increasingly skilled at surveillance and cyber-policing of their populations.
Facebook’s detractors have blamed the site for being coopted by authoritarian governments and other anti-democratic forces. Its experiment in providing free internet services in the Philippines most probably facilitated the winning campaign of Rodrigo Duterte. The platform has also been used by a number of dictatorial regimes to spread pro-government propaganda and even hate speech in countries like Myanmar. All the more reason to break up Facebook and limit its activities, critics argue.
But any vacuum left in the digital universe is likely to be filled with alternatives that only exacerbate the anti-democratic aspects of social media. It was recently revealed, for instance, that ToTok, a popular new WhatsApp-like messaging application recently launched in the Middle East, is literally a spy tool for the United Arab Emirates government.
The government of Vietnam, meanwhile, has had a hand in launching hundreds of local social media sites to compete with Facebook, figuring that these alternatives would be easier to control. Most notably, China has developed several alternatives to the American-based social media sites – Renren, Weibo, WeChat and more – that are required to conform to strict censorship laws and sometimes actively participate in mass surveillance on behalf of the Chinese government.
There’s a reason why Facebook has faced long-term blocks in China and a number of the world’s other ultra-oppressive regimes, such as Iran and North Korea. Though the site can be manipulated, Facebook’s size and independence also grant it a level of autonomy and influence that makes it resistant to being fully coopted by authoritarians and threatening to dictators. Since 2011, when Facebook and other social media sites were instrumental in the Arab Spring and subsequent mass movements, the site has been temporarily banned during periods of protest in countries such as Egypt, Iran, Pakistan, Sri Lanka, Sudan, Syria, Vietnam and Zimbabwe. If authoritarians fear the power of a website this much, that should make us pause to consider the social benefit it provides, and incentivise our appointed and elected officials to find solutions that do not throw out the baby with the bathwater.
Facebook was created to satisfy a student demand for campus-wide connection — a demand that was not being met by our university’s administration. After it launched in 2004, I waited a couple of weeks before signing up. At the time, I remembered Zuckerberg’s previous project, a “who’s hotter?” student photo comparison site called Facemash that had been shut down by the university for using students’ photos without their permission (and being kind of creepy). By the time I finally set up a Facebook account, I figured that it was not a scam or an illicit site, and if it was, at least plenty of people would be scammed alongside me.
Nearly 16 years later, both the appeal of Facebook and the concerns I had about it back then have been magnified to a global scale. And just as Harvard’s administration reined in the worst of undergraduate Mark Zuckerberg’s impulses, in turn setting the stage for him to launch his revolutionary social media site, it’s now up to our governments to responsibly monitor the ever-evolving landscape of Facebook and big tech.
They must do so in a way that eliminates the excesses of the tech giants but allows their social benefits — which range from confronting authoritarian regimes to simply connecting friends — to flourish. Let’s hope they’re up to the challenge.