Opinion Paper
Published: 05 June 2021

Facebook’s ethical failures are not accidental; they are part of the business model

David Lauer (ORCID: orcid.org/0000-0002-0003-4521)

AI and Ethics, volume 1, pages 395–403 (2021)


Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at their business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles”—social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility—Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence. Facebook’s problem is not a technology problem. It is a business model problem. This is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different.

Facebook’s failure to check political extremism, [ 15 ] willful disinformation, [ 39 ] and conspiracy theory [ 43 ] has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern [ 32 ] over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out [ 29 ] that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing.

In response to a frenzy of hostile tweets, LeCun made the following four claims:

Facebook does not cause polarization or so-called “filter bubbles,” and “most serious studies do not show this.”

Critics [ 30 ] who argue that Facebook is profiting from the spread of misinformation are “factually wrong.” Footnote 1

Facebook uses AI-based technology to filter out [ 33 ]:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

Facebook is not an “arbiter of political truth,” and having Facebook “arbitrate political truth would raise serious questions about anyone’s idea of ethics and liberal democracy.”

Absent from the claims above is acknowledgement that the company’s profitability depends substantially upon the polarization LeCun insists does not exist.

Facebook has had a profound impact on our access to ideas, information, and one another. It has unprecedented global reach, and in many markets serves as a de facto monopolist. The influence it has over individual and global affairs is unique in human history. Mr. LeCun has been at Facebook since December 2013, first as Director of AI Research and then as Chief AI Scientist. He has played a leading role in shaping Facebook’s technology and approach. Mr. LeCun’s problematic claims demand closer examination. What follows, therefore, is a response to these claims which will clearly demonstrate that Facebook:

Elevates disinformation campaigns and conspiracy theories from the extremist fringes into the mainstream, fostering, among other effects, the resurgent anti-vaccination movement, broad-based questioning of basic public health measures in response to COVID-19, and the proliferation of the Big Lie of 2020—that the presidential election was stolen through voter fraud [ 16 ];

Empowers bullies of every size, from cyber-bullying in schools, to dictators who use the platform to spread disinformation, censor their critics, perpetuate violence, and instigate genocide;

Defrauds both advertisers and newsrooms, systematically and globally, with falsified video engagement and user activity statistics;

Reflects an apparent political agenda espoused by a small core of corporate leaders, who actively impede or overrule the adoption of good governance;

Brandishes its monopolistic power to preserve a social media landscape absent meaningful regulatory oversight, privacy protections, safety measures, or corporate citizenship; and

Disrupts intellectual and civil discourse, at scale and by design.

1 I deleted my Facebook account

I deleted my account years ago for the reasons noted above, and a number of far more personal reasons. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization?

In anticipation of my response, he offered the claims highlighted above. As evidence of his claims, he directed my attention to a single research paper [ 23 ] that, on closer inspection, does not appear at all to reinforce his case.

The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blindspot regarding the harm that its platform causes—that they continue to “move fast and break things” without regard for the global impact of their behavior.

LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work [ 8 ] detecting suicidal tendencies, its advancements pale in comparison to the inherent problems written into its algorithms.

This is because, fundamentally, their problem is not a failure of technology, nor a shortcoming in their AI filters. Facebook’s problem is its business model. Facebook makes superficial technology changes, but at its core, profits chiefly from engagement and virality. Study after study has found that “lies spread faster than the truth,” [ 47 ] “conspiracy theories spread through a more decentralized network,” [ 41 ] and that “politically extreme sources tend to generate more interactions from users.” Footnote 2 Facebook knows that the most efficient way to maximize profitability is to build algorithms that create filter bubbles and spread viral misinformation.

This is not a fringe belief or controversial opinion. This is a reality acknowledged even by those who have lived inside of Facebook’s leadership structure. As the former director of monetization for Facebook, Tim Kendall, explained in his Congressional testimony, “social media services that I, and others have built, have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war.” [ 38 ]

2 Facebook’s black box

To effectively study behavior on Facebook, we must be able to study Facebook’s algorithms and AI models. Therein lies the first problem. The data and transparency to do so are simply not there. Facebook does not practice transparency—they do not make comprehensive data available on their recommendation and filtering algorithms, or their other implementations of AI. One organization attempting to study the spread of misinformation, NYU’s Cybersecurity for Democracy, explains, “[o]ur findings are limited by the lack of data provided by Facebook…. Without greater transparency and access to data, such research questions are out of reach.” Footnote 3

Facebook’s algorithms and AI models are proprietary, and they are intentionally hidden from us. While this is normal for many companies, no other company has 2.85 billion monthly active users. Any platform that touches so many lives must be studied so that we can truly understand its impact. Yet Facebook does not make the kind of data available that is needed for robust study of the platform.

Facebook would likely counter this, and point to their partnership with Harvard’s Institute for Quantitative Social Science (Social Science One) as evidence that they are making data available to researchers [ 19 ]. While this partnership is one step in the right direction, there are several problems with this model:

The data are extremely limited, consisting at the moment solely of web page addresses shared on Facebook over an 18-month period from 2017 to 2019.

Researchers have to apply for access to the data through Social Science One, which acts as a gatekeeper of the data.

If approved, researchers have to execute an agreement directly with Facebook.

This is not an open, scientific process. It is, rather, a process that empowers administrators to cherry-pick research projects that favor their perspective. If Facebook were serious about facilitating academic research, they would provide far greater access to, availability of, and insight into the data. There are legitimate privacy concerns around releasing data, but there are far better ways to address those concerns while fostering open, vibrant research.

3 Does Facebook cause polarization?

LeCun cited a single study as evidence that Facebook does not cause polarization. But do the findings of this study support Mr. LeCun’s claims?

The study concludes that “polarization has increased the most among the demographic groups least likely to use the Internet and social media.” The study does not, however, actually measure respondents’ Internet or social media use. Its primary data-gathering instrument—a survey on polarization—did not ask whether respondents were on the Internet or if they used social media. Instead, the study estimates whether an individual respondent is likely to be on the Internet based on an index of demographic factors which suggest “predicted” Internet use. As explained in the study, “the main predictor [they] focus on is age” [ 23 ]. Age is estimated to be negatively correlated with social media usage. Therefore, since older people are also shown to be more politically polarized, LeCun takes this as evidence that social media use does not cause polarization.

This assumption of causality is flawed. The study does not point to a causal relationship between these demographic factors and social media use. It simply says that these demographic factors drive polarization. Whether these factors have a correlational or causative relationship with the Internet and social media use is complete conjecture. The author of the study himself caveats any such conclusions, noting that “[t]hese findings do not rule out any effect of the internet or social media on political polarization.” [ 5 ].
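
To make the objection concrete, consider a minimal simulation. This is entirely my own illustration with assumed numbers, not the study's data or method: in a toy world where age alone drives polarization and also predicts lower social media use, the study's headline pattern emerges even though social media has zero causal effect.

```python
# Illustrative toy simulation (assumed numbers, not the study's data):
# age drives polarization directly, and age also predicts lower social
# media use. Social media use has ZERO causal effect here, yet polarization
# still "increases the most" in the low-predicted-use (older) group.
import random

random.seed(42)

def simulate_person():
    age = random.uniform(18, 85)
    # Older people are less likely to use social media (the proxy relationship).
    uses_social_media = random.random() < max(0.1, 1 - age / 100)
    # Polarization depends only on age, not on social media use.
    polarization = 0.01 * age + random.gauss(0, 0.1)
    return age, uses_social_media, polarization

people = [simulate_person() for _ in range(100_000)]
older = [p for age, _, p in people if age >= 65]   # low predicted use
younger = [p for age, _, p in people if age < 30]  # high predicted use

print(f"mean polarization, 65+: {sum(older) / len(older):.3f}")
print(f"mean polarization, <30: {sum(younger) / len(younger):.3f}")
# The older, low-predicted-use group is more polarized even though social
# media played no causal role: a demographic proxy cannot rule out an effect.
```

The point is not that this toy world is true; it is that the study's proxy-based design is equally consistent with it, which is exactly why the author's own caveat matters.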

Not only is LeCun’s assumption flawed, it is directly refuted by a recent Pew Research study [ 3 ] that found that fully half (50%) of US adults aged 65+ are on Facebook, the most of any social network. If anything, older age is actually more clearly correlated with Facebook use relative to other social networks.

Moreover, in 2020, MIS Quarterly published a study by Steven L. Johnson et al. that explored this problem and found that the “more time someone spends on Facebook, the more polarized their online news consumption becomes. This evidence suggests Facebook indeed serves as an echo chamber especially for its conservative users” [ 24 ].

Allcott et al. also explore this question in “The Welfare Effects of Social Media” (November 2019), beginning with a review of other studies confirming a relationship between social media use, well-being, and political polarization [ 1 ]:

More recent discussion has focused on an array of possible negative impacts. At the individual level, many have pointed to negative correlations between intensive social media use and both subjective well-being and mental health. Adverse outcomes such as suicide and depression appear to have risen sharply over the same period that the use of smartphones and social media has expanded. Alter (2018) and Newport (2019), along with other academics and prominent Silicon Valley executives in the “time well-spent” movement, argue that digital media devices and social media apps are harmful and addictive. At the broader social level, concern has focused particularly on a range of negative political externalities. Social media may create ideological “echo chambers” among like-minded friend groups, thereby increasing political polarization (Sunstein 2001, 2017; Settle 2018). Furthermore, social media are the primary channel through which misinformation spreads online (Allcott and Gentzkow 2017), and there is concern that coordinated disinformation campaigns can affect elections in the US and abroad.

Allcott’s 2019 study uses a randomized experiment in the run-up to the November 2018 midterm elections to examine how Facebook affects several individual and social welfare measures. They found that:

deactivating Facebook for the four weeks before the 2018 US midterm election (1) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (2) reduced both factual news knowledge and political polarization; (3) increased subjective well-being; and (4) caused a large persistent reduction in post-experiment Facebook use.

In other words, not using Facebook for a month made participants happier and resulted in less future usage. In fact, the authors say that “deactivation significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” None of these findings would come as a surprise to anybody who works at Facebook.

A former Facebook AI researcher confirmed that the company ran “study after study” confirming the same basic idea: models that maximize engagement increase polarization [ 21 ]. Not only did Facebook know this, but they continued to design and build their recommendation algorithms to maximize user engagement, knowing that this meant optimizing for extremism and polarization. Footnote 4

Facebook understood what they were building, according to Tim Kendall’s Congressional testimony in 2020. He explained that “we sought to mine as much attention as humanly possible and turn [sic] into historically unprecedented profits” [ 38 ]. He went on to explain that their inspiration was “Big Tobacco’s playbook … to make our offering addictive at the outset.” They quickly figured out that “extreme, incendiary content” directly translated into “unprecedented engagement—and profits.” He was the director of monetization for Facebook—few would have been better positioned to understand Facebook’s motivations, findings, and strategy.

4 Engagement, filter bubbles, and executive compensation

The term “filter bubble” was coined by Eli Pariser, who wrote a book with that title exploring how social media algorithms are designed to increase engagement and create echo chambers where inflammatory posts are more likely to go viral. Filter bubbles are not just an algorithmic outcome; often we filter our own lives, surrounding ourselves with friends (online and offline) who are more likely to agree with our philosophical, religious, and political views.

Social media platforms capitalize on our natural tendency toward filtered engagement. These platforms build algorithms, and structure executive compensation, [ 27 ] to maximize such engagement. By their very design, social media curation and recommendation algorithms are engineered to maximize engagement, and thus, are predisposed to create filter bubbles.
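
To see why an engagement objective tends toward filter bubbles, consider a deliberately simplified sketch. This is my own illustration, not Facebook's actual code or model; the `outrage` feature and the engagement function are assumptions standing in for a learned system. Given the cited findings that inflammatory content engages more on average, ranking purely by predicted engagement pushes the most inflammatory posts to the top.

```python
# Deliberately simplified feed ranker (illustrative only, not Facebook's
# system): rank posts purely by predicted engagement. If inflammatory
# content engages more on average, it dominates the top of the feed.
import random
from dataclasses import dataclass

random.seed(7)

@dataclass
class Post:
    post_id: int
    outrage: float  # hypothetical 0..1 "inflammatory-ness" feature

def predicted_engagement(post: Post) -> float:
    # Stand-in for a learned engagement model; assumes engagement rises
    # with outrage, per the studies cited above.
    return 0.2 + 0.6 * post.outrage + random.uniform(-0.05, 0.05)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective is engagement alone: no penalty for divisiveness,
    # no reward for accuracy or diversity of views.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([Post(i, random.random()) for i in range(20)])
for post in feed[:5]:
    print(f"post {post.post_id}: outrage={post.outrage:.2f}")
# The top of the feed skews sharply toward high-outrage posts: an emergent
# filter bubble of the most provocative content.
```

All of the work in this sketch is done by the objective function. Nothing in it filters for accuracy or diversity, which is precisely the paper's point about the business model: change the objective and the algorithmic outcomes change with it.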

Facebook has long attracted criticism for its pursuit of growth at all costs. A recent profile of Facebook’s AI efforts details the difficulty of getting “buy-in or financial support when the work did not directly improve Facebook’s growth” [ 21 ]. Andrew Bosworth, a Vice President at Facebook, said in a 2016 memo that nothing matters but growth, and that “all the work we do in growth is justified” regardless of whether “it costs someone a life by exposing someone to bullies” or if “somebody dies in a terrorist attack coordinated on our tools” [ 31 ].

Bosworth and Zuckerberg went on to claim [ 36 ] that the shocking memo was merely an attempt at being provocative. Certainly, it succeeded in this aim. But what else could they really say? It’s not a great look. And it looks even worse when you consider that Facebook’s top brass really do get paid more when these things happen. The above-referenced report is based on interviews with multiple former product managers at Facebook, and shows that their executive compensation system is largely based around their most important metric: user engagement. This creates a perverse incentive. And clearly, by their own admission, Facebook will not allow a few casualties to get in the way of their executive compensation.

5 Is it incidental or intentional?

Yaël Eisenstat, a former CIA analyst who specialized in counter-extremism, went on to work at Facebook out of concern that the social media platform was increasing radicalization and political polarization. She explained in a TED talk [ 13 ] that the current information ecosystem is manipulating its users, and that “social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases. Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices.” This emotional response results in more than just engagement—it results in addiction.

Eisenstat joined Facebook in 2018 and began to explore the issues which were most divisive on the social media platform. She began asking questions internally about what was causing this divisiveness. She found that “the largest social media companies are antithetical to the concept of reasoned discourse … Lies are more engaging online than truth, and salaciousness beats out wonky, fact-based reasoning in a world optimized for frictionless virality. As long as algorithms’ goals are to keep us engaged, they will continue to feed us the poison that plays to our worst instincts and human weaknesses.”

She equated Facebook’s algorithmic manipulation to the tactics that terrorist recruiters use on vulnerable youth. She offered Facebook a plan to combat political disinformation and voter suppression. She has claimed that the plan was rejected, and she left after just six months.

As noted earlier, LeCun flatly denies [ 34 ] that Facebook creates filter bubbles that drive polarization. In sharp contrast, Eisenstat explains that such an outcome is a feature of their algorithm, not a bug. The Wall St. Journal reported that in 2018, senior executives at Facebook were informed of the following conclusions during an internal presentation [ 22 ]:

“Our algorithms exploit the human brain’s attraction to divisiveness… [and] if left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”

The platform aggravates polarization and tribal behavior.

Some proposed algorithmic changes would “disproportionately affect[] conservative users and publishers.”

Looking at data for Germany, an internal report found “64% of all extremist group joins are due to our recommendation tools … Our recommendation systems grow the problem.”

These are Facebook’s own words, and arguably they read less like a warning than a description of the company’s engagement-driven growth strategy. They are reinforced by Tim Kendall’s testimony as discussed above.

“Most notably,” reported the WSJ, “the project forced Facebook to consider how it prioritized ‘user engagement’—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.” As noted in the section above, executive compensation was tied to “user engagement,” which meant product developers at Facebook were incentivized to design systems in this very way. Footnote 5

Mark Zuckerberg and Joel Kaplan reportedly [ 22 ] dismissed the conclusions from the 2018 presentation, calling efforts to bring greater civility to conversations on the social media platform “paternalistic.” Zuckerberg went on to say that he would “stand up against those who say that new types of communities forming on social media are dividing us.” Kaplan reportedly “killed efforts to build a classification system for hyperpolarized content.” Failing to address this has resulted in algorithms that, as Tim Kendall explained, “have brought out the worst in us. They have literally rewired our brains so that we are detached from reality and immersed in tribalism” [ 38 ].

Facebook would have us believe that it has made great strides in confronting these problems over just the last two years, as Mr. LeCun has claimed. But at present, the burden of proof is on Facebook to produce the full, raw data so that independent researchers can make a fair assessment of these claims.

6 The AI filter

According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

These are his words, so we will refer to them even while the actual definitions of hate speech, calls to violence, and other terms are potentially controversial and open to debate.

These claims are provably false. While “AI” (along with some very large manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.
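
Back-of-the-envelope arithmetic shows why. The figures below are assumed round numbers for illustration, not Facebook's actual volumes or filter performance; the point is only that even a highly effective filter leaks enormous absolute quantities at this scale.

```python
# Illustrative arithmetic with assumed round numbers (not Facebook's actual
# figures): even a 95%-effective filter leaks huge volumes at scale.
daily_posts = 1_000_000_000   # assume ~1 billion pieces of content per day
violating_rate = 0.001        # assume 0.1% of content violates policy
filter_recall = 0.95          # assume the AI catches 95% of violations

violations = daily_posts * violating_rate
missed = violations * (1 - filter_recall)
print(f"violating posts per day: {violations:,.0f}")
print(f"missed by the filter:    {missed:,.0f}")
# => 1,000,000 violations/day; 50,000 slip through every single day.
# "Some" filtering is not enough when the base volume is this large.
```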

Let’s examine the claims a little more closely.

6.1 Does Facebook actually filter out hate speech?

An investigation by the UK-based counter-extremism organization the Institute for Strategic Dialogue (ISD) found that Facebook’s algorithm “actively promotes” Holocaust denial content [ 20 ]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads” [ 17 ]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.” Footnote 6

Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [ 45 ]. As recently as June 2020 there were reports [ 7 ] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.

6.2 Does Facebook actually filter out calls to violence?

The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence.” [ 42 ] A report from the Center for Policy Alternatives in September 2014 detailed evidence of 20 hate groups in Sri Lanka, and informed Facebook. In March 2018, BuzzFeed reported that “16 out of the 20 groups were still on Facebook.” Footnote 7

When former President Trump tweeted, in response to Black Lives Matter protests, that when “the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [ 48 ] and prevented it from being retweeted.

Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly stated plans to engage in violent confrontation at the nation’s Capitol on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [ 11 ].

6.3 Does Facebook actually filter out cyberbullying?

According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. According to their most recent cyberbullying statistics, [ 10 ] 47% of young people have been bullied online, and the two most prevalent platforms are Instagram at 42% and Facebook at 37%.

In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [ 49 ].

6.4 Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”

To list the evidence contradicting this point would be exhausting. Below are just a few examples:

The Computational Propaganda Research Project found in their 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [ 6 ].

A Facebook whistleblower produced a 6600-word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe.” [ 44 ]

Facebook is ground zero for anti-vaccination and pandemic misinformation, with the 26-minute conspiracy theory film “Plandemic” going viral on Facebook in April 2020 and garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content removed the word “vaccine” from their names. In addition to undermining public health interests by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map out the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [ 25 ] that are entangled with anti-vaccination clusters. That report predicts that such anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.

According to the Knight Foundation and Gallup, [ 26 ] 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%), and Independents (75%) agree: Facebook was the primary source for election misinformation.

If those AI filters are in fact working, they are not working very well.

All of this said, Facebook’s reliance on “AI filters” misses a critical point, which is that you cannot have AI ethics without ethics [ 30 ]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. These problems are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony, demonstrate as much.

These are systemic issues, not technological ones. Yaël Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”

7 Facebook does not want to be the arbiter of truth

We should probably take comfort in Facebook’s claim that it does not wish to be the “arbiter of political truth.” After all, Facebook has a troubled history with the truth. Their ad-buying customers proved as much when Facebook was forced to pay $40 million to settle a lawsuit alleging that they had inflated, by up to 900 percent, the time they said users spent watching videos [ 4 ]. While Facebook would neither admit nor deny the truth of this allegation, they did admit to the error in a 2016 statement [ 14 ].
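
Reportedly, the inflation stemmed from a denominator choice: average watch time was computed only over views lasting at least three seconds, rather than over all views. The toy calculation below, with assumed numbers of my own, shows how strongly that kind of denominator restriction can inflate the metric.

```python
# Toy illustration of the reported metric error (assumed numbers): average
# watch time computed only over views >= 3 seconds vs. over all views.
views = [1, 1, 2, 2, 2, 5, 30, 60]  # watch times in seconds, mostly brief

honest_avg = sum(views) / len(views)
long_views = [v for v in views if v >= 3]
inflated_avg = sum(long_views) / len(long_views)

print(f"average over all views:      {honest_avg:.1f}s")
print(f"average over 3s+ views only: {inflated_avg:.1f}s")
print(f"ratio (inflated/honest):     {inflated_avg / honest_avg:.1f}x")
# 12.9s vs 31.7s here: the more brief views a video gets, the more the
# restricted denominator overstates how long users actually watched.
```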

This was not some innocuous lie that just cost a few firms some money either. As Slate explained in a 2018 article, “many [publications] laid off writers and editors and cut back on text stories to focus on producing short, snappy videos for people to watch in their Facebook feeds.” [ 40 ] People lost their livelihoods to this deception.

Is this an isolated incident? Or is fraud at Facebook systemic? Matt Stoller describes the contents of recently unsealed legal documents [ 12 ] in a lawsuit alleging Facebook has defrauded advertisers for years [ 46 ]:

The documents revealed that Facebook COO Sheryl Sandberg directly oversaw the alleged fraud for years. The scheme was simple. Facebook deceived advertisers by pretending that fake accounts represented real people, because ad buyers choose to spend on ad campaigns based on where they think their customers are. Former employees noted that the corporation did not care about the accuracy of numbers as long as the ad money was coming in. Facebook, they said, “did not give a shit.” The inflated statistics sometimes led to outlandish results. For instance, Facebook told advertisers that its services had a potential reach of 100 million 18–34-year-olds in the United States, even though there are only 76 million people in that demographic. After employees proposed a fix to make the numbers honest, the corporation rejected the idea, noting that the “revenue impact” for Facebook would be “significant.” One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” According to these documents, Sandberg aggressively managed public communications over how to talk to advertisers about the inflated statistics, and Facebook is now fighting against her being interviewed by lawyers in a class action lawsuit alleging fraud.

Facebook’s embrace of deception extends from its ad-buying fraud to the content on its platforms. For instance:

Those who would “aid[] and abet[] the spread of climate misinformation” on Facebook benefit from “a giant loophole in its fact-checking program.” Evidently, Facebook gives its staff the power to overrule climate scientists by deeming climate disinformation “opinion.” [ 2 ].

The former managing editor of Snopes reported that Facebook was merely using the well-regarded fact-checking site for “crisis PR,” that they did not take fact checking seriously and would ignore concerns [ 35 ]. Snopes tried hard to push against the Myanmar disinformation campaign, amongst many other issues, but its concerns were ignored.

ProPublica recently reported [ 18 ] that Sheryl Sandberg silenced and censored a Kurdish militia group that “the Turkish government had targeted” in order to safeguard Facebook’s revenue from Turkey.

Mark Zuckerberg and Joel Kaplan intervened [ 37 ] in April 2019 to keep Alex Jones on the platform, despite the right-wing conspiracy theorist’s lead role in spreading disinformation about the 2012 Sandy Hook elementary school shooting and the 2018 Parkland high school shooting.

Arguably, Facebook’s executive team has not only ceded responsibility as an “arbiter of truth,” but has also on several notable occasions, intervened to ensure the continued proliferation of disinformation.

8 How do we disengage?

Facebook’s business model is focused entirely on increasing growth and user engagement. Its algorithms are extremely effective at doing so. The steps Facebook has taken, such as building “AI filters” or partnering with independent fact checkers, are superficial and toothless. They cannot begin to untangle the systemic issues at the heart of this matter, because these issues are Facebook’s entire reason for being.

So what can be done? Certainly, criminality needs to be prosecuted. Executives should go to jail for fraud. Social media companies, and their organizational leaders, should face legal liability for the impact made by the content on their platforms. One effort to impose legal liability in the US is centered on reforming Section 230 of the Communications Decency Act. It, and similar laws around the world, should be reformed to create far more meaningful accountability and liability for the promotion of disinformation, violence, and extremism.

Most importantly, monopolies should be busted. Existing antitrust laws should be used to break up Facebook and restrict its future activities and acquisitions.

The matters outlined here have been brought to the attention of Facebook’s leadership in countless ways that are well documented and readily provable. But the changes required go well beyond effective leveraging of AI. At its heart, Facebook will not change because they do not want to, and are not incentivized to. Facebook must be regulated, and Facebook’s leadership structure must be dismantled.

It seems unlikely that politicians and regulators have the political will to do all of this, but there are some encouraging signs, especially regarding antitrust investigations [ 9 ] and lawsuits [ 28 ] in both the US and Europe. Still, this issue goes well beyond mere enforcement. Somehow we must shift the incentives for social media companies, who compete for, and monetize, our attention. Until we stop rewarding Facebook’s illicit behavior with engagement, it’s hard to see a way out of our current condition. These companies are building technology that is designed to draw us in with problematic content, addict us to outrage, and ultimately drive us apart. We no longer agree on shared facts or truths, a condition that is turning political adversaries into bitter enemies, that is transforming ideological difference into seething contempt. Rather than help us lead more fulfilling lives or find truth, Facebook is helping us to discover enemies among our fellow citizens, and bombarding us with reasons to hate them, all to the end of profitability. This path is unsustainable.

The only thing Facebook truly understands is money, and all of their money comes from engagement. If we disengage, they lose money. If we delete, they lose power. If we decline to be a part of their ecosystem, perhaps we can collectively return to a shared reality.

Footnotes

Facebook executives have, themselves, acknowledged that Facebook profits from the spread of misinformation: https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news

Cybersecurity for Democracy. (March 3, 2021). “Far-right news sources on Facebook more engaging.” https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90

Facebook claims to have since broadened the metrics it uses to calculate executive pay, but to what extent this might offset the prime directive of maximizing user engagement is unclear.

References

Allcott, H., et al.: “The Welfare Effects of Social Media.” (2019). https://web.stanford.edu/~gentzkow/research/facebook.pdf

Atkin, E.: Facebook creates fact-checking exemption for climate deniers. Heated . (2020). https://heated.world/p/facebook-creates-fact-checking-exemption

Auxier, B., Anderson, M.: Social Media Use in 2021. Pew Research Center. (2021). https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf

Baron, E.: Facebook agrees to pay $40 million over inflated video-viewing times but denies doing anything wrong. The Mercury News . (2019). https://www.mercurynews.com/2019/10/07/facebook-agrees-to-pay-40-million-over-inflated-video-viewing-times-but-denies-doing-anything-wrong/

Boxell, L.: “The internet, social media, and political polarisation.” (2017). https://voxeu.org/article/internet-social-media-and-political-polarisation

Bradshaw, S., Howard, P.N.: The Global Disinformation Disorder: 2019 Global Inventory of Organised Social Media Manipulation. Working Paper 2019.2. Oxford: Project on Computational Propaganda. (2019)

Cabato, R.: Death threats, clone accounts: Another day fighting trolls in the Philippines. The Washington Post . (2020). https://www.washingtonpost.com/world/asia_pacific/facebook-trolls-philippines-death-threats-clone-accounts-duterte-terror-bill/2020/06/08/3114988a-a966-11ea-a43b-be9f6494a87d_story.html

Card, C.: “How Facebook AI Helps Suicide Prevention.” Facebook. (2018). https://about.fb.com/news/2018/09/inside-feed-suicide-prevention-and-ai/

Chee, F.Y.: “Facebook in EU antitrust crosshairs over data collection.” Reuters. (2019). https://www.reuters.com/article/us-eu-facebook-antitrust-idUSKBN1Y625J

Cyberbullying Statistics. Enough Is Enough. https://enough.org/stats_cyberbullying

Dwoskin, E.: Facebook’s Sandberg deflected blame for Capitol riot, but new evidence shows how platform played role. The Washington Post . (2021). https://www.washingtonpost.com/technology/2021/01/13/facebook-role-in-capitol-protest

DZ Reserve and Cain Maxwell v. Facebook, Inc. (2020). https://www.economicliberties.us/wp-content/uploads/2021/02/2021.02.17-Unredacted-Opp-to-Mtn-to-Dismiss.pdf

Eisenstat, Y.: Dear Facebook, this is how you’re breaking democracy [Video]. TED . (2020). https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-385134

Fischer, D.: Facebook Video Metrics Update. Facebook . (2016). https://www.facebook.com/business/news/facebook-video-metrics-update

Fisher, M., Taub, A.: “How Everyday Social Media Users Become Real-World Extremists.” New York Times . (2018). https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html

Frenkel, S.: “How Misinformation ‘Superspreaders’ Seed False Election Theories”. New York Times . (2020). https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html

Gallagher, A.: Profit and Protest: How Facebook is struggling to enforce limits on ads spreading hate, lies and scams about the Black Lives Matter protests . The Institute for Strategic Dialogue (2020)

Gillum, J., Ellion, J.: Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business. ProPublica . (2021). https://www.propublica.org/article/sheryl-sandberg-and-top-facebook-execs-silenced-an-enemy-of-turkey-to-prevent-a-hit-to-their-business

Gonzalez, R.: “Facebook Opens Its Private Servers to Scientists Studying Fake News.” Wired . (2018). https://www.wired.com/story/social-science-one-facebook-fake-news/

Guhl, J., Davey, J.: Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media . The Institute for Strategic Dialogue (2020).

Hao, K.: “How Facebook got addicted to spreading misinformation”. MIT Technology Review . (2021). https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

Horwitz, J., Seetharaman, D.: “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” Wall St Journal (2020)

Boxell, L., Gentzkow, M., Shapiro, J.M.: Internet use and political polarization. Proc. Natl. Acad. Sci. 114 (40), 10612–10617 (2017). https://doi.org/10.1073/pnas.1706588114

Johnson, S.L., et al.: Understanding echo chambers and filter bubbles: the impact of social media on diversification and partisan shifts in news consumption. MIS Q. (2020). https://doi.org/10.25300/MISQ/2020/16371


Johnson, N.F., Velásquez, N., Restrepo, N.J., et al.: The online competition between pro- and anti-vaccination views. Nature 582 , 230–233 (2020). https://doi.org/10.1038/s41586-020-2281-1

Jones, J.: In Election 2020, How Did The Media, Electoral Process Fare? Republicans, Democrats Disagree. Knight Foundation . (2020). https://knightfoundation.org/articles/in-election-2020-how-did-the-media-electoral-process-fare-republicans-democrats-disagree

Kantrowitz, A.: “Facebook Is Still Prioritizing Scale Over Safety.” Buzzfeed.News . (2019). https://www.buzzfeednews.com/article/alexkantrowitz/after-years-of-scandal-facebooks-unhealthy-obsession-with

Kendall, B., McKinnon, J.D.: “Facebook Hit With Antitrust Lawsuits by FTC, State Attorneys General.” Wall St. Journal. (2020). https://www.wsj.com/articles/facebook-hit-with-antitrust-lawsuit-by-federal-trade-commission-state-attorneys-general-11607543139

Lauer, D.: [@dlauer]. And yet people believe them because of misinformation that is spread and monetized on facebook [Tweet]. Twitter. (2021). https://twitter.com/dlauer/status/1363923475040251905

Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1 , 21–25 (2021). https://doi.org/10.1007/s43681-020-00013-4

Lavi, M.: Do Platforms Kill? Harvard J. Law Public Policy. 43 (2), 477 (2020). https://www.harvard-jlpp.com/wp-content/uploads/sites/21/2020/03/Lavi-FINAL.pdf

LeCun, Y.: [@ylecun]. Does anyone still believe whatever these people are saying? No one should. Believing them kills [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363923178519732230

LeCun, Y.: [@ylecun]. The section about FB in your article is factually wrong. For starter, AI is used to filter things like hate speech, calls to violence, bullying, child exploitation, etc. Second, disinformation that endangers public safety or the integrity of the democratic process is filtered out [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1364010548828987393

LeCun, Y.: [@ylecun]. As attractive as it may seem, this explanation is false. [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363985013147115528

Levin, S.: ‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties. The Guardian . (2018). https://www.theguardian.com/technology/2018/dec/13/they-dont-care-facebook-fact-checking-in-disarray-as-journalists-push-to-cut-ties

Mac, R.: “Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo—And Warned That Facebook Could Get People Killed.” Buzzfeed.News . (2018). https://www.buzzfeednews.com/article/ryanmac/growth-at-any-cost-top-facebook-executive-defended-data

Mac, R., Silverman, C.: “Mark Changed The Rules”: How Facebook Went Easy On Alex Jones And Other Right-Wing Figures. BuzzFeed.News . (2021). https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan-facebook-alex-jones

Mainstreaming Extremism: Social Media’s Role in Radicalizing America: Hearings before the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce, 116th Cong. (2020) (testimony of Tim Kendall)

Meade, A.: “Facebook greatest source of Covid-19 disinformation, journalists say”. The Guardian . (2020). https://www.theguardian.com/technology/2020/oct/14/facebook-greatest-source-of-covid-19-disinformation-journalists-say

Oremus, W.: The Big Lie Behind the “Pivot to Video”. Slate . (2018). https://slate.com/technology/2018/10/facebook-online-video-pivot-metrics-false.html

Wood, M.J.: Propagating and debunking conspiracy theories on Twitter during the 2015–2016 Zika virus outbreak. Cyberpsychology, Behavior, and Social Networking. 21 (8), (2018). https://doi.org/10.1089/cyber.2017.0669

Rajagopalan, M., Nazim, A.: “We Had To Stop Facebook”: When Anti-Muslim Violence Goes Viral. BuzzFeed.News . (2018). https://www.buzzfeednews.com/article/meghara/we-had-to-stop-facebook-when-anti-muslim-violence-goes-viral

Rosalsky, G.: “Are Conspiracy Theories Good For Facebook?”. Planet Money . (2020). https://www.npr.org/sections/money/2020/08/04/898596655/are-conspiracy-theories-good-for-facebook

Silverman, C., Mac, R.: “I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation. BuzzFeed.News . (2020). https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

Stecklow, S.: Why Facebook is losing the way on hate speech in Myanmar. Reuters . (2018). https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

Stoller, M.: Facebook: What is the Australian law? And why does FB keep getting caught for fraud? Substack. (2021). https://mattstoller.substack.com/p/facecrook-dealing-with-a-global-menace

Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science. 359 (6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559

The White House 45 Archived [@WhiteHouse45]: “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!” [Tweet]. Twitter. (2020) https://twitter.com/WhiteHouse45/status/1266342941649506304

UNICEF: UNICEF poll: More than a third of young people in 30 countries report being a victim of online bullying. (2019). https://www.unicef.org/press-releases/unicef-poll-more-third-young-people-30-countries-report-being-victim-online-bullying


Author information

Authors and Affiliations

Urvin AI, 413 Virginia Ave, Collingswood, NJ, 08107, USA

David Lauer


Corresponding author

Correspondence to David Lauer.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Lauer, D. Facebook’s ethical failures are not accidental; they are part of the business model. AI Ethics 1 , 395–403 (2021). https://doi.org/10.1007/s43681-021-00068-x


Received : 13 April 2021

Accepted : 29 May 2021

Published : 05 June 2021

Issue Date : November 2021

DOI : https://doi.org/10.1007/s43681-021-00068-x



Is It Ethical To Work At Facebook?

The trove of internal research findings and other documents brought forward by Facebook whistleblower Frances Haugen brought further definition to a series of harms caused by the platform that many in civil society and academia have described for years. The documents bring a persistent pattern into sharp relief: again and again, Facebook’s (now Meta’s) leadership prioritizes commercial interests over addressing harms to people.

These decisions appear to weigh heavily on employees of the company, whose posts on internal message boards reveal anguish. In her article on the Facebook Papers, titled “History Will Not Judge Us Kindly,” The Atlantic’s Executive Editor Adrienne LaFrance summarized the situation:

Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more…. And again and again, staffers say, Facebook’s leaders ignore them.

The disconnect seems especially hard for employees who joined the company to help it deliver on its stated mission to “give people the power to build community and bring the world closer together.” I was struck, in particular, by one quote from an employee on Facebook’s internal message board, Workplace, after the January 6 insurrection at the U.S. Capitol that was reported in The Washington Post. “I’m struggling to match my values with my employment here,” the employee said. “I came here hoping to affect change and improve society, but all I’ve seen is atrophy and abdication of responsibility.”

Indeed, such abdication has contributed to some pretty grisly outcomes, from teens experiencing suicidal thoughts (and presumably some unknown percentage acting upon them) to the facilitation of human trafficking and ethnic genocide. Some former employees have acknowledged the rising body count. When Sophie Zhang, another whistleblower, left the company because it failed to take action on use of the platform for political manipulation, she confessed in a departure post, “I know that I have blood on my hands now.”

Given the evidence, is it possible to go on drawing a paycheck from the company in good conscience? How should employees reckon with the ethical dilemma of earning a living at a company that prizes profits over the well-being of the billions of people that use its services, or, even if they don’t have an account on Facebook or Instagram or WhatsApp, are nevertheless impacted by the externalities these products produce?

Over the past couple of weeks, I’ve put that question to a variety of individuals who each bring different perspectives to it. What emerged is not an easy yes or no answer, but a set of considerations that may be useful to someone trying to answer the question for themselves.

One important caveat: nearly everyone I talked with suggested the answer to this question may be different for different people, who must each bring their own life circumstances and personal considerations to it. In recent conversations, both Dia Kayyali, a human rights and tech activist and director of advocacy at Mnemonic, and Ifeoma Ozoma, founder of Earthseed and the creator of the Tech Worker Handbook, reminded me of the axiom “there is no ethical consumption under capitalism.” Indeed, more than one of the individuals I spoke with acknowledged that most jobs eventually present ethical challenges to varying degrees. But the situation at Facebook is particularly acute given the scale of the company’s impact, and is therefore worthy of consideration.

  • You can’t trust the company’s leadership.

The ethical problems at Facebook are uniquely tied to the way the company is led. Don Heider, the Executive Director of the Markkula Center for Applied Ethics at Santa Clara University, told me that the way the company is governed is simply unhealthy. “What's frustrated people, especially about Facebook, is that because it has a CEO who is also the chair of the board, he ultimately is responsible to himself and no one else.”

For a company to be well governed, especially one that operates at the scale and degree of societal impact of Facebook, the board must hold the CEO to account and expect the company to make ethical decisions. A board must comprise individuals “that really care about the company and also care about humanity-- who will make decisions for profit, but also be concerned with good, be concerned with human beings,” said Heider. “And so if you have that, then you can have some kind of balance in there.”

Zuckerberg is an acute part of the problem, said Justin Sherman, cofounder of Ethical Tech at Duke University. But Sherman also points to “other actors at Facebook who are also very much behind Zuckerberg and perpetuating a lot of harm.” Sherman includes in that group people such as the company’s Chief Operating Officer, Sheryl Sandberg, and even lower-ranking executives such as Policy Communications Director Andy Stone, who Sherman points out “likes to mock and bully researchers and other people on Twitter.”

Hany Farid, a professor at the University of California, Berkeley and Associate Dean of its School of Information, hit a similar note. “When day in and day out, month in and month out, year in and year out, your CEO and your CTO and your CSO and your full C-suite continues to behave in exactly the same way-- you can keep telling yourself the story that, ‘hey, look, I can affect change from the inside.’ But the reality is that doesn't seem to be the case.”

Don Heider senses a particular problem at play in Facebook’s leadership. “When I see chronically bad decisions, what I see is decisions that are made on fear rather than confidence,” he said. “Facebook to me always feels like a company run out of fear-- the fear that at any given moment, it could all blow up. And here's the sad truth: it can, especially if you're not paying attention to how you're harming people.”

  • The company’s business model is the fundamental problem.

Hany Farid suspects that even a much more competent and well-governed leadership may still be left with a business model that is fundamentally built on exploitation.

“I think at its core, the problem is really the business model of Facebook-- and social media in general-- just stinks,” Farid said. The company “is in the attention grabbing, ad delivery, privacy invading, outrage fueling business, and the way you attract people to your platforms is to get them angry, to get conspiratorial, salacious, hateful, outrageous content. And that's what drives business. That is the model that we have learned from social media works.”

Jared Harris, Associate Professor of Business Administration at the Darden School of Business at the University of Virginia and an expert on business ethics and corporate governance, allows that not all the facts are in on social media and its effects on society. But, Harris said that “if the fact pattern parallels what we saw play out in Big Tobacco a decade or two ago, then the only thing that's really different is the harms themselves.”

So, rather than lung cancer or emphysema, think of psychological or social harms. The Facebook whistleblower leaks fit that pattern-- we now know the company conducted internal research on such harms, but kept the results hidden from the public. “And so from an ethics perspective," said Harris, "all of the things that are problematic about suppressing the way in which cigarettes were harmful to smokers could potentially apply here. We can think of other examples as well-- there's a lot of stuff coming out recently about how much oil companies knew about climate change and again, suppressed the science internally. If indeed the harms are verifiable and the internal evidence was suppressed, we've seen similar examples play out before.”

Harris sees a disconnect between how the company makes money and the overall value proposition to its users. “What seems to have happened is Facebook seems to have gotten away from thinking carefully about the end user experience. Not just the way in which the end user experience could maximize advertising dollars, but the entirety of the end user experience-- what are the pros and cons, are there limits to how much we want the technology to act and behave. I think that's the headline here,” he said. “The conclusion to me is stakeholders who matter need a voice, and it looks to me like, you know, 11-year-old girls who use Instagram don't have much of a voice.”

  • You can make an important difference, but the sum of the good will not redeem the whole.

Even if you acknowledge the problems in the company’s leadership and with its core business model, it may still be possible to do good work that impacts many people’s lives, especially given the magnitude of Facebook’s user base. While encouraging “a very realistic assessment” of the ethical challenges in working at or with Facebook, Dia Kayyali told me, “I have no doubt that there are people inside of Facebook who have probably saved lives,” and that there “are probably engineers who pushed for things that made a very, very real impact in people's lives.” Kayyali suggests a lot of this work falls under the category of harm reduction.

Johnny Mathias, Deputy Senior Campaign Director for Media, Democracy and Economic Justice at the advocacy organization Color of Change, said the world needs people willing to make such ethical judgments working at these firms. “I think the thrust of your question is, when people are doing work and they seem to be ignored, is it hopeless? It is probably a really challenging position to be in right now. We need to change the both external and internal environment in which people work. We need well-intentioned folks to be able to feel comfortable at technology companies, because I don't want technology designed only by the people who have no ethical questions about working on technology. That's how we get things that aren't designed for the needs of Black communities. If folks who design terrible, harmful tech are the only ones who are willing to participate in industry, that doesn't create the tech future that I think anyone wants.” He pointed to Color of Change’s “ Beyond the Statement ” tech platform, which encourages tech companies to focus on hiring people with civil rights expertise, for instance.

Some jobs are more ethically freighted, of course. "If you work at Facebook, some jobs do more harm than others,” said Justin Sherman. “Obviously people taking down non-consensual pornography are doing a good thing. There are people who work on the executive and lobbying teams who basically just lie to the media and bully critics all day, and that's not good.”

Yaël Eisenstat, who served as Facebook’s Global Head of Elections Integrity Operations for political ads in 2018 and is today a Future of Democracy Fellow at Berggruen Institute and a member of the Tech Policy Press masthead, said she frequently hears from people weighing whether to work for the company.

“My general advice to anyone weighing the pros and cons of working at Facebook, especially after everything that the public now knows, is this: If you believe you can make a difference within the constraints of a company that will not change its business model, growth mentality, and desire to dominate the landscape, then by all means, you should do the work," she said. "I would never discourage someone from trying to help protect the public from online abuse or harms.”

The key, Eisenstat said, is recognizing the bounds of what is possible. “If you are hoping to fundamentally change or affect how the leadership makes business and political decisions, then I would advise against it. I never went to Facebook thinking I would single-handedly change how the company operates. But now, more than ever, it would be a non-starter for anyone who still believes that internal employees will persuade Mark Zuckerberg, Sheryl Sandberg, and the rest of the executive team to fundamentally change their priorities.”

  • The tech sector is an ethical minefield generally.

Gone are the halcyon days when the tech sector represented hope and opportunity without all the ethical baggage. “I have students who are at Google and Apple and Amazon and YouTube,” said Hany Farid. “All of them are looking around the tech sector like, what happened? I came here five years ago and we were the golden city on the hill, right? We were not Wall Street. We intentionally didn't go to New York and work at Wall Street because we want to be the good guys. And it turns out we're sort of the bad guys-- and for these young kids, it’s brutal.”

Ifeoma Ozoma said it’s hard to judge these companies differently on the merits. “I'm not going to tell anyone to leave their job because okay, you leave Facebook and you go to Alphabet-- are things any different there?”

Indeed, Don Heider sees similar systemic problems in other tech firms, such as Amazon. “Amazon to me is the closest cousin to Facebook in terms of current issues they face. They're both companies that have an interesting, good idea at the core, and they've been wildly successful beyond their or anyone's dreams. But with that huge growth and the huge profits and the huge success, there comes a moment of reckoning where you realize something is going wrong. So for Amazon, it's how they treat employees-- whether workers can take a few days off and not get fired, whether they can take a pregnancy leave.” Heider believes the only recourse is for these companies to step back and consider what sustainability will look like over a much longer time horizon-- decades, not years, and certainly not quarters.

  • You’ll have to be comfortable with the judgment of your family and friends.

Even if other tech firms face similar ethical issues, “people who work at Facebook are going to be facing a more awkward Thanksgiving than perhaps those who work at Google or Amazon,” said Color of Change’s Johnny Mathias. There has been substantial brand damage to Facebook, which will likely follow it even as it rebrands to Meta. Last week, a CNN poll found that “roughly three-quarters of adults believe Facebook is making American society worse … with about half saying they know somebody who was persuaded to believe in a conspiracy theory because of the site's content.”

“I remember back in the day when I was still at Dartmouth College,” said Hany Farid. “If one of my students got an internship or a job at Facebook, they would have that tattooed on their face. I mean, that was like grabbing the brass ring. That was really the proudest moment. I will tell you today that I have many, many students-- both current and former-- who tell me, ‘I'm embarrassed to tell people that I work at Facebook.’”

The evidence of Facebook’s harms, he said, is now better known to the public-- not just to academics and activists. “Can you work at a company whose products have led to a body count that is not insignificant? Think Myanmar, think Ethiopia, Sri Lanka, the Philippines, India, Brazil.” Farid also points to the worldwide death toll from COVID-19 misinformation. “At least a fraction of those are as a result of Facebook and WhatsApp and Instagram. I don't know how you work there. I really don't.”

  • Your day of reckoning will come-- be ready to weigh your options.

Don Heider said for every person working at a company experiencing a crisis, “there is a day of reckoning. That comes when they feel like they can no longer work towards good and have an effect, whether they think the needle is moving in some way.” For many, the signal will come much as it did for Haugen or for Zhang-- when an initiative to change things or address harms dies when it reaches management. “I have great admiration for the people that are in there fighting the fight, trying to do good things. I really do,” said Heider. “But at the end of the day, if every initiative that started to change anything for the good dies in the C-suite, then you get a very clear message, no matter what's coming out of the CEO's mouth. You see the reality.”

Jared Harris said if a former student working in a difficult situation and contemplating blowing the whistle approached him with the question about what to do, he’d try to help them assess their own objectives. Some might want to take action, or perhaps become whistleblowers themselves. “Whistleblowers face a lot of tough repercussions, and I'm not sure I would expect a student or former student to have thought through, are they ready to take on the personal costs involved with that? And then that might raise the question of, is there a third way? Is there a way you can be influential inside the organization to accomplish things that you think would increase the likelihood of good outcomes? I mean, these are the things I might ask them to consider, but it's hard to advise someone what to do-- but I think we all face some version of that in our own lives.”

Ifeoma Ozoma said workers can make change inside companies, if they are willing to organize. “Where folks have more power than I think they recognize is a team like the site reliability engineers, who have actual control over the way that the platform functions and whether it functions or not,” she said. “And that's where I think if there were more collective action-- if a team like that, or even a large proportion of the team decided to go on strike-- that would be the type of thing that would get the attention of company leaders. If the sales team, or a large vertical within the sales team decided to go on strike, that would be the type of thing that I think would change things in a way that we haven't seen from all of the hearings, from all of the lawsuits and whatever else that's taking place externally.” The resource site she developed, the Tech Worker Handbook, offers practical advice for employees considering their options.

  • Ultimately, government must act to relieve the ethical burden.

It is past time for Congress and regulators to take action in the public interest. Just as guardrails such as safety regulations or requirements to mitigate pollution help to relieve the ethical burden for any number of industries that produce externalities, clear rules could make things better for Facebook.

“We need regulatory solutions,” said Johnny Mathias. “We need action to actually empower folks and to make it more of a tenable place to work. We need whistleblower protections. We need a regulatory environment that supports the decisions that are being made there.”

People who want to work at ethical tech firms and use ethical tech products should consider what they can do to advocate for new laws, and to educate others about these issues.

“I think that if people were pushing their members of Congress as hard as Facebook's 15 lobbying teams both internally and externally were, then maybe we would have more progress" from lawmakers, said Ifeoma Ozoma. "Those folks are all being funded by Facebook, even though they're sitting up there on either side of the aisle, pretending to actually care about regulating the company. Those folks all have constituents who could also be handling them, but their constituents aren't.”

Many of the people who must weigh whether to stay at Facebook in the post-whistleblower age are highly skilled, and have many options. “If you have the freedom to go somewhere else or do something else, then you should do that,” said Ifeoma Ozoma. But the key thing, no matter what, is to be honest with yourself and those around you. “If you're honest about what's going on, then that's about all that we can ask of rank and file workers anywhere,” she said.

As for Facebook’s leadership? “Every company faces these moments of existential threat and crisis,” said Don Heider. “What, to me, a good CEO does-- a good management, a good board-- they step back and they say, ‘Oh my god, we really have to take a moment of reckoning and think about what we're doing, and can we continue it, and what course corrections can we take now at this crucial moment to get us back on course.’”

That prescription is quite a contrast to the company’s reaction to the past two months of revelations. In his remarks during Facebook’s third quarter earnings call, Mark Zuckerberg called the media scrutiny “a coordinated effort to selectively use leaked documents to paint a false picture of our company.” In his mind, he remains-- like many an emperor before him-- above reproach.


Is Facebook unethical by design? A great case study on digital ethics, power, responsibility and regulation (via Monday Note)

(This is a guest post by The Futures Agency curator Peter Van)

In Facebook’s Provocations of the Week, Frederic Filloux presents a wide-ranging discussion of the business practices of the social media behemoth. The post includes an interesting video about the need to break up the big four (GAFA). In a related conversation with Scott Galloway, Arete Research’s founder Richard Kramer offers a compelling analogy between Google and JPMorgan (at 3:24):

“Imagine if JPMorgan owned the New York Stock Exchange, was the sole market-maker on its own equity, the exclusive broker for every other equity in the market, ran the entire settlement and clearing system in the market, and basically wouldn’t let anyone see who had bought shares and which share or certificate or number they bought… That is Google’s business model.”

This may also be Facebook’s strategy, especially after announcing its intention to merge Messenger, Instagram, and WhatsApp. Just one week ago, at the World Economic Forum in Davos, Facebook’s chief operating officer Sheryl Sandberg delivered the… oh… 87th iteration of the company’s “we are sorry” tune by admitting: “We did not anticipate all of the risks from connecting so many people. We need to earn back trust.”

Just a couple of days later, it was revealed that Facebook paid teens (and older people) to be allowed to harvest data from their iPhones. Oh yes… but it was just a research app, and people consented to the use of their data; what could possibly go wrong? Apple thought differently, and pulled the app from its App Store. Tellingly, Facebook is not alone: the same happened two days later with a Google app. The only positive signal we spotted last week was the appointment of three privacy experts who were previously strong critics of Facebook. Bruce Schneier – the world-respected security and privacy expert – welcomed the appointments, saying: “I know these people. They’re ethical, and they’re on the right side. I hope they continue to do their good work from inside Facebook.” Despite everything, FB’s performance – from a profit and growth perspective – was once again strong, leading to a 10% jump in share price: after all, 2.3 billion people can’t be wrong. Or can they?

This week, Facebook celebrated its 15th anniversary: the occasion for many journalists to sharpen their critiques of a company that has only one metric: user growth. The Guardian labeled the anniversary “The Death of the Private Self”. Mark Zuckerberg largely ignored it all, and steered the focus away from Facebook to “the internet”, in an attempt to fool us into conflating the two (I hear this is a common problem in many developing countries).

Facebook has collected more users than there are followers of Christianity. More than 2.3 billion people use the service every single month. In some parts of the world, Facebook has become synonymous with the internet…

But Germany crashed FB's birthday party: the German watchdog concluded a probe into the social network, ordering Facebook to gather and combine less data. At the same time, Angela Merkel left Facebook: well, it's more complicated than that. This is about the fan page of Germany's chancellor, not her personal account, which still exists. Merkel ignored calls from her party's leadership to do something smart with that legacy, and decided simply to end her chancellor's fan page. Merkel's step raises fundamental questions:

Can a prominent politician single-handedly choose to leave Facebook? Doesn't her digital legacy need to be archived along with her files, letters and memos? And doesn't posterity have the right to know how Angela Merkel communicates in a sphere which she herself has described as “Neuland,” a new frontier?

(On that note, Gerd left Facebook in March 2018 🙂)

As long as money is everything, we’ll see more of what some people call ethics dumping, and a lot more talk than walk when it comes to operating a company in the interest of humans and society at large. Facebook is clearly a company that needs to be regulated, sooner rather than later. But it will require more than a 40-expert board.


RELATED: Gerd’s proposal of a Digital Ethics Council with international authority is also percolating (more on that soon).


Facebook takes its ethics into the metaverse - and critics are worried

Facebook is under fire (again) for its questionable business ethics – what exactly did they do wrong and what will happen as a result?


Facebook has faced criticism for not acting on its research which found that Instagram increases poor self-image and mental health in teenage girls. Picture: Shutterstock

In September 2021, The Wall Street Journal published a series of damning articles on Facebook. Based on internal documents, the series highlighted several ethically questionable practices within the technology company.

Later revealed to have been leaked by whistleblower Frances Haugen, a product manager in Facebook’s civic integrity team, the documents included revelations that Facebook’s own research showed Instagram (which it owns) exacerbates poor self-image and mental health in teenage girls, as well as the existence of ‘VIP users’ who are exempted from certain platform rules.

This is just the latest big scandal to hit the tech company, which has also been under scrutiny for its poor handling of user data in the Cambridge Analytica scandal (2018), accusations of inciting genocide in Myanmar (2018), the spread of misinformation and ‘fake news’ during the 2016 US Presidential Election, and consumer anger over its ‘mood’ manipulation experiment (2014).

According to Haugen, what stands out is the organisation’s decision not to act on its own findings. She told the US Senate last month that this decision was part of a continued pattern of Facebook opting for profits over consumer wellbeing: “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.”


In the recent past, Facebook has been accused of spreading fake news during the 2016 US presidential election. Photo: Shutterstock

So, what does this latest ethical controversy mean for the Silicon Valley monolith, and will it affect how Facebook - and other big tech companies - conduct themselves?


Why are consumers and policymakers upset with Facebook?  

According to Rob Nicholls, Associate Professor in Regulation and Governance at the UNSW Business School, one of the reasons policymakers and consumers alike are riled up at Facebook’s behaviour is what they perceive as a continued lack of responsiveness to regulatory intervention. Combined with the separate issue – that the company was sitting on a trove of information showing it knew it was causing harm and chose not to act – this makes for a poor look.

“They didn’t use the information they had to change the approach,” he says. “You’ve got something that looks not dissimilar to big tobacco. Yes, there are harm issues, but no, we’re not going to talk about it.” 

Finding this was hidden is particularly alarming to regulators because it raises the question of what else we don’t know, A/Prof. Nicholls says. There is then a ‘piling-on effect’ by other concerned parties.

“In Australia, the ACCC’s chairman, Rod Sims, is saying, ‘Well, why isn’t Facebook negotiating with SBS under the news media bargaining code?’ All of these things tend to compound when the company is front and centre in the news.”


We are now more aware of technology shortfalls and dangers  

A/Prof. Nicholls says there is now more awareness by consumers and policymakers about the shortcomings of Facebook and other big tech companies, with the COVID-19 pandemic leading to a stronger realisation about how much we rely on Facebook’s platforms (which include Instagram and WhatsApp). 

He also points out that in Australia, Facebook’s taking down of Australia-based pages during parliamentary debate over the News Media Bargaining Code has also drawn attention to the lack of competition Facebook faces and its excess of control in the space.

“If Facebook can take down our Health Service website, which we’re relying on to get information on the pandemic or could cut off 1800 Respect because Facebook thinks it’s a news media business ... Suddenly there’s that realisation of how ingrained social media companies are.” 


A/Prof. Nicholls says consumers and policymakers have become more aware of how much we rely on Facebook and its platforms. Photo: Unsplash / Joshua Hoehne

“Ten years ago, [the leadership mantra of] Facebook was ‘Move fast and break things!’ That’s great. You’re a small start-up,” says A/Prof. Nicholls. “But Facebook today, your revenue is $US85 billion and you’re part of the day-to-day life of the vast majority of people. You actually have to take some responsibility.” 


Meta-bad timing: get your house in order first  

To add fuel to the fire of public debate, Facebook announced a rebranding of its parent company from ‘Facebook’ to ‘Meta’. Mark Zuckerberg’s company would now shift its focus to working on creating a ‘metaverse’ – a 3D space where people can interact in immersive online environments. 

While hiring thousands to create a metaverse might have been an exciting announcement at other times for the company formerly known as Facebook, the public and lawmakers were quick to mock and question the move as one that swept systemic issues under the carpet. 

Meta as in “we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society… for profit!” https://t.co/jzOcCFaWkJ — Alexandria Ocasio-Cortez (@AOC) October 28, 2021
Instead of Facebook changing its policies to protect children and our democracy it has chosen to simply change its name to Meta. Different name, same threat to our nation. — (((DeanObeidallah))) (@DeanObeidallah) October 28, 2021

“It should have sounded like a really great success story: Facebook is going to invest in 10,000 new tech jobs in Europe,” says A/Prof. Nicholls. 

“But instead, the answer has been, ‘Just a minute, if you’re prepared to spend that much money, why not fix the problems in the services that you currently provide, with far fewer jobs?’” 

He also points out how the identified issue of body image might in fact be amplified in the metaverse. 

“Are you now going to create a metaverse where, ‘We’re going to deal with body shape because in your virtual reality, you’ll look perfect’? 

“That is going to cause distress in itself.” 


Will the debate affect regulation around tech companies?  

According to A/Prof. Nicholls, we could be seeing a moment where agencies and government come together to protect consumers.  

“It’s possible, but it takes a bit of political will,” he says. “In the past, Facebook, Google, and to a lesser extent, Amazon, Microsoft, and Apple have each been able to delay and deflect, partly because it takes a big political decision.” 


Moves to curtail the power of Facebook and its platforms might be politically popular. Photo: Unsplash / Brett Jordan

A/Prof. Nicholls says the difference now is that there is the realisation that a political decision on this is going to be popular, making it far more likely that it will be taken. But he also points out it’s not likely to be a ‘break them up’ kind of solution. 

“Part of the reason that Facebook does touch so many of us on such a regular basis is what they offer is really useful,” he says. “The issue that flows from that is, how do you make sure that the business operates effectively without stifling innovation in that area?” 


How can other AI-based companies avoid this situation?  

While A/Prof. Nicholls does not expect to see policy changes from big tech companies (“because policy change is an admission”), he does expect that we will see some practical changes by other companies that consider the issues faced by Facebook. 

“Ultimately, if you do some research and you find really bad outcomes, if you act on those, then you’re not going to have a problem a little bit later of a whistleblower pointing out that you’ve suppressed that research,” he says, referring to Haugen. 

There is a simple way to avoid this issue though, A/Prof. Nicholls points out. By acting ethically as a business, one can avoid these problems and achieve good business outcomes without having to change too much. And for businesses that are built around algorithms, this means ensuring you’ve embedded ethical approaches throughout the AI design. 

“Ethical behaviour can be built into the design of AI. Some of that actually means that you end up with better outcomes from your AI because you ensure that you actually think about it first.” 

“It really doesn’t matter whether you’re a small start-up, doing analysis of big data, or you’re a very big platform-based company. Actually, thinking about those design processes is really important.” 
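Nicholls’s advice to embed ethics throughout the AI design can be made concrete with a release gate: before a model ships, it must pass harm-oriented checks alongside the usual accuracy checks, so that bad findings trigger action rather than a buried report. The sketch below is a minimal illustration of that idea in Python; the metric names and thresholds are invented assumptions, not any company’s actual policy.

```python
# Illustrative pre-deployment "ethics gate". The thresholds and metric
# names are invented assumptions, not any real company's policy.

from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float                 # standard model-quality metric
    worst_group_accuracy: float     # accuracy on the worst-served user group
    harmful_amplification: float    # share of flagged content the model up-ranks

POLICY = {
    "min_accuracy": 0.85,
    "min_worst_group_accuracy": 0.80,   # limits disparate performance
    "max_harmful_amplification": 0.02,  # caps up-ranking of flagged content
}

def release_gate(report: EvalReport) -> list[str]:
    """Return a list of blocking failures; an empty list means cleared to ship."""
    failures = []
    if report.accuracy < POLICY["min_accuracy"]:
        failures.append("accuracy below threshold")
    if report.worst_group_accuracy < POLICY["min_worst_group_accuracy"]:
        failures.append("worst-group accuracy below threshold")
    if report.harmful_amplification > POLICY["max_harmful_amplification"]:
        failures.append("harmful-content amplification above threshold")
    return failures

report = EvalReport(accuracy=0.91, worst_group_accuracy=0.74, harmful_amplification=0.05)
blockers = release_gate(report)
if blockers:
    # The model does not ship; the findings drive action instead of a shelved report.
    print("BLOCKED:", "; ".join(blockers))
```

The design point is the default path: a failed check blocks the launch automatically, which is the opposite of the pattern Haugen described, where internal research surfaced harms but the product shipped anyway.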


Disclaimer: Associate Professor Rob Nicholls and his team have received funding from Facebook for his research into models, people, and their interaction with Facebook.  


‘History will not judge us kindly’: Facebook employees rip Zuckerberg in leaked messages


Facebook employees say Mark Zuckerberg’s obsession with growth has overridden ethical concerns and allowed hate speech and incitements to violence to spread unchecked, internal messages leaked to media outlets show. 

“History will not judge us kindly,” one staffer reportedly wrote on the day of the Jan. 6 Capitol riots, which were organized partially through Facebook.

“We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control,” wrote another Facebook staffer, according to the Atlantic.

Facebook employees who had spent months raising concerns about the social media company’s engagement-obsessed algorithms pushing users toward extreme and conspiratorial content felt betrayed by a lack of action from the company, according to the report. 

In August 2020, one employee working on Facebook’s “integrity” team griped that any proposed design changes that would reduce users’ exposure to extreme content like the QAnon conspiracy theory had been consistently sidelined in favor of increasing user engagement.   
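The tension the integrity employee describes, an engagement objective that rewards whatever provokes the strongest reactions, is easy to illustrate. The toy Python sketch below is not Facebook’s actual ranking code: every field name, weight, and number is invented. It shows how a pure engagement score surfaces the most inflammatory item, and how a hypothetical integrity penalty, the kind of design change the employee says was sidelined, reorders the feed.

```python
# Toy illustration of engagement-ranked feeds (NOT Facebook's actual code).
# All field names, weights, and numbers are invented for this sketch.

posts = [
    {"id": "local-news", "clicks": 120, "comments": 10,  "shares": 5,   "flagged_score": 0.05},
    {"id": "cute-dog",   "clicks": 300, "comments": 40,  "shares": 60,  "flagged_score": 0.01},
    {"id": "conspiracy", "clicks": 500, "comments": 220, "shares": 180, "flagged_score": 0.90},
]

def engagement_score(post):
    # Pure engagement objective: every reaction counts the same,
    # regardless of why people are reacting.
    return post["clicks"] + 2 * post["comments"] + 3 * post["shares"]

def integrity_adjusted_score(post, penalty_weight=0.8):
    # Hypothetical "integrity" variant: down-rank content that
    # classifiers flag as likely misinformation or borderline-harmful.
    return engagement_score(post) * (1 - penalty_weight * post["flagged_score"])

print([p["id"] for p in sorted(posts, key=engagement_score, reverse=True)])
# -> ['conspiracy', 'cute-dog', 'local-news']

print([p["id"] for p in sorted(posts, key=integrity_adjusted_score, reverse=True)])
# -> ['cute-dog', 'conspiracy', 'local-news']
```

Nothing about the penalized variant is technically hard; in this framing, which item tops the feed is a decision about the objective function, not a limitation of the technology.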


The employee accused the company of being “willing to act only after things had spiraled into a dire state,” according to the Atlantic. 

“Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB,” the employee added. “It has been painful to observe.” 


Other Facebook employees, meanwhile, accused Zuckerberg of personally intervening to protect political figures who violated the company’s content moderation rules, according to the Financial Times.

In one such case in 2019, Facebook moderators took down a video that falsely said that abortions are “never medically necessary.” 


After Republican politicians including Texas Sen. Ted Cruz complained about the move, Zuckerberg was personally involved with Facebook’s decision to put the video back up, according to the outlet. 

Zuckerberg also allegedly gave in to the Vietnamese government’s censorship demands last year to avoid losing an estimated $1 billion in annual revenue from the country, insiders told the Washington Post.

Ahead of an election in Vietnam last year, Zuckerberg personally decided to censor Facebook posts from anti-government pages because he argued going offline in the country would do more harm to free speech, the outlet said.

Between July and December last year, Facebook took down more than 2,200 posts by Vietnamese users — compared to just 834 in the first six months of 2020.


Many Facebook employees also reportedly believe the company unfairly makes exceptions to its false news policy when dealing with certain publishers. 

Moves to remove “repeat offenders” from Facebook were dropped after being “influenced by input from Public Policy,” an employee said in a December 2020 memo reported by the Financial Times, in reference to Facebook’s powerful Washington, DC-based arm led by former Bush administration staffer Joel Kaplan. 

In particular, Facebook gives special treatment to conservative publishers including Breitbart, Diamond and Silk, Charlie Kirk and PragerU, the employee said. 


Facebook did not immediately reply to requests for comment on the Atlantic and Financial Times reports. 

Company spokesperson Joe Osborne told the Financial Times: “At the heart of these stories is a premise which is false. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie. The truth is we’ve invested $13bn and have over 40,000 people to do one job: keep people safe on Facebook.” 

Monday’s articles, which come on the same day Facebook is set to report third-quarter earnings, are part of a larger series of news stories based on documents leaked under embargo by whistleblower Frances Haugen . 


Other documents leaked by Haugen reportedly show that Facebook has allegedly misled investors and masked slowing growth among critical demographics like young users in the US , as well as failed to crack down on human trafficking on the site. 

Facebook shares were trading up 0.4 percent at $325.75 on Monday morning — but have tanked more than 15 percent since the beginning of September as Haugen has shared internal company documents with reporters and called on lawmakers to regulate the tech giant. 



What if Facebook goes down? Ethical and legal considerations for the demise of big tech

Introduction

Facebook 1 has, in large parts of the world, become the de facto online platform for communication and social interaction. In 2017, the main platform reached the milestone of two billion monthly active users (Facebook, 2017), and global user growth since then has continued, reaching 2.6 billion in April 2020 (Facebook, 2020). Moreover, in many countries Facebook has become an essential infrastructure for maintaining social relations (Fife et al., 2013), commerce (Aguilar, 2015) and political organisation (Howard and Hussain, 2013). However, recent changes in Facebook’s regulatory and user landscape stand to challenge its pre-eminent position, making its future demise if not plausible, then at least less implausible over the long term.

Indeed, the closure of an online social network would not in itself be unprecedented. Over the last two decades, we have seen a number of social networks come and go — including Friendster, Yik Yak and, more recently, Google+ and Yahoo Groups. Others, such as MySpace, continue to languish in a state of decline. Although Facebook is arguably more resilient to the kind of user flight that brought down Friendster (Garcia et al., 2013; Seki and Nakamura, 2016; York and Turcotte, 2015) and MySpace (boyd, 2013), it is not immune to it. These precedents are important for understanding Facebook’s possible decline. Critically, they demonstrate that the closure of Facebook’s main platform does not depend on the exit of all users; Friendster, Google+ and others continued to have users when they were sold or shut down.

Furthermore, as we examine below, any user flight that precedes Facebook’s closure would probably be geographically asymmetrical, meaning that the platform remains a critical infrastructure in some (less profitable) regions, whilst becoming less critical in others. For example, whilst Friendster started to lose users rapidly in North America, its user numbers were simultaneously growing, exponentially, in South East Asia. It was eventually sold to a Filipino internet company and remained active as a popular social networking and gaming platform until 2015. 2 The closure of Yahoo! GeoCities, the web hosting service, was similarly asymmetrical: although most sites were closed in 2009, the Japanese site (which was managed by a separate subsidiary) remained open until 2019. 3 It is also important to note that, in several of these cases, a key reason for user flight was the greater popularity of another social network platform: namely, MySpace (Piskorski and Knoop, 2006) and Facebook (Torkjazi et al., 2009). Young, white demographics, in particular, fled MySpace to join Facebook (boyd, 2013).

These precedents suggest that changing user demographics and preferences, and competition from other social networks such as Snapchat or a new platform (discussed further below), could be key drivers of Facebook’s decline. However, given Facebook’s pre-eminence as the world’s largest social networking platform, the ethical, legal and social repercussions of its closure would be far graver than in these precedents. Indeed, the demise of a global online communication platform such as Facebook could have catastrophic social and economic consequences for innumerable communities that rely on the platform on a daily basis (Kovach, 2018), as well as for the users whose personal data Facebook collects and stores.

Despite the high stakes involved in Facebook’s demise, there is little research or public discourse addressing the legal and ethical consequences of such a scenario. The aim of this article is therefore to foster dialogue on the subject. Pursuing this goal, the article provides an overview of the main ethical and legal concerns that would arise from Facebook’s demise and sets out an agenda for future research in this area. First, we identify the headwinds buffeting Facebook, and outline the most plausible scenarios in which the company — specifically, its main platform — might close down. Second, we identify four key ethical stakeholders in Facebook’s demise based on the types of harm to which they are susceptible. We further examine how various scenarios might lead to these harms, and whether existing legal frameworks are adequate to mitigate them. Finally, we provide a set of recommendations for future research and policy intervention.

It should be noted that the legal and ethical considerations discussed in this article are by no means limited to the demise of Facebook, social media, or even “Big Tech”. In particular, to the extent that most sectors in today’s economy are already, or will soon become, data-driven and data-rich, these considerations, many of which relate to the handling of Facebook’s user data, are ultimately relevant to the failure or closure of any company handling large volumes of personal data. Likewise, as human interaction becomes increasingly mediated by social networks and Big Tech platforms, the legal and ethical considerations that we address are also relevant to the potential demise of other major platforms, such as Google or Twitter. However, focusing on the demise of Facebook — one of the most data-rich social networks in today’s economy — offers a fertile case study for the analysis of these critical legal and ethical questions.

Why and how could Facebook close down?

This article necessarily adopts a long-term perspective, responding to issues that could significantly harm society in the long run if we do not begin to address them today. As outlined in the introduction, Facebook is currently in robust health: aggregate user growth on the main platform is increasing, and it continues to be highly profitable, with annual revenue and income increasing year-over-year (Facebook, 2017; 2018). As such, it is unlikely that Facebook would shut down anytime soon. However, as anticipated, the rapidly changing socio-economic and regulatory landscape in which Facebook operates could lead to a reversal in its priorities and fortunes over the long term.

Facebook faces two major headwinds. First, the platform is coming under increasing pressure from regulators across the world (Gorwa, 2019). In particular, tighter data privacy regulation in various jurisdictions (notably, the EU General Data Protection Regulation [GDPR] 4 and the California Consumer Privacy Act [CCPA]) 5 could severely inhibit the company’s ability to collect and analyse user data. This in turn could significantly reduce the value of the Facebook platform to advertisers, who are drawn to its granular, data-driven insights about user behaviour and thus higher ad-to-sales conversion rates through targeted advertising. In turn, this would undermine Facebook’s existing business model, whereby advertising generates over 98.5% of Facebook’s revenue (Facebook, 2018), the vast majority of it earned on its main platform. More boldly, regulators in several countries are attempting to break up the company on antitrust grounds (Facebook, 2020, p. 64), which could lead, inter alia, to the reversal of its acquisitions of Instagram and WhatsApp — key assets, the loss of which could adversely affect Facebook’s future growth prospects.

Secondly, the longevity of the main Facebook platform is under threat from shifting social and social media trends. Regarding the latter, social media usage is gradually moving away from public, web-based platforms in favour of mobile-based messaging apps, particularly within younger demographics. Indeed, in more saturated markets, such as the US and Canada, Facebook’s penetration rate has declined (Facebook, 2020, pp. 31-33), particularly amongst teenagers who tend to favour mobile-only apps such as Snapchat, Instagram and TikTok (Piper Jaffray, 2020). Although Facebook and Instagram still have the largest share of the market in terms of time spent on social media, this has declined since 2015 in favour of Snapchat (Furman, 2019, p. 26). They also face growing competition from international players such as WeChat with over 1 billion users (Tencent, 2019), as well as social media apps with strong political leanings, such as Parler, which are growing in popularity. 6

A sustained movement of active users away from the main Facebook platform would inevitably impact the preferences of advertisers, who rely on active users to generate engagement for their clients. More broadly, Facebook’s business model is under threat from a growing social and political movement against the company’s perceived failure to remove misinformation and hateful content from its platform. The advertiser boycott in the wake of the Black Lives Matter protests highlights the commercial risks to Facebook of failing to respond adequately to the social justice concerns of its users and customers. 7 As we have seen in the context of both Facebook as well as precedents such as Friendster, due to reverse network effects, any such exodus of users and/or advertisers can occur suddenly and escalate rapidly (Garcia et al., 2013; Seki and Nakamura, 2016; Cannarella and Spechler, 2014).

Collectively, these socio-technical and regulatory developments may force Facebook to shift its strategic priorities away from being a public networking platform (and monetising user data through advertising on the platform), to a company focused on private, ephemeral messaging, monetised through commerce and payment transactions. Indeed, recent statements from Facebook point in this direction:

I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever. This is the future I hope we will help bring about. We plan to build this the way we've developed WhatsApp: focus on the most fundamental and private use case -- messaging -- make it as secure as possible, and then build more ways for people to interact on top of that. (Zuckerberg, 2019)

Of course, it does not automatically follow that Facebook would shut down its main platform, particularly if it still has sufficient active users remaining on it, and it bears little cost from keeping it open. On the other hand, closure becomes more likely once a sufficient number of active users and advertisers (but, importantly, not necessarily all) have also left the platform, especially in its most profitable regions. In this latter scenario, it is conceivable that Facebook would consider shutting down the main platform’s developer API (Application Programming Interface — the interface between Facebook and client software) instead of leaving it open and vulnerable to a security breach. Indeed, it was in similar circumstances that Google recently closed the consumer version of its social network Google+ (Thacker, 2018). 
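As a technical matter, an orderly API shutdown of this kind has fairly standard mechanics: while the API still works, responses advertise a retirement date (the Sunset header standardized in RFC 8594); after that date, the endpoint returns a permanent 410 Gone rather than lingering unpatched. The Python sketch below is a generic illustration, not Facebook’s actual Graph API behaviour; the dates, URLs, and handler names are invented.

```python
# Minimal sketch of an orderly API shutdown using Python's standard library.
# Endpoint, dates, and URLs are invented for illustration; the Sunset
# header is standardized in RFC 8594.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

SUNSET = datetime(2025, 6, 30, tzinfo=timezone.utc)  # announced shutdown date

class ApiSunsetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if datetime.now(timezone.utc) < SUNSET:
            # Phase 1: keep serving, but warn clients the API is going away.
            self.send_response(200)
            self.send_header("Sunset", SUNSET.strftime("%a, %d %b %Y %H:%M:%S GMT"))
            self.send_header("Link", '<https://example.com/api-shutdown>; rel="sunset"')
            self.end_headers()
            self.wfile.write(b'{"data": "still served, migrate soon"}')
        else:
            # Phase 2: permanently gone -- clients get an unambiguous signal
            # rather than an abandoned, unpatched endpoint.
            self.send_response(410)  # 410 Gone
            self.end_headers()
            self.wfile.write(b'{"error": "This API was retired on 2025-06-30."}')

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ApiSunsetHandler).serve_forever()
```

Third-party developers then receive a machine-readable signal to migrate well before the endpoint disappears, which matters most for the dependent communities discussed below.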

In a more extreme scenario, Facebook Inc. could fail altogether and enter into a legal process such as corporate bankruptcy (insolvency): either a reorganisation that seeks to rescue the company as a going concern, typically by restructuring and selling off some of its assets; or liquidation, in which the company is wound down and dissolved entirely. Such a scenario, however, should be regarded as highly unlikely for the foreseeable future. Although we highlight some of the legal and ethical considerations arising from a Facebook insolvency scenario, the non-insolvent discontinuation or closure of the main platform shall be our main focus henceforth. It should be noted that, as a technical matter, this closure could take various forms. For example, Facebook could close the platform but preserve users’ profiles; alternatively, it could close the platform and destroy, or sell parts or all of its user data etc. Whilst our focus is on the ethical and legal consequences of Facebook’s closure at the aggregate level, we address technical variations in the specific form that this closure could take to the extent that it impacts upon our analysis. 

Key ethical stakeholders and potential harms

In this section, we identify four key ethical stakeholders who could be harmed 8 by Facebook’s closure. These stakeholders are: dependent communities, in particular the socio-economic and media ecosystems that depend on Facebook to flourish; existing users, (active and passive) individuals, as well as groups, whose data are collected, analysed and monetised by Facebook, and stored on the company’s servers; non-users, particularly deceased users whose data continues to be stored and used by Facebook, and who will represent hundreds of millions of Facebook profiles in only a few decades; and future generations, who may have a scientific interest in the Facebook archive as a historical resource and cultural heritage.

We refer to these categories as ethical stakeholders, rather than user types, because our categorisation is based on the unique types of harm that each would face in a Facebook closure, not their way of using the platform. That is, the categorisation is a tool to conduct our ethical analysis, rather than corresponding to some already existing groups of users. A single individual may for instance have mutually conflicting interests in her capacity as an existing Facebook user, a member of a dependent community, and as a future non-user. Thus, treating her as a single unit, or part of a particular user group, would reduce the ethical complexity of the analysis. As such, the interests of the stakeholders are by no means entirely compatible with one another, and there will unquestionably be conflicts of interest between them.

Furthermore, for the purposes of the present discussion, we do not intend to rank the relative value of the various interests; there is no internal priority to our analysis, although this may become an important question for future research. We also stress that our list is by no means exhaustive. Our focus is on the most significant ethical stakeholders who have an interest in Facebook’s closure and would experience unique harms due to the closure of a company that is both a global repository of personal data, and the world’s main communication and social networking infrastructure. As such, we exclude traditional, economic stakeholders from the analysis — such as employees, directors, shareholders and creditors. While these groups certainly have stakes in Facebook’s potential closure, there is nothing that significantly distinguishes their interests in the closure of a company like Facebook from the closure of any other (multinational) corporation. This also means that we exclude stakeholders that could benefit from Facebook’s closure, such as commercial competitors, or governments struggling with Facebook’s influence on elections and other democratic processes. Likewise, we refrain from assessing the relative overall (un)desirability of Facebook’s closure.

Dependent communities

The first key ethical stakeholders are the ‘dependent communities’, that is, communities and industries that have developed around the Facebook platform and now (semi-)depend on its existence to flourish. 9

Over the last decade, Facebook has become a critical economic engine and a key gateway to the internet as such (Digital Competition Expert Panel, 2019). The growing industry of digitally native content providers, from major news outlets such as Huffington Post and Buzzfeed, to small independent agencies, is sometimes entirely dependent on exposure through Facebook. For example, the most recent change in Facebook’s News Feed algorithm had devastating consequences for this part of the media industry — some news outlets allegedly lost over 50% of their traffic overnight (Nicholls et al., 2018, p. 15). If such a small change in its algorithms could lead to the economic disruption of an entire industry, the wholesale closure of the main Facebook platform would likely cause significant economic and societal damage on a global scale, particularly where it occurs rapidly and/or unexpectedly, such that news outlets and other dependent communities do not have sufficient time to migrate to other web platforms.
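A back-of-the-envelope sketch makes the scale of that dependence concrete. All numbers below are invented for illustration; the point is only that for an outlet receiving most of its visits via the feed, even a moderate down-weighting of publisher content wipes out close to half of its total traffic overnight.

```python
# Toy model of publisher dependence on a single platform's feed weight.
# All numbers are invented for illustration.

def total_traffic(direct_visits, fb_referrals, feed_weight):
    # Referral traffic is assumed to scale linearly with the weight the
    # feed-ranking algorithm gives publisher content (a simplification).
    return direct_visits + fb_referrals * feed_weight

before = total_traffic(direct_visits=20_000, fb_referrals=80_000, feed_weight=1.0)
after  = total_traffic(direct_visits=20_000, fb_referrals=80_000, feed_weight=0.4)

print(f"before: {before:,.0f}, after: {after:,.0f}, drop: {1 - after / before:.0%}")
# before: 100,000, after: 52,000, drop: 48%
```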

To be clear, our main concern here is not with the individual media outlets, but with communities that are dependent on a functioning Facebook-based media ecosystem. While the sudden closure of one, or even several, media outlets may not pose a threat to this ecosystem, a sudden breakdown of the entire ecosystem would have severe consequences. For instance, many of the content providers reliant on exposure through Facebook are located in developing countries, in which Facebook has become almost synonymous with the internet, acting as the primary source of news (Mirani, 2015), amongst other functions. Given the primacy of the internet to public discourse in today’s world, it goes without saying that, for these communities, Facebook effectively is the digital public sphere, and hence a central part of the public sphere overall. A notable example is Laos, a country digitised so recently that its language (Lao) has not yet been properly indexed by Google (Kittikhoun, 2019). This lacuna is filled by Facebook, which has established itself not only as the main messaging service and social network in Laos, but effectively also as the web as such.

The launch of Facebook’s Free Basics platform, which provides free access to Facebook services in less developed countries, has further increased the number of communities that depend solely on Facebook. According to the Free Basics website, 10 some 100 million people who would not otherwise have been connected are now using the services offered by the platform. As such, there are many areas and communities that now depend on Facebook in order to function and are thus susceptible to considerable harm were the platform to shut down. Note that this harm is not reducible to the individuals using Free Basics, but is a concern for the entire community, including members not using Facebook. As an illustrative example, consider the vital role played by Facebook and other social media platforms in disseminating information about the COVID-19 pandemic and keeping many communities connected during it. In a time of crisis, communities with a large dependency on a single platform become particularly vulnerable.

Of course, whether the closure of Facebook’s main platform harms these communities depends on the reasons for the closure and the manner in which the platform closes down (sudden death vs slow decline). If closure is accompanied by the voluntary exodus of these communities, for example to a different part of the Facebook Inc. group (e.g., Messenger or Instagram), or to a third-party social network, they would arguably incur limited social or economic costs. Furthermore, it is entirely possible to imagine a scenario in which the main Facebook platform is shut down because it is unprofitable to the company as a whole, or does not align with the company’s strategic priorities, yet remains systemically important for a number of dependent communities. These communities could still use and depend on the platform, yet not be valuable or lucrative enough for Facebook Inc. to justify keeping it open. Indeed, many of the dependent communities that we have described are located in regions of the world that are the least profitable for the company (certainly under an advertising-driven revenue model).

This raises the question of how dependent communities should be protected in the event of Facebook’s demise. Existing legal frameworks governing Facebook make no special provision for its systemically important functions. As such, we propose that a new concept of ‘systemically important technological institutions’ (‘SITIs’) — drawing on the concept of ‘systemically important financial institutions’ (‘SIFIs’) — be given more serious consideration in managing the life and death of global communications platforms, such as Facebook, that provide critical societal infrastructure. This proposal is examined further in the second part of this article.

Existing users

‘Existing users’ refers broadly to any living person or group of people who uses or has used the main Facebook platform and continues to maintain a Facebook profile or page. This includes both daily and monthly active users, as well as users who are not actively using the platform but still have a profile where their information is stored (including ‘de-activated’ profiles). Inevitably, there is an overlap between this set of stakeholders and ‘dependent communities’: the latter includes the former. Our main focus here is on ethical harms that arise at the level of the individual user, by virtue of their individual profiles or group pages, rather than the systemic and societal harms outlined above.

It is tempting to think that the harm to these users in the event of Facebook’s closure is limited to the loss of the value that they place on having access to Facebook’s services. However, this would be an incomplete conclusion. Everything a user does on the network is recorded and becomes part of Facebook’s data archive, which is where the true potential for harm lies. That is, the danger stems not only from losing access to the Facebook platform and the various services it offers, but from future harms that users (active and passive) are exposed to as they lose control over their personal data. Any violation of the trust that these users place in Facebook with respect to the use of their personal data threatens to compromise user privacy, dignity and self-identity (Floridi, 2011). Naturally, these threats also exist today. However, as long as the platform remains operational, users have a clear idea of who they can hold accountable for the processing of their data. Should the platform be forced to close, or worse still, sell off user data to a third party, this accountability will likely vanish.

The scope for harm to existing users upon Facebook’s closure depends on how Facebook continues to process user data. If the data are deleted (as occurred, for example, in the closure of Yahoo! Groups), 11 users could lose access to information — particularly photos and conversations — that is part of their identity, personal history and memory. Although Facebook does allow users to download much of their intentionally provided data to a hard drive — in the EU, implementing the right to data portability 12 — this does not encompass users’ conversations and other forms of interactive data. For example, Facebook photos in which a user has been tagged, but which were uploaded by another user, are not portable, even though these photos arguably contain the first user’s personal data. Downloading data is also an impractical option for the hundreds of millions of users accessing the platform only via mobile devices (DataReportal, 2019) that lack adequate storage and processing capacity. Personal archiving is an increasingly constitutive part of a person’s sense of self, but, as noted by Acker and Brubaker (2014), there is a tension between how users conceive of their online personal archives and the corporate, institutional reality of these archives.
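
To make the portability gap concrete, consider a minimal sketch of what a user can actually recover from a downloaded archive. The directory layout, file names and JSON keys below are invented for illustration — real exports differ — but the structural point holds: the export can only ever contain what the user herself uploaded.

```python
import json
from pathlib import Path

# Hypothetical layout of a downloaded archive. Real exports differ; the
# structural point is what an export can and cannot contain: only material
# the user herself provided, not e.g. photos uploaded by others in which
# she is merely tagged.
ARCHIVE_DIR = Path("facebook-export")

def collect_own_media(archive_dir: Path) -> list[dict]:
    """Gather the user's own uploads listed in the export's JSON files."""
    items: list[dict] = []
    for manifest in archive_dir.glob("**/*.json"):
        with open(manifest, encoding="utf-8") as f:
            data = json.load(f)
        if isinstance(data, dict):
            # Only records the user intentionally provided appear here.
            items.extend(data.get("photos", []) + data.get("posts", []))
    return items

if __name__ == "__main__":
    media = collect_own_media(ARCHIVE_DIR)
    print(f"{len(media)} items recovered; tagged photos uploaded by others "
          "are absent from the export by design.")
```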

On the other hand, it is highly plausible that Facebook would instead want to retain these data to train its machine learning models and to provide insights on users of other Facebook products, such as Instagram and Messenger. In this scenario, the risk to existing users is that they lose control over how their information is used, or at least fail to understand how and where it is being processed (especially where these users are not active on other Facebook products, such as Instagram). Naturally, involuntary user profiling is a major concern with Facebook as it stands. The difference in the case of closure is that many users will likely not even be aware of the possibility of being profiled. If Facebook goes down, these users would no longer be able to view their data, leading many to believe that the data have in fact been destroyed. Yet a hypothetical user may, for instance, create an Instagram profile in 2030 and still be profiled by her lingering Facebook data, despite Facebook (the main platform) being long gone by then. Worse still, her old Facebook data may be used to profile other users who are demographically similar to her, without her (let alone their) informed consent or knowledge.
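
The kind of profiling by proxy described above can be illustrated with a toy similarity model. Nothing here reflects Facebook’s actual systems — the features, threshold and label are invented — but the mechanism, scoring a new user against retained legacy profiles and transferring inferred attributes, is the standard lookalike pattern.

```python
import math

# Toy illustration only: retained data from a closed platform can still
# score "lookalike" users on a sibling product. All features, values and
# the inferred label below are invented for the example.
legacy_profile = {"age": 34, "urban": 1.0, "pet_owner": 1.0, "gym_visits": 3.0}
inferred_label = "premium_pet_food_buyer"   # inferred from her old activity

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity over the features two profiles share."""
    keys = u.keys() & v.keys()
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

new_user = {"age": 36, "urban": 1.0, "pet_owner": 1.0, "gym_visits": 2.0}

# A high similarity lets the inferred label be transferred to the new user,
# who never consented to -- or even knew about -- the legacy data set.
if cosine(legacy_profile, new_user) > 0.95:
    print(f"profile new user as: {inferred_label}")
```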

Existing laws in the EU offer limited protection for users’ data in these scenarios. If Facebook intended to delete the data, under EU data protection law it would likely need to notify users and seek their consent for the further processing of their data, 13 offering them the opportunity to retrieve their data before deletion (see the closures of Google+ 14 and Yahoo! Groups). If, on the other hand, Facebook opted to retain and continue processing user data in order to provide the (other) services set out under its terms and conditions, it is unlikely that it would be legally required to obtain fresh consent from users — although, in reality, the company would likely still offer users the option to retrieve their data. Independently, users in the EU could also exercise their rights to data portability and erasure 15 to retrieve or delete their data.

In practice, however, the enforcement and realisation of these rights is challenging. Given that user data are commingled across the Facebook group of companies, and moreover have ‘velocity’ — an individual user’s data will likely have been repurposed and reused multiple times, together with the data of other users — it is unlikely that all of the data relating to an individual user can or will be identified and permanently ‘returned’. Likewise, given that user data are commingled, objection by an individual user to the transfer of their data is unlikely to be effective — their data will still be transferred with the data of other users who consent to the transfer. As previously mentioned, the data portability function currently offered by Facebook is also limited in scope.

Notwithstanding these practical challenges, a broader problem with the existing legal framework governing user data is that it is almost entirely focused on the rights of individual users. It offers little recognition or protection for the rights of groups — for example, Facebook groups formed around sports, travel, music or other shared interests — and thus limited protection against group-level ethical harm within the Facebook platform (i.e., when the ethical patient is a multi-agent system, not necessarily reducible to its individual parts [Floridi, 2012; Simon, 1995]).

This problem is further exacerbated by so-called ‘ad hoc groups’, i.e., groups that are formed only algorithmically (Mittelstadt, 2017) and may not correspond to any organic community. For example, ‘dog owners living in Wales aged 38–40 that exercise regularly’ (Mittelstadt, 2017, p. 477) is a hypothetical, algorithmically formed group. Whereas many organically formed groups are already acknowledged by privacy and discrimination laws, or at least have the organisational means to defend their interests (e.g., people with a certain disability or sexual orientation), ad hoc algorithmic groups often lack organisational means of resistance.
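
A short sketch makes the point: an ad hoc group is nothing more than the output of a query, assembled and acted upon without its members ever knowing they belong to it. The records below are invented; the filter reproduces Mittelstadt’s hypothetical example.

```python
# Sketch of how an "ad hoc" algorithmic group can be assembled from user
# records. The group mirrors Mittelstadt's hypothetical example; the
# records themselves are invented.
users = [
    {"id": 1, "age": 39, "region": "Wales", "dog_owner": True,  "weekly_exercise": 4},
    {"id": 2, "age": 52, "region": "Wales", "dog_owner": True,  "weekly_exercise": 1},
    {"id": 3, "age": 38, "region": "Wales", "dog_owner": True,  "weekly_exercise": 3},
]

ad_hoc_group = [
    u for u in users
    if u["region"] == "Wales"
    and 38 <= u["age"] <= 40
    and u["dog_owner"]
    and u["weekly_exercise"] >= 3   # "exercises regularly"
]

# Members 1 and 3 now share a group-level fate (say, an insurance or ad
# score) without any organic community, representation, or even awareness.
print([u["id"] for u in ad_hoc_group])   # -> [1, 3]
```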

Non-users

The third key ethical stakeholders are those who never, or no longer, use Facebook, yet are still susceptible to harms resulting from its demise. This category includes a range of disparate sub-groups, including individuals who do not have an account, but whose data Facebook nevertheless collects and tracks from apps or websites that embed its services (Hern, 2018). Facebook uses these data, inter alia, to target the individual with ads encouraging them to join the platform (Baser, 2018). Similarly, the non-user category includes individuals who may be tracked by proxy, for example by analysing data from their relatives or close network (more on this below). A third sub-group is minors who feature in photos and other types of data uploaded to Facebook by their parents (so-called ‘sharenting’).

The most significant type of non-users, however, are deceased users, i.e., those who used the platform in the past but have since passed away. Although this may currently seem a rather niche concern, the deceased user group is expected to grow rapidly over the next couple of decades. As shown by Öhman and Watson (2019), Facebook will soon host hundreds of millions of deceased profiles on its servers. 16 This sub-group is of special interest since, unlike living non-users, who generally enjoy at least some legal rights to privacy and data protection (as outlined above), the deceased do not qualify for protection under existing data protection laws. 17 The lack of protection for deceased data subjects is a pressing concern even without Facebook closing. 18 Facebook has no legal obligation to seek their consent (nor that of their representatives) before deleting, or otherwise further processing, users’ data after death (although Denmark, Spain and Italy are exceptions). 19 Moreover, even if Facebook tried to seek the consent of deceased users’ representatives, it would often be unable to, given that users do not always appoint a ‘legacy contact’ to represent them posthumously.
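
The scale of the problem can be gestured at with a deliberately crude accumulation model in the spirit of Öhman and Watson (2019). All parameters below are invented placeholders — the published projections rest on age-structured, country-level mortality data — but even a flat death rate shows how quickly deceased profiles mount up.

```python
# Invented placeholder parameters; the published projections use
# age-structured mortality data per country, not a flat rate.
living_users = 2.5e9        # assumed living profiles at the start
annual_mortality = 0.008    # assumed flat death rate across the user base
net_new_users = 5e7         # assumed net sign-ups per year

deceased_total = 0.0
for year in range(2020, 2050):
    deaths = living_users * annual_mortality
    deceased_total += deaths
    living_users += net_new_users - deaths

# Even these toy numbers accumulate to hundreds of millions of profiles.
print(f"~{deceased_total / 1e6:.0f} million deceased profiles by 2050")
```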

The closure of the platform, however, opens up an entirely new level of ethical harm, particularly in the (unlikely but not impossible) case of bankruptcy or insolvency. Such a scenario would likely force Facebook to sell off its assets to the highest bidder. But whereas the sale or transfer of living users’ data requires their informed consent under the GDPR and EU insolvency law, there is no corresponding protection for the sale of deceased users’ data in insolvency, such as a requirement to obtain the consent of their next of kin. 20 Moreover, there are no limitations on who could purchase these data and for what purposes. For example, a deceased person’s adversaries could acquire their Facebook data in order to compromise their privacy or tarnish their reputation posthumously. Incidents of this kind have already been reported on Twitter, where the profiles of deceased celebrities have been hacked and used to spread propaganda. 21 The profiles of deceased users may also remain commercially valuable and attractive to third-party purchasers — for instance, by providing insights on living associates of the deceased, such as their friends and relatives. Just as in genealogy — where one individual’s DNA also contains information about their children, siblings and parents — one person’s data may be used to predict another’s behaviour or dispositions (see Creet [2019] on the relationship between genealogy websites and big pharma).

In sum, the demise of a platform with Facebook’s global and societal significance is not only a concern for those who use, or have used it directly, but also for individuals who are indirectly affected by its omnipresence in society.

Future generations

It is also important to consider indirect harms arising from Facebook’s potential closure due to missed opportunities. The most important stakeholders to consider in this respect are future generations, who, much like deceased users, are seldom directly protected in law. By ‘future generations’ we refer mainly to future historians and sociologists studying the origins and dynamics of digital society, but also to the general public and their ability to access their shared digital cultural heritage.

It is widely accepted that the open web holds great cultural and historical value (Rosenzweig, 2003), and several organisations — perhaps most notably the Internet Archive, through its Wayback Machine 22 — as well as researchers (Brügger and Schroeder, 2017) are working to preserve it. Personal data, however, have received less attention. Although (most) individual user data may be relatively inconsequential for historical, scientific and cultural purposes, the aggregate Facebook data archive amounts to a digital artefact of considerable significance. The personal digital heritage of each Facebook user is, or will become, part of our shared digital cultural heritage (Cameron and Kenderdine, 2007). As Varnado writes:

Many people save various things in digital format, and if they fail to alert others of and provide access to those things, certain memories and stories of their lives could be lost forever. This is a loss not only for a decedent’s legacy and successors but also for society as a whole. […] This is especially true of social networking accounts, which may be the principal—and eventually only—source for future generations to learn about their predecessors (Varnado, 2014, p. 744).

Not only is Facebook becoming a significant digital cultural artefact, it is arguably the first such artefact of truly global proportions. Indeed, Facebook is by far the largest archive of human behaviour in history. As such, it can legitimately be said to hold what Appiah (2006) calls ‘cosmopolitan value’ — that is, something significant enough to be part of the narrative of our species. Given its global reach, and thus its interest to all of humankind (present and future), this record can even be thought of as a form of future public good (Waters, 2002, p. 83), without which we risk falling into a ‘digital dark age’ (Kuny, 1998; Smit et al., 2011) — a state of ignorance of our digital past.

The concentration of digital cultural heritage in a single (privately controlled and corporate) platform is in and of itself problematic, especially in view of the risk of Facebook monopolising private and collective history (Öhman and Watson, 2019). These socio-political concerns are magnified in the context of the platform’s demise, for such a scenario poses a threat not only to the control or appraisal of digital cultural heritage, but also to its very existence — whether by fragmenting the archive, thus destroying its global significance, or by destroying it entirely for lack of commercial or other interest in preserving it.

These risks are most acute in an insolvency scenario, where, as discussed above, the data are more likely to be deleted or sold to third parties, including by being split up among a number of different data controllers. Although such an outcome may be viewed as a positive development in terms of decentralising Facebook’s power (Öhman and Watson, 2019), it also risks dividing and therefore diluting the global heritage and cosmopolitan value held within the platform. Worse still would be a scenario in which cosmopolitan value is destroyed due to a lack of, or divergent, commercial interests in purchasing Facebook’s data archives, or indeed the inability to put a price on these data due to the absence of agreed-upon accounting rules for a company’s (big) data assets (Lyford-Smith, 2017). The recent auction of Cambridge Analytica’s assets in administration, where the highest bid received for the company’s business and intellectual property rights (assumed to include the personal data of Facebook users) was a mere £1, is a sobering illustration of these challenges. 23

However, our concerns are not limited to an insolvency scenario. In the more plausible scenario of Facebook closing the shutters on one of its products, such as the main platform website and app, the archive assembled by the product would no longer be accessible as such to either the public or future generations, even though the data and insights would likely continue to exist and be utilised within the Facebook Inc. group of companies ( inter alia , to provide insights on users of other products such as Instagram and Messenger).

Recommendations

The stakeholders presented above, and the harms to which they are exposed, make up the ethical landscape in which legal and policy measures to manage Facebook’s closure must be shaped. Although it is premature to propose definitive solutions, in this section we offer four broad recommendations for future policy and research in this area. These recommendations are by no means intended as complete solutions to ‘the’ problem of Big Tech closure, but rather as a starting point for further debate.

Develop a regulatory framework for Systemically Important Technological Institutions.

As examined earlier, many societies around the world have become ever-more dependent on digital communication and commerce through Big Tech platforms such as Facebook and would be harmed by their (disorderly) demise. Consider, for instance, the implications of a sudden breakdown of these platforms in times of crisis like the COVID-19 pandemic. As such, there are compelling reasons to regulate these platforms as systemically important institutions. By way of analogy to the SIFI concept — that is, domestic or global financial institutions and financial market infrastructures whose failure is anticipated to have adverse consequences for the rest of the financial system and the wider economy (FSB, 2014) — we thus propose that a new concept of systemically important technological institution, or ‘SITI’, be given more serious consideration. 

The regulatory framework for SITIs should draw on existing approaches to regulating SIFIs, critical national infrastructures and public utilities, respectively. In the insolvency context, drawing upon best practices for SIFI resolution, the SITI regime could include measures to fast-track insolvency proceedings in order to facilitate the orderly wind-down or reorganisation of a failing SITI in a way that minimises disruption to the (essential) services that it provides, thus mitigating harm to dependent communities. This might include resolution powers vested in a regulatory body authorised to supervise SITIs (this could be an existing body, such as the national competition or consumer protection/trade agency, or a newly established ‘Tech’ regulator) — including the power to mandate a SITI, such as Facebook, to continue to provide ‘essential services’ to dependent communities — for example, access to user groups or messaging apps — or else facilitate the transfer of these services to an alternative provider. 

In this way, SITIs would be subject to public obligations similar to those imposed on regulated public utilities, such as water and electricity companies — as “private companies that control infrastructural goods” (Rahman, 2018) — in order to prevent harm to dependent communities. 24 Likewise, the SITI regime should include obligations for failure planning (by way of analogy to ‘resolution and recovery planning’ under the SIFI regime). In the EU, this regime should also build on the regulatory framework for ‘essential services’, specifically essential ‘digital service providers’, under the EU NIS (Network and Information Systems) Directive, 25 which focuses on managing and mitigating cyber security risks to critical national infrastructures.

Whilst the fine print of the SITI regulatory regime requires further deliberation — indeed, the analogy with SIFIs and public utilities has evident limitations — we hope this article will help stimulate discussion to that end.

Strengthen the legal mechanisms for users to control their own data in cases of platform insolvency or closure.

Existing data protection laws are insufficient to protect Facebook users from the ethical harms that could arise from the handling of their data in the event of the platform’s closure. As we have highlighted, the nature of ‘Big Data’ is such that even if users object to the deletion or sale of their data, and request their return, Facebook would be unable as a practical matter to fully satisfy that request. As a result, users face ethical harm where their data are used against their will, in ways that could undermine their privacy, dignity and self-identity.

This calls for new data protection mechanisms that give Facebook users better control over their data. Potential solutions include creating new regulatory obligations for data controllers to segregate user data, in particular as between different Facebook subsidiaries (e.g., the main platform and Instagram), where data are currently commingled. 26 This would allow users to more effectively retrieve their data were Facebook to shut down and could offer a more effective way of protecting the interests of ad hoc ‘algorithmic’ groups (Mittelstadt, 2017). However, to the extent that segregating data in this way undermines the economies of scale that facilitate Big Data analysis, it could have the unintended effect of reducing the benefits that users gain from the Facebook platform, inter alia through personalised recommendations. 
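
What controller-level segregation could look like can be sketched in a few lines, assuming (hypothetically) one store per subsidiary and a hard gate on cross-controller reads. The store and function names are illustrative, not any real Facebook architecture.

```python
# Sketch of controller-level data segregation: each subsidiary keeps its
# own store, and cross-controller reads require an explicit legal basis.
# Store names and the access rule are invented for illustration.
stores: dict[str, dict[str, list]] = {"facebook_main": {}, "instagram": {}}

def write(controller: str, user_id: str, record: dict) -> None:
    """Each controller writes only to its own store."""
    stores[controller].setdefault(user_id, []).append(record)

def read(controller: str, requester: str, user_id: str) -> list:
    """Block commingling: only the owning controller may read by default."""
    if requester != controller:
        raise PermissionError(
            f"{requester} may not read {controller}'s store without a legal basis")
    return stores[controller].get(user_id, [])

write("facebook_main", "u1", {"post": "hello"})
print(read("facebook_main", "facebook_main", "u1"))   # allowed
try:
    read("facebook_main", "instagram", "u1")          # commingling blocked
except PermissionError as e:
    print(e)
```

The trade-off discussed above is visible in the sketch: on closure, ‘returning’ a user’s data reduces to handing back one store’s contents, but any cross-product analytics would require an explicit, auditable exception.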

Additionally, or alternatively, further consideration should be given to the concept of ‘data trusts’, as a bottom-up form of data governance and control by users (Delacroix & Lawrence, 2019). Under a data trust structure, Facebook would act as a trustee for user data, holding them on trust for the user(s) — as the settlor(s) and beneficiary(ies) of the trust — and managing and sharing the data in accordance with their instructions. Moreover, a plurality of trusts can be developed, for example, designed around specified groups of aggregated data (in order to leverage the economies of scope and scale of large, combined data sets). As a trustee, Facebook would be subject to a fiduciary duty to only use the data in ways that serve the best interests of the user (see further Balkin, 2016). As such, a data trust structure could provide a stronger legal mechanism for safeguarding the wishes of users with respect to their data as compared to the existing standard of ‘informed consent’. Another possible solution involves decentralising the ownership and control of user data, for example using distributed ledger technology. 27  
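
The fiduciary logic of a data trust can likewise be sketched briefly. The class and method names below are illustrative, not a real API; the point is that under a trust structure the default answer to a novel processing purpose — including a sale in insolvency — is ‘no’ unless the settlors’ instructions cover it.

```python
from dataclasses import dataclass, field

# Minimal sketch of the data-trust idea (Delacroix & Lawrence, 2019):
# the platform, as trustee, may only process data for purposes covered by
# the instructions of the users who settled their data into the trust.
# Names and purposes are invented for illustration.

@dataclass
class DataTrust:
    trustee: str
    permitted_purposes: set[str] = field(default_factory=set)

    def authorise(self, purpose: str) -> bool:
        """Fiduciary gate: processing outside the settlors' instructions fails."""
        return purpose in self.permitted_purposes

trust = DataTrust(trustee="Facebook",
                  permitted_purposes={"service_provision", "aggregate_research"})

for purpose in ("service_provision", "sale_to_third_party"):
    verdict = "permitted" if trust.authorise(purpose) else "refused (breach of trust)"
    print(f"{purpose}: {verdict}")
```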

Strengthen legal protection for the data and privacy of deceased users.

Although the interests of non-users as a group need to be given serious consideration, we highlight the privacy of deceased users as an area in particular need of protection. We recommend that more countries follow the lead of Denmark in implementing legislation that, at least to some degree, protects the profiles of deceased users from being arbitrarily sold, mined and disseminated in the case of Facebook’s closure. 28 Such legislation could follow several different models. Perhaps the most intuitive option is simply to enshrine the privacy rights of deceased users in data protection law, such as (in the EU) the GDPR. This can be designed either as a personal (but time-limited) right (as in Denmark) or as a right bestowed upon next of kin (as in Spain and Italy). It could also be shaped by extending copyright law protection (Harbinja, 2017), or take place within what Harbinja (2013, p. 20) calls a ‘human rights-based regime’ (see also Bergtora Sandvik, 2020), i.e., as a universal and inviolable right. Alternatively, it could be achieved by designating companies such as Facebook as ‘information fiduciaries’ (Balkin, 2016), pursuant to which they would have a duty of care to act in the best interests of users with respect to their data, including posthumously.

The risk of ethical harm to deceased users or customers in the event of corporate demise is not limited to the closure of Facebook, or of Big Tech (platforms) generally. Although Facebook will likely be the single largest holder of deceased profiles in the 21st century, other social networks (LinkedIn, WeChat, YouTube etc.) are also likely to host hundreds of millions of deceased profiles within only a few decades. And as more sectors of the economy become digitised, any company holding customer data will eventually hold a large volume of data relating to deceased subjects. As such, developing more robust legal protection for the data privacy rights of the deceased is important for mitigating the ethical harms of corporate demise, broadly defined.

However, for obvious reasons, deceased data subjects have little political influence and are thus unlikely to become a top priority for policy makers. Moreover, any legislative measures to protect their privacy are likely to be adopted at national or regional levels first, even though the problem is global in nature. A satisfactory legislative response may therefore take significant time and political effort to develop. In the meantime, Facebook should be encouraged to specify in its terms of service how it intends to handle deceased users’ data upon closure, and in particular to commit not to sell those data to a third party where this would not be in the best interests of said users. While this private approach may not have the same effectiveness and general applicability as national or regional legislation protecting deceased user data, it would provide an important first step.

Create stronger incentives for Facebook to share insights and preserve historically significant data for future generations.

Future generations cannot directly safeguard their interests and thus it is incumbent on us to do so. Given the societal, historical and cultural interest in preserving, or at least averting the complete destruction of Facebook’s cultural heritage, stronger incentives need to be created for Facebook to take responsibility and begin acknowledging the global historical value of its data archives.

A promising strategy would be to protect Facebook’s archive as a site of global digital heritage, drawing inspiration from the protection of physical sites of global cultural heritage, such as through UNESCO World Heritage protected status. 29 Pursuant to Article 6.1 of the Convention Concerning the Protection of the World Cultural and Natural Heritage (UNESCO, 1972), States Parties acknowledge that, while the sovereignty of the states on whose territory cultural heritage is situated must be respected, national heritage may also constitute world heritage, which the ‘international community’ has both an interest in and a duty to preserve. Meanwhile, Article 4 stipulates that:

Each State Party to this Convention recognizes that the duty of ensuring the identification, protection, conservation, presentation and transmission to future generations of the cultural and natural heritage […] situated on its territory, belongs primarily to that State. It will do all it can to this end, to the utmost of its own resources and, where appropriate, with any international assistance and co-operation, in particular, financial, artistic, scientific and technical, which it may be able to obtain. (UNESCO, 1972, Art. 4)

A digital version of this label may similarly entail acknowledgement by data controllers of, and a pledge to preserve, the cosmopolitan value of their data archive, while allowing them to continue using the archive. However, in contrast to physical sites and material artefacts, which fall under the control of sovereign states, the most significant digital artefacts in today’s world are under the control of Big Tech companies, like Facebook. As such, there is reason to consider a new international agreement between corporate entities, in which they pledge to protect and conserve the global cultural heritage on their platforms. 30

However, bestowing the label of global digital heritage does not resolve the question of access to this heritage. Unlike Twitter, which in 2010 attempted to donate its entire archive to the Library of Congress, 31 Facebook holds an archive that arguably contains more sensitive, personal information about its users. Moreover, these data offer the company more of a competitive advantage than Twitter’s do (Twitter accounts are public, in contrast to Facebook, where many profiles are visible only to friends of the user). These considerations could reduce Facebook’s readiness to grant public access to its archives. Nevertheless, safeguarding the existence of Facebook’s records and acknowledging their historical significance remains an important first step towards making them accessible to future generations.

It goes without saying that the interests of future generations will at times conflict with the interests of the other three ethical stakeholders we have identified. As Mazzone (2012, p. 1660) points out, ‘the societal interest in preserving postings to social networking sites for future historical study can be in tension with the privacy interests of individual users.’ Indeed, Facebook’s data are proprietary, and any interventions must respect its rights in the data as well as the privacy rights of users. Yet, the mere fact that there are conflicts of interests and complexities does not mean that the interests of future generations ought to be neglected altogether.

For the foreseeable future, Facebook’s demise remains a high-impact, low-probability event. However, mapping out the legal and ethical landscape for such an eventuality, as we have done in this article, allows society to better manage the fallout should this scenario materialise. Moreover, our analysis helps to shed light on lower-impact but higher-probability scenarios. Companies regularly fail and disappear — increasingly taking with them troves of customer-user data that receive only limited protection and attention under existing law. The legal and ethical harms that we have identified in this article, many of which flow from the use of data following Facebook’s closure, are thus equally relevant to the closure of other companies, albeit on a smaller scale. Regardless of which data-rich company is the next to go, we must make sure that an adequate governance framework is in place to minimise the systemic and individual damage. Our hope is that this article will help kickstart a debate and further research on these important issues.

Acknowledgements

We are deeply grateful to Luciano Floridi, David Watson, Josh Cowls, Robert Gorwa, Tim R Samples, and Horst Eidenmüller for valuable feedback and input. We would also like to add a special thanks to reviewers James Meese and Steph Hill, and editors Frédéric Dubois and Kris Erickson for encouraging us to further improve this manuscript.

References

Acker, A., & Brubaker, J. R. (2014). Death, memorialization, and social media: A platform perspective for personal archives. Archivaria, 77, 2–23. https://archivaria.ca/index.php/archivaria/article/view/13469

Aguilar, A. (2015). The global economic impact of Facebook: Helping to unlock new opportunities [Report]. Deloitte. https://www2.deloitte.com/uk/en/pages/technology-media-and-telecommunications/articles/the-global-economic-impact-of-facebook.html

Aplin, T., Bentley, L., Johnson, P., & Malynicz, S. (2012). Gurry on breach of confidence: The protection of confidential information . Oxford University Press.

Appiah, K. A. (2006). Cosmopolitanism: Ethics in a world of strangers . Penguin.

Balkin, J. (2016). Information fiduciaries and the first amendment. UC Davis Law Review , 49 (4), 1183–1234. https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

Baser, D. (2018, April 16). Hard questions: What data does Facebook collect when I’m not using Facebook, and why? [Blog post]. Facebook Newsroom . https://newsroom.fb.com/news/2018/04/data-off-facebook/

Bergtora Sandvik, K. (2020). Digital dead body management (DDBM): Time to think it through. Journal of Human Rights Practice , uaa002 . https://doi.org/10.1093/jhuman/huaa002

boyd, d. (2013). White flight in networked publics? How race and class shaped American teen engagement with MySpace and Facebook. In L. Nakamura & P. Chow-White (Eds.), Race after the internet. Routledge.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian . https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Cannarella, J., & Spechler, J. (2014). Epidemiological modeling of online social network dynamics. arXiv. https://arxiv.org/pdf/1401.4208.pdf

Competition & Markets Authority. (2020). Online Platforms and Digital Advertising (Market Study) [Final report]. Competition & Markets Authority. https://assets.publishing.service.gov.uk/media/5efc57ed3a6f4023d242ed56/Final_report_1_July_2020_.pdf

Creet, J. (2019). Data mining the deceased: Ancestry and the business of family [Documentary]. https://juliacreet.vhx.tv/

DataReportal. (2019). Global digital overview. https://datareportal.com/

Delacroix, S., & Lawrence, N. D. (2019). Disturbing the ‘One size fits all’ approach to data governance: Bottom-up. International Data Privacy Law , 9 (4), 236–252. https://doi.org/10.1093/idpl/ipz014

Di Cosmo, R., & Zacchiroli, S. (2017). Software heritage: Why and how to preserve software source code. iPRES 2017 – 14th international conference on digital preservation . 1–10.

Cameron, F., & Kenderdine, S. (Eds.). (2007). Theorizing digital cultural heritage: A critical discourse. MIT Press.

Facebook. (2017). Form 10-K annual report for the Fiscal Period ended December 31, 2017 .

Facebook. (2018). Form 10-K annual report for the fiscal period ended December 31, 2018.

Facebook. (2019, June 18). Coming in 2020: Calibra [Blog post]. Facebook Newsroom . https://about.fb.com/news/2019/06/coming-in-2020-calibra/

Facebook. (2020). Form 10-Q quarterly report for the quarterly period ended March 31, 2020 .

Federal Trade Commission. (2019, July 24). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook [Press Release]. News & Events . https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions

Financial Stability Board. (2014). Key attributes of effective resolution regimes for financial institutions. https://www.fsb.org/wp-content/uploads/r_141015.pdf

Floridi, L. (2011). The informational nature of personal identity. Minds and Machines , 21 (4), 549–566. https://doi.org/10.1007/s11023-011-9259-6

Floridi, L. (2012). Distributed morality in an information society. Science and Engineering Ethics , 19 (3), 727–743. https://doi.org/10.1007/s11948-012-9413-4

Furman, J. (2019). Unlocking digital competition [Report]. Digital Competition Expert Panel. https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel

Garcia, D., Mavrodiev, P., & Schweitzer, F. (2013). Social resilience in online communities: The autopsy of Friendster. Proceedings of the First ACM Conference on Online Social Networks (COSN ’13) . https://doi.org/10.1145/2512938.2512946.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society , 22 (6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Harbinja, E. (2013). Does the EU data protection regime protect post-mortem privacy and what could be the potential alternatives? Scripted , 10 (1). https://doi.org/10.2966/scrip.100113.19

Harbinja, E. (2014). Virtual worlds—A legal post-mortem account. Scripted , 11 (3). https://doi.org/10.2966/scrip.110314.273

Harbinja, E. (2017). Post-mortem privacy 2.0: Theory, law, and technology. International Review of Law, Computers & Technology , 31 (1), 26–42. https://doi.org/10.1080/13600869.2017.1275116

Howard, P. N., & Hussain, M. M. (2013). Democracy’s fourth wave? Digital media and the Arab Spring. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199936953.001.0001

Information Commissioner’s Office. (2019, October). Statement on an agreement reached between Facebook and the ICO [Statement]. News and Events . https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/10/statement-on-an-agreement-reached-between-facebook-and-the-ico

Kittikhoun, A. (2019). Mapping the extent of Facebook’s role in the online media landscape of Laos [Master’s dissertation.]. University of Oxford, Oxford Internet Institute.

Kuny, T. (1998). A digital dark ages? Challenges in the preservation of electronic information. International Preservation News, 17(May), 8–13.

Lyford-Smith, D. (2017). Data as an asset. ICAEW. https://www.icaew.com/technical/technology/data/data-analytics-and-big-data/data-analytics-articles/data-as-an-asset

Marcus, D. (2020, May). Welcome to Novi [Blog post]. Facebook Newsroom . https://about.fb.com/news/2020/05/welcome-to-novi/

Mazzone, J. (2012). Facebook’s afterlife. North Carolina Law Review , 90 (5), 1643–1685.

Mirani, L. (2015). Millions of Facebook users have no idea they’re using the internet. Quartz . https://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/

MIT Technology Review. (2013). An autopsy of a dead social network. https://www.technologyreview.com/s/511846/an-autopsy-of-a-dead-social-network/

Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philos. Technol , 30 , 475–494. https://doi.org/10.1007/s13347-017-0253-7

Brügger, N., & Schroeder, R. (Eds.). (2017). The web as history: Using web archives to understand the past and the present. UCL Press.

Öhman, C., & Floridi, L. (2018). An ethical framework for the digital afterlife industry. Nature Human Behaviour . https://doi.org/10.1038/s41562-018-0335-2

Öhman, C. J., & Watson, D. (2019). Are the dead taking over Facebook? A Big Data approach to the future of death online. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719842540

Open Data Institute. (2018, July 10). What is a Data Trust? [Blog post]. Knowledge & opinion blog . https://theodi.org/article/what-is-a-data-trust/#1527168424801-0db7e063-ed2a62d2-2d92

Piper Sandler. (2020). Taking stock with teens, spring 2020 survey . Piper Sandler. http://www.pipersandler.com/3col.aspx?id=5956

Piskorski, M. J., & Knoop, C.-I. (2006). Friendster (A) [Case study]. Harvard Business School.

Rahman, K. S. (2018). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo Law Review , 39 (5), 1621–1689. http://cardozolawreview.com/wp-content/uploads/2018/07/RAHMAN.39.5.2.pdf

Rosenzweig, R. (2003). Scarcity or abundance? Preserving the past in a digital era. The American Historical Review , 108 (3), 735–762. https://doi.org/10.1086/ahr/108.3.735

Scarre, G. (2013). Privacy and the dead. Philosophy in the Contemporary World, 19(1), 1–16.

Seki, K., & Nakamura, M. (2016). The collapse of the Friendster network started from the center of the core. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) , 477–484. https://doi.org/10.1109/ASONAM.2016.7752278

Simon, T. W. (1995). Group harm. Journal of Social Philosophy , 26 (3), 123–138. https://doi.org/10.1111/j.1467-9833.1995.tb00089.x

Smit, E., Hoeven, J., & Giaretta, D. (2011). Avoiding a digital dark age for data: Why publishers should care about digital preservation. Learned Publishing , 24 (1), 35–49. https://doi.org/10.1087/20110107

Stokes, P. (2015). Deletion as second death: The moral status of digital remains. Ethics and Information Technology , 17 (4), 1–12. https://doi.org/10.1007/s10676-015-9379-4

Taylor, J. S. (2005). The myth of posthumous harm. American Philosophical Quarterly , 42 (4), 311–322. https://www.jstor.org/stable/20010214

Tencent. (2019). Q2 earnings release and interim results for the period ended June 30, 2019 .

Thacker, D. (2018, December 10). Expediting Changes to Google+ [Blog post]. Google . https://blog.google/technology/safety-security/expediting-changes-google-plus/

Torkjazi, M., Rejaie, R., & Willinger, W. (2009). Hot today, gone tomorrow: On the migration of MySpace users. Proceedings of the 2nd ACM Workshop on Online Social Networks - WOSN ’09 , 43. https://doi.org/10.1145/1592665.1592676

U. K. Government. (2019). Online harms [White Paper]. U.K. Government, Department for Digital, Culture, Media & Sport; Home Department. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

UNESCO. (1972). Convention concerning the protection of the world cultural and natural heritage. Adopted by the General Conference at its seventeenth session, Paris, 16 November 1972.

Varnado, A. S. S. (2014). Your digital footprint left behind at death: An illustration of technology leaving the law behind. Louisiana Law Review , 74 (3), 719–775. https://digitalcommons.law.lsu.edu/lalrev/vol74/iss3/7

Warren, E. (2019). Here’s How We Can Break Up Big Tech [Medium Post]. Team Warren . https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c

Waters, D. (2002). Good archives make good scholars: Reflections on recent steps toward the archiving of digital information. In The state of digital preservation: An international perspective (pp. 78–95). Council on Library and Information Resources. https://www.clir.org/pubs/reports/pub107/waters/

York, C., & Turcotte, J. (2015). Vacationing from facebook: Adoption, temporary discontinuance, and readoption of an innovation. Communication Research Reports , 32 (1), 54–62. https://doi.org/10.1080/08824096.2014.989975

Zuckerberg, M. (2019, March 6). A privacy-focused vision for social networking [Post]. https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/

1. Unless otherwise stated, references to ‘Facebook’ are to the main platform (comprising News Feed, Groups and Pages, inter alia, both on the mobile app and the website), and do not include the wider group of companies that comprise Facebook Inc., namely WhatsApp, Messenger, Instagram, Oculus (Facebook, 2018) and Calibra (recently rebranded as Novi Financial) (Facebook, 2019; Marcus, 2020).

2. See https://www.washingtonpost.com/news/the-intersect/wp/2015/02/12/8-throwback-sites-you-thought-died-in-2005-but-are-actually-still-around/

3. See https://qz.com/1408120/yahoo-japan-is-shutting-down-its-website-hosting-service-geocities/

4. Regulation (EU) 2016/679 < https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG> .

5. California Legislature Assembly Bill No. 375 < https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375 >

6. See < https://www.politico.com/news/2020/07/06/trump-parler-rules-349434 >

7. See < https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html >.

8. We adopt an inclusive definition of ethical harm (henceforth just ‘harm’) as any encroachment upon personal or collective and legitimate interests such as dignity, privacy, personal welfare, and freedom.  

9. Naturally, not all communities with a Facebook presence can be included in this category. For example, the lost marketing opportunities for large multinational corporations such as Coca Cola Inc., due to the sudden demise of Facebook, cannot be equated with the harm to a small-scale collective of sole traders in a remote area (e.g., a local craft or farmers’ market) whose only exposure to customers is through the platform. By ‘dependent communities’ we thus refer only to communities whose ability to flourish and survive may be threatened by Facebook’s sudden demise.

10. See https://info.internet.org/en/impact/

11. See https://help.yahoo.com/kb/understand-data-downloaded-yahoo-groups-sln35066.html

12. See Art 20 GDPR. 

13. See Art 4(2) GDPR (defining ‘processing’ to include, inter alia , ‘erasure or destruction’ of personal data).

14. See Google Help (2019), ‘Shutting down Google+ for consumer (personal) accounts on April 2, 2019’ https://support.google.com/plus/answer/9195133?hl=en-GB. Facebook states in its data policy that ‘We store data until it is no longer necessary to provide our services and Facebook Products or until your account is deleted — whichever comes first’, which might suggest that users provide their consent to future deletion of their data when they first sign up to Facebook. However, it is unlikely that this clause substitutes for the requirement to obtain specific and unambiguous consent to data processing, for specific purposes — including deletion of data — under the GDPR (see Articles 4(11) and 6(1)(a)).

15. See Art 17 GDPR.

16. Facebook’s policy on deceased users has changed somewhat over the years, but the current approach is to allow next of kin to either memorialise or permanently delete the account of a confirmed deceased user (Facebook, n.d.). Users are also encouraged to select a ‘legacy contact’, that is, a second Facebook user who will act as a custodian in the event of their demise. Although these technical solutions have proven to be successful on an individual, short-term level, several long-term problems remain unsolved. In particular, what happens when the legacy contact themselves dies? For how long will it be economically viable to store hundreds of millions of deceased profiles on the servers?

17. However, note that the information of a deceased subject can continue to be protected by the right to privacy under Art 8 of the European Convention on Human Rights, and the common law of confidence with respect to confidential personal information (although the latter is unlikely to apply to data processing by Facebook) (see generally Aplin et al., 2012).

18. Several philosophers and legal scholars have recently argued for the concept of posthumous privacy to be recognised (see Scarre [2013, p. 1], Stokes [2015] and Öhman & Floridi [2018]).

19. Recital 27 of the GDPR clearly states that ‘[t]his Regulation does not apply to the personal data of deceased persons’, but at the same time allows member states to make additional provision for this purpose. Accordingly, a few European countries have included privacy rights for deceased data subjects in their implementing laws (for instance, Denmark, Spain and Italy — see https://www.twobirds.com/en/in-focus/general-data-protection-regulation/gdpr-tracker/deceased-persons). However, aside from these limited cases, existing data protection for the deceased is alarmingly sparse across the world.

20. Under EU insolvency law, any processing of personal data (for example, deletion, sale or transfer of the data to a third-party purchaser) must comply with the GDPR (see Art 78 (Data Protection) of EU Regulation 2015/848 on Insolvency Proceedings (recast)). However, see endnote 17 with regard to the right to privacy and confidentiality.

21.  See https://www.alaraby.co.uk/english/indepth/2019/2/25/saudi-trolls-hacking-dead-peoples-twitter-to-spread-propaganda

22.  See https://archive.org/web/

23. See Administrator’s Progress Report (2018) https://beta.companieshouse.gov.uk/company/09375920/filing-history . However, consumer data (for example, in the form of customer loyalty schemes) has been valued more highly in other corporate insolvencies (see for example, the Chapter 11 reorganisation of the Caesar’s Entertainment Group https://digital.hbs.edu/platform-digit/submission/caesars-entertainment-what-happens-in-vegas-ends-up-in-a-1billion-database/ ).

24. There is a broader call, from a competition (antitrust) policy perspective, to regulate Big Tech platforms as utilities on the basis that these platforms tend towards natural monopoly (see, e.g. Warren, 2019). Relatedly, the UK Competition and Markets Authority has recommended a new ‘pro-competition regulatory regime’ for digital platforms, such as Google and Facebook, that have ‘strategic market status’ (Furman, 2019; CMA, 2020). The measures proposed under this regime — such as facilitating interoperability between social media platforms— would also help to mitigate the potential harms to Facebook’s ethical stakeholders due to its closure.

25. Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union OJ L 194, 19.7.2016.

26. Facebook has stated that financial data collected by Calibra/Novi, the digital wallet for the Libra cryptocurrency, will not be shared with Facebook or third parties without user consent (Facebook, 2019). The segregation of user data was the subject of a ruling by the German Competition Authority; however, this was overturned on appeal by Facebook (and is now being appealed by the competition authority — the original decision is here: https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html).

27. A related imperative is to clarify the financial accounting rules for the valuation of (Big) data assets, including in an insolvency context.

28. See s 2(5) of the Danish Data Protection Act 2018 < https://www.datatilsynet.dk/media/7753/danish-data-protection-act.pdf >

29. UNESCO has previously initiated a project to preserve source code (see Di Cosmo and Zacchiroli, 2017).

30.  This could be formal or informal, for example in the vein of the ‘Giving Pledge’ — a philanthropic initiative to encourage billionaires to give away the majority of their wealth in their lifetimes (see < https://givingpledge.org/> ).

31.  Although the initiative has ceased to operate as originally planned, it remains one of the best examples of large scale social media archiving (see https://www.npr.org/sections/thetwo-way/2017/12/26/573609499/library-of-congress-will-no-longer-archive-every-tweet ). 




Internet Policy Review is an open access and peer-reviewed journal on internet regulation.

peer reviewed

Not peer reviewed.

SCImago Journal & Country Rank

email Subscribe NEWSLETTER

7 Ethical Design Examples To Make Facebook Better For Everyone

Ethical design involves much more than the General Data Protection Regulation (GDPR). A lot of people have realized that big organizations like Facebook should act more responsibly; otherwise, they can face consequences in court over the way they treat users' data. But we should go further!

Originally published by  UX Studio.

Ethical design > GDPR compliance

“When a company like Facebook improves the experience of its products, it’s like the massages we give to Kobe beef: they’re not for the benefit of the cow but to make the cow a better product. In this analogy, you are the cow.” –  Aral Balkan, ethical designer, founder of Ind.ie

This dire quote speaks to the misalignment between the values product creators pursue and those of their customers. It happens because businesses tend to under-weight user values against the myriad ways they monetize user attention and data.

Many different perspectives are involved in the complex topic of ethical design. Aral Balkan published the Ethical Design Manifesto, centered around a pyramid similar to Maslow's. Instead of human needs, he focuses on design and what is needed to make it ethical.

The Ethical Design pyramid, from Aral Balkan's manifesto

Respecting rights should be a fundamental feature of any product, so it is reasonable to start there; even with the GDPR, there is a long way to go in that section of the pyramid. Nevertheless, we should also discuss the top two sections, as digital products rapidly devour our time and attention.

The context of ethical design

One of the largest organizations fighting for a better digital world, the Center for Humane Technology, says:

“What began as a race to monetize our attention is now eroding the pillars of our society: mental health, democracy, social relationships, and our children.”  

Wow, heavy… According to the Center, we have come to a possible turning point in how we approach product design.

In order for that to happen in a preferable way, they outline four tasks:

  • Inspiring companies
  • Applying political pressure
  • Creating a cultural awakening
  • Engaging employees

Read more about the way forward  here .

The purpose of this article

We at UX studio care deeply about the world around us and the people in it. Our vision is to create products that satisfy client and user needs, but we also strive to do this in an ethical way. This article seeks to contribute to two of those points.

First, engage employees. You likely work close to product development in some sort of tech company. We would like you to advocate for ethical design decisions and non-extraction-based business models.

Second, we want to contribute towards a cultural awakening. Like the GDPR did, we can raise awareness, and people can make changes and stop our digital products from abusing our most vulnerable human instincts.

Next follow some practical examples focusing on Facebook, the platform on everybody's mind, to move the discussion from the abstract towards something real and relatable.

These examples apply to Facebook, but many would carry over to lots of other social platforms as well. We do not want to anger the design gods in California: we know that a robust UX process and extremely conscious decision-making, based on complex and well-researched information, inform their decisions. We also understand that Facebook, after all, is a profit-driven enterprise.

Nevertheless, we won’t avoid critical ideas that can help us better understand where we stand and where we are going as an industry.

Respecting human time and effort

The regulations on visual merchandising in the tobacco industry set a good example from a different field.


As many smokers can attest, the big bold texts about harmful effects, the ugly pictures obligatory in most European countries, and the removal of excessive branding can get very annoying. They are aesthetically unpleasant and generally diminish the experience of smoking one way or another. Think of the examples below as a self-imposed digital version, based on the same principle.

Applying these ethical design ideas would create that discomfort, not in the users but in the companies that “consume” user data, time, and attention while relying too heavily on extraction-based business models. In turn, it would force these companies to find new ways to satisfy users.


Profit versus ethics

The utopian situation described above presumes that users become much more conscious of their rights and possibilities, and that companies try hard to serve these increasingly conscious users in a social context where the real price of ethically poor products and services is well understood. This resembles the realization among product-makers that users will simply turn off notifications altogether if bombarded with unnecessary ones.

These self-regulations should make financial sense, after all. After seeing the 2018  WWDC  in June, with all the attention paid to promoting focus, conscious usage, and other related values, this utopia might lie closer than you think.

Example: Facebook's ethical redesign?

Facebook can serve as an example of how simple features could make a big difference, and of the ease (or difficulty) of applying ethical design principles even to something as ethically loaded as a social media network.

Here we present some mockups we created at UX studio, the reasoning why they do not exist yet, and why they should. They should facilitate a conversation about the larger context of ethical design as well as some specifics. Here are some of our ideas:

  • Time spent indicator and usage data
  • Newsfeed filtering options
  • Killing the infinite scroll
  • Why I see what I see
  • Raising saved item prominence
  • Notification grouping (system and human)
  • Notification mute scheduling / “mutification” / notification bundles
  • “Chathics” – ethical messaging features

1. “Time spent” indicator and usage data

One of Facebook's biggest traps is that it “forces” users to waste time.

The below example could fall into the second section of the Ethical Design Pyramid, “respect human effort”. It helps make users aware of how they use the core product and directs them towards the functionalities of actual value to them.

Our suggestion, based on ethical design principles:

One idea approaches this problem by placing a counter underneath the profile picture and name in the top left corner. It shows the time spent actively browsing in the last 12 or 24 hours.

That's all! OK, if clicked, an overlay window could also open, giving a more detailed look at statistics and general knowledge about time spent scrolling and browsing. While it should include an option to hide the counter, it would helpfully remind people how much time they spend unintentionally.


Mockup of the “spent time” feature created by UX studio
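To make the idea concrete, here is a minimal sketch of how such a counter might accumulate active browsing time in the browser. This is our illustration only; the class name and the visibility-change heuristic are assumptions, not anything from Facebook's codebase.

```typescript
// Hypothetical "time spent" counter; everything here is illustrative.
class TimeSpentTracker {
  private activeSince: number | null =
    document.visibilityState === "visible" ? Date.now() : null;
  private totalMs = 0;

  constructor() {
    // Pause the clock whenever the tab goes to the background, so
    // only active browsing counts toward the total.
    document.addEventListener("visibilitychange", () => {
      if (document.visibilityState === "visible") {
        this.activeSince = Date.now();
      } else if (this.activeSince !== null) {
        this.totalMs += Date.now() - this.activeSince;
        this.activeSince = null;
      }
    });
  }

  // Minutes of active browsing, for display under the profile picture.
  minutesSpent(): number {
    const live = this.activeSince !== null ? Date.now() - this.activeSince : 0;
    return Math.round((this.totalMs + live) / 60_000);
  }
}
```

A production version would also need to persist the total across page loads and reset it on a 12- or 24-hour window, which we omit here for brevity.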

2. Newsfeed filtering options

This feature came up very often with the brainstorming team at  UX studio . Facebook seems to have very consciously chosen not to include direct filtering options for the newsfeed.

In a  2013 FB post , one of their researchers wrote that they identified “clutter” as the main problem. She rightly mentions,

“ Stopping at literal interpretations is one of the easiest ways to end up with a product that fails to benefit the people whom it’s built for .”

So, they unwrapped the meaning of clutter and realized it mainly related to content and not visuals. However, they never truly implemented the obvious solution mentioned as “multiple feeds” in the post above. Controlling newsfeed content has never become directly available.

The “news feed preferences” menu item lies hidden in the top right dropdown menu, with extremely limited options. It doesn't really serve user needs; it just gives a false sense of agency.

Some simple filtering options at the top of the feed would obviously help. Users could select activity from all their friends or just a close circle, pages promoting products and services, or pages sharing news-related content or entertainment.

This ability would empower users to spend their time more efficiently and consciously. It would also likely improve a lot at the platform's addictiveness ground zero, the news feed. It might even serve Facebook's own goals: users who feel more in control also feel safer, and are thus more likely to share and spend time.


Mockup of the news feed filtering feature created by UX studio
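A filter like the one mocked up above could sit as a thin layer over the existing feed. The sketch below is hypothetical: the source categories and item shape are our invention, not Facebook's actual schema.

```typescript
// Hypothetical newsfeed filter; categories and types are assumptions.
type FeedSource =
  | "friend"
  | "close-friend"
  | "page-news"
  | "page-commerce"
  | "page-entertainment";

interface FeedItem {
  id: string;
  source: FeedSource;
  postedAt: Date;
}

function filterFeed(items: FeedItem[], enabled: Set<FeedSource>): FeedItem[] {
  // Keep only items whose source the user has toggled on, newest
  // first, leaving any other ranking logic out of scope.
  return items
    .filter((item) => enabled.has(item.source))
    .sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime());
}

// Example: show only close friends and news pages.
// const visible = filterFeed(feed, new Set(["close-friend", "page-news"]));
```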

3. Killing the infinite scroll

Research has focused on the difference between an infinitely scrolling feed and one that requires an action, like pressing a “load more” or “next page” button.

A well-known example is the bottomless soup bowl experiment by Cornell professor Brian Wansink. Participants took in 73% more calories when the bowl refilled itself. A replication tested servers refilling the bowl against a self-refilling bowl, and the difference remained significant. The empty bowl represents a stopping cue: the moment the mind has to wake up and ask, “Do I really want more?”

We suggest a “Load more” button at the bottom of the feed (news or profile), which would also allow Facebook to display content in the footer area, a positive for the company as well.

Also, a text underneath the “Load more” button could provide information in an unobtrusive way about the amount of content consumed and how much more comes with clicking.


Mockup of the “Load more” button and the post counter, created by UX studio
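In code, replacing infinite scroll with a stopping cue is a small change. The following sketch is generic and assumes an invented fetchPage callback and cursor-based pagination; it is not Facebook's API.

```typescript
// Minimal "load more" pagination sketch; all names are illustrative.
const PAGE_SIZE = 20;

interface Page<T> {
  items: T[];
  nextCursor: string | null;
}

async function loadMore<T>(
  fetchPage: (cursor: string | null, limit: number) => Promise<Page<T>>,
  state: { cursor: string | null; seen: number }
): Promise<T[]> {
  const page = await fetchPage(state.cursor, PAGE_SIZE);
  state.cursor = page.nextCursor;
  state.seen += page.items.length;
  // The stopping cue, like the empty soup bowl: tell users how much
  // they have consumed before they ask for more.
  console.log(`You have viewed ${state.seen} posts. Load ${PAGE_SIZE} more?`);
  return page.items;
}
```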

4. Why I see what I see

The order and selection of posts displayed in the news feed is getting ever more confusing. Users who remember recently seeing a post from a page they scrolled past find themselves scrolling even more when visiting that page, because the post they believed to be the latest actually comes from way back in the past.

Intermittent variable rewards represent an important similarity between these platforms and slot machines. Assuming we all like cake (Fact: We all do), if a different cake appeared every time the fridge opened, I’d open it constantly with great excitement. On the other hand, if I found the same cake every time, I’d open it only when I wanted to eat cake, and with no excitement whatsoever.

The intermittent variable reward makes us open these apps much more frequently than we realize. It relates to many aspects of how companies design them; it certainly doesn't stop at content selection in the news feed, but the feed provides a good example because people do not think about it much.

Although I certainly understand that the algorithm's complexity and robustness complicate rendering it transparent for users (…do I really?), they exert absolutely no effort to make it more understandable or predictable. That may feed into our gambling instinct.

We suggest signaling on the post the reason for its appearance, in at least broad terms: like the popular Netflix “Because you watched this” category, but more specific.

Categories could include but need not stop at:

  • “Because you like this hobby / that activity / this politician / a lot of these kinds of pages / etc.”
  • “Because you friended SOMEONE”
  • “Because you went here / did that / liked this / tried that / etc.”
  • “Because your network likes it”
  • “Because it’s trending now”
  • “Because it’s important”
  • or whatever other reason I cannot know, because the algorithm is “surprisingly inelegant, maddeningly mercurial, and stubbornly opaque.”

As a matter of fact, I would also like to know what I don’t see and why! But let’s not go there just now.


Mockup of the “Why am I seeing this?” section created by UX studio
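One plausible way to implement such labels is a plain mapping from ranking signals to human-readable reasons, with an honest fallback when no summary exists. The signal names below are purely illustrative, since Facebook's real ranking signals are not public.

```typescript
// Hypothetical mapping from ranking signals to "Why am I seeing
// this?" labels; the signal vocabulary is our invention.
type RankingSignal =
  | { kind: "liked-page"; pageName: string }
  | { kind: "new-friend"; friendName: string }
  | { kind: "network-engagement" }
  | { kind: "trending" }
  | { kind: "unexplained" };

function explainPost(signal: RankingSignal): string {
  switch (signal.kind) {
    case "liked-page":
      return `Because you like ${signal.pageName}`;
    case "new-friend":
      return `Because you friended ${signal.friendName}`;
    case "network-engagement":
      return "Because your network likes it";
    case "trending":
      return "Because it's trending now";
    case "unexplained":
      // An honest fallback beats false precision.
      return "We can't summarize why the algorithm chose this";
  }
}
```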

5. Raising saved item prominence

Content on Facebook virtually never ends and constantly changes, which is not necessarily a bad thing. But it presents several challenges when we think about how to find the content we value.

Saved items already exist on the platform, so what is preventing this big idea? For one, the feature doesn't support offline saving. One opinion posits that prominent saved items would make advertisements on external sites useless, possibly triggering a chain reaction in which those sites stop paying for sponsored advertisements. Facebook thus chose not to implement a feature that had otherwise been developed and tested, so the issue cannot lie with technicalities.

The more likely explanation involves pushing users towards scrolling the newsfeed without aiming to help them find the content they want. At the same time, it keeps them distracted from their original goal as much as possible, so they remain more malleable to the content being displayed.

Right now, “Saved” on the desktop resides in the section on the left called “Explore”, and the option to save something hides in a drop-down in the top right corner of a post. On mobile, it lies under the hamburger menu in the bottom right corner.

How could saved items compete visually with the kaleidoscope of fresh content? They don't have to bear the same visual weight, of course (business is business), but plenty of available space lies right next to the newsfeed, which could display more than a single link with an icon.

Thumbnails with short titles could help remind us of the original interest that caused us to save an item. Like the nine-friends section on the profile page, it could also pop up there.


Mockup of the upgraded Saved Items section, created by UX studio

6. Notification management

a) Notification grouping

Notification handling also features notoriously often in discussions of ethical design. Remember intermittent variable rewards and how your phone resembles a slot machine? Well, this feature is one of the slot-mechanization flagships on your phone.

Notifications can come from several different sources in different apps, imparting varying emotional rewards. The simple act of looking at the phone hooks you with an emotional gamble.

When it arrives, you never know if you care (your crush commenting under your 5-year-old profile picture) or not (a status update from an old high school classmate you never cared about in the first place).


Mockup of notification grouping, created by UX studio
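Under the hood, grouping might amount to batching pending notifications by their source kind and delivering one digest per group. A minimal sketch, with types that are our assumptions rather than Messenger's internals:

```typescript
// Sketch of notification grouping; the shape of the data is invented.
type NotificationKind = "human" | "system";

interface AppNotification {
  kind: NotificationKind;
  from: string;
  message: string;
}

function groupNotifications(
  pending: AppNotification[]
): Map<NotificationKind, AppNotification[]> {
  const groups = new Map<NotificationKind, AppNotification[]>();
  for (const n of pending) {
    const bucket = groups.get(n.kind) ?? [];
    bucket.push(n);
    groups.set(n.kind, bucket);
  }
  // Deliver one digest per group ("3 friends commented…") instead of
  // one interruption per event.
  return groups;
}
```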

b) Notification mute scheduling and bundles

An option to modify where your notifications come from and how they arrive already exists here, but it is not really helpful. Mostly it only mutes notifications from the platform altogether in the operating system, or does little about the problem. Neither does it do much good for Facebook.

Muting notifications at the OS level obviously reduces engagement. But bombarding the attention of users who “don't do much about it” also does no good in the utopia of conscious users we are imagining, as those users will jump to the first substitute that treats them better.


Mockup of the notification muting feature, created by UX studio

We suggest notification mute scheduling, notification bundles, and notification groups.
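At its core, scheduled muting reduces to a time-window check. Here is a minimal sketch assuming a simple hour-based schedule object of our own design:

```typescript
// Hypothetical do-not-disturb schedule check; the schedule shape is
// our assumption, not any platform's real settings model.
interface MuteWindow {
  startHour: number; // 0–23, inclusive
  endHour: number;   // 0–23, exclusive; may wrap past midnight
}

function isMuted(schedule: MuteWindow[], now: Date = new Date()): boolean {
  const hour = now.getHours();
  return schedule.some(({ startHour, endHour }) =>
    startHour <= endHour
      ? hour >= startHour && hour < endHour   // e.g. 13–17
      : hour >= startHour || hour < endHour   // wraps midnight, e.g. 22–7
  );
}

// Example: mute overnight and during deep-work afternoons.
// isMuted([{ startHour: 22, endHour: 7 }, { startHour: 13, endHour: 17 }]);
```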

7. “Chathics”

Remember what “available”, “away”, “busy”, “do not disturb” and “incognito mode” used to mean? Pepperidge Farm remembers.

What we don't remember is when these features disappeared from most messaging platforms, so worse could probably have happened without our noticing. Nevertheless, that doesn't mean Messenger has no room for improvement.

Slack proves itself the top student in this regard, which stems from the difference in target audience. Messenger has an incredibly diverse user base, with 1.3 billion users worldwide in 2018. Slack, on the other hand, had 6 million daily active users in 2017, with a much more specific context of use centered on productivity.

Not that Messenger should copy everything, but small changes in some places could greatly improve how people use it in their daily life.

Our suggestions, based on ethical design principles:

Menu choices matter a lot when discussing ethical design: they influence what the user perceives as possible choices and frame their thinking. In Slack, clicking your name in the top right corner lets you edit your status to contain any desired emoji and an optional couple of words, giving colleagues a better idea of your current activity and availability. Clicking the prominently displayed bell icon next to your name gives the option to snooze notifications or set a Do Not Disturb schedule.

On top of that, if Do Not Disturb mode is activated, you don't have to worry about missing an emergency message, as the sender can choose to disturb you anyway. You may think everyone would just cut through any time but, as Slack users can tell you, that does not usually happen.

However much text messaging has grown in the last 26 years, we still don't type to call an ambulance. This little speed bump makes senders aware of their actions instead of letting them disrupt the receiver's attention mindlessly.

Many other features could make a messaging platform more attuned to different use cases, including optional end-to-end encryption and self-destructing messages, which Messenger has in fact applied. Also, putting a highly visible bell icon up front, rather than hiding it behind boring settings menus, could go a long way in making users aware of the choices they have to protect their attention.


Mockup of an “ethically improved” Messenger, created by UX studio
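Slack's "notify anyway" override boils down to a deliberate extra step for the sender. A hedged sketch of that logic, with invented names:

```typescript
// Sketch of a "notify anyway" speed bump, inspired by Slack's DND
// override; the types and function are purely illustrative.
interface Message {
  from: string;
  body: string;
  urgent: boolean; // sender explicitly chose "notify anyway"
}

function shouldInterrupt(recipientInDnd: boolean, msg: Message): boolean {
  if (!recipientInDnd) return true; // normal delivery
  if (!msg.urgent) return false;    // hold until DND ends
  // The sender had to take a deliberate extra step, which is the
  // point: friction makes interruption a conscious choice.
  return true;
}
```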

The future lies closer than you think

I hope you have enjoyed this little thought experiment.

Sometimes, discussions about ethical design with people from the tech industry are met with cynicism and disbelief that an industry as a whole could change its values so dramatically over a short period of time.

Our arguments need not build on morality alone; they should also make sense from a business perspective. Users are getting more and more aware, as has happened in many industries before: organic food, lowering the ecological footprint of households and appliances, electric cars, and so on.

The tech industry looks back on less history than, say, the auto industry, but it innovates far more. Changes that took others a decade might take a couple of years or months here.

So, buckle up, lean back, and share this article to make the world a better place, today.


Author: Attila Somos, a T-shaped product designer working in Amsterdam and located in Groningen. “I love working together with people to solve business problems and create meaningful products that answer real demand. I am a fierce advocate of humans, be it users, business owners, or colleagues.”
