Six excess harms of social media

Steve Correl
Jan 11, 2021

Certain harms are much more severe on social media than on other internet software. Curing them requires understanding how they drive profits.

Social media platforms offer highly attractive and valuable information. They also gather vast and intimate data on users, and are almost unrestricted in deriving revenue from user activity.

Platforms profit from a wild west of information where content limits are minimal, because limits generally reduce profit. The only incentive to curb behavior arises when speech attracts outsize political attention, and even then politicians are slow to react, blind to the magnitude of the problem, and apparently clueless about how to target it. Yet the effects are recognized by people across the political spectrum.

Any effective solution first requires clarity on the specific harms:

  1. Terrifically detailed information on users. Platforms can surveil every social view or action you take, with few limits, and they race to exploit this surveillance as computation grows and AI becomes more powerful. The privacy problem goes far beyond what you knowingly divulge: it includes insight into your behavior as you use the platforms, combined with your behavior elsewhere on the web whenever you carry the same login identity.
  2. Users publish harmful content for their own purposes, which platforms may not detect or moderate promptly. Worse, the platform actively finds the audience for these malevolent users! Platforms try, often ineffectually, to block the most egregious content, or they delete it rather than de-platform the offending user. They bear no tangible cost for lazy enforcement of standards, nor for standards that are ineffective in the first place. Let’s call this the laissez-faire business decision that it really amounts to.
  3. Polarization is monetized because platform algorithms constantly strive to serve up content targeted to an ever-deepening knowledge of the user. As ideological content is suggested, re-posted, and goes viral, ideological homogeneity is reinforced. Users who seek more nuance do so under bombardment with polarized material. Platforms have a financial stake in conflict, existential threats, controversy, and intense emotion (see the sketch after this list). And because the algorithms remain shrouded and proprietary, users cannot even tell the platform to desist!
  4. Absent guardrails, fake news is just as valuable to these platforms as legitimate news, if not more so, because it attracts views, polarizes, or manipulates. Social media platforms claim policing fake news is not their role, and they take weak action, often too late, that satisfies few of their critics. Genocide has literally been incited on social media. Yet on platforms with world-class talent and the ability to analyze how users see content, there is no systematic way to measure the actions taken against this problem!
  5. Micro-targeting uses that detailed user data to craft messages designed to sell to, manipulate, or scam users by being irresistibly relevant. At best, micro-targeting can surface the interesting and useful activity of other users; the darker side is that any party, whatever its motive, can hire this service to promulgate its message. The more effectively platforms mine our every action, the more powerful this technique becomes.
  6. Unreliable identities disguise propaganda and deceive users about the legitimacy of information. A false identity can fool users into believing propaganda and hide criminal activity. Without reliable identities, there are no consequences for bad behavior and no way to judge the true source of information.
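To make the incentive in item 3 concrete, here is a minimal, purely hypothetical sketch of an engagement-maximizing ranker. Every name, field, and weight below is invented for illustration; no platform publishes its actual scoring code.

```python
# Hypothetical sketch, not any platform's real system: a feed ranker that
# maximizes predicted engagement. Fields and weights are invented.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    relevance: float   # how well the post matches the user's interests (0-1)
    outrage: float     # predicted emotional/controversial reaction (0-1)

def engagement_score(post: Post) -> float:
    # Engagement correlates with both relevance and strong emotion, so a
    # profit-maximizing ranker weights them together. Because outrage is
    # cheap to provoke, divisive posts rise even at modest relevance.
    return 0.5 * post.relevance + 0.5 * post.outrage

feed = [
    Post("Nuanced policy analysis", relevance=0.9, outrage=0.1),
    Post("Us-versus-them rant",     relevance=0.5, outrage=0.9),
]

# Rank the feed by predicted engagement: the divisive post wins, 0.70 to 0.50.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

Any objective that rewards predicted reaction will tend to surface the emotive post over the nuanced one, and at scale that preference compounds into the feedback loop described above.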

The invisible hand of the market will not solve these harms on its own anytime soon; that much has been obvious for some time. They are fundamental consequences of the business model.

Solutions that will fail to solve the harms

We must realign the financial incentives away from these problems, so that social media loses revenue when the harms occur. In future posts, I will cover potential solutions. For now, let’s set aside certain misguided but popular ideas:

  • Social media cannot be treated just like an edited publication, because its essence is individual social networking; new ground rules are needed. Revoking Section 230 of the Communications Decency Act would be like holding a party host accountable for anything anyone says at a non-stop party: no point in having that party, and no point in going to it either!
  • Suing social media companies over monopoly is not a remedy, because the harms are not caused by monopoly. The FTC suit against Facebook won’t solve the problem even if it succeeds, which it probably won’t, and it won’t affect other platforms. The “Pizzagate” conspiracy theory originated on Twitter (and elsewhere) and is now thriving on TikTok; suing Facebook won’t change that.
  • Trying to make internet devices less addictive, on the theory that they are too psychologically appealing, is regressive and impractical. Internet devices are appealing because they are relevant and valuable, and humans will adapt to their appeal just as they adapted to earlier technology. In any case, social media harms require a more specific solution.
  • Mandating that social media corporations exhibit no political bias obviously runs counter to the First Amendment. Their algorithms carry all kinds of bias, some intentional and some not. We can’t outlaw bias, but we can hold platforms accountable for specific practices that build trust in how they handle information.

Social media is distinctly different from print media or message distribution because of amplified, interactive participation coupled to massive scale. We cannot rely on the desultory, conflicted efforts of platforms to moderate these harms. A blunt political instrument will not suffice either; a solution must be technically savvy to be feasible, and verifiable to create trust.
