Social Media’s Ultimate Dilemma

With social media blocking Trump after the 06JAN2021 insurrection, my initial reaction was "wow… finally!" But after reflecting a bit more, I had two follow-up questions: 1) why now? and 2) what will happen in the future? The second question is really about the precedent these companies are setting. Throughout 4+ years of Trump tweets, discussion about how to handle his false, provocative, agitating tweets only really came to the forefront during the 2020 election, when misinformation had to be addressed (USA Today in 2020, CNN in 2016). Clearly there has been discussion and pushback on how best to handle this information. Let's also be clear here: the 1st amendment's "free speech" protection allows free and public expression without censorship by the government, though there are exceptions. Most people will say private companies are excluded from the 1st amendment because they are not the government. Although true, I think many companies still try to follow the spirit of the 1st amendment and allow the speech. I think that's why Twitter and Facebook allowed Trump's tweets and posts for so long.

So why now? I think it's a matter of convenience. First, Biden defeated Trump in the 2020 election. Second, Twitter and Facebook had been slowly escalating how they sanctioned Trump's false posts. Third, if it were not for the insurrection on 06JAN2021, I don't think Twitter or Facebook would have blocked Trump. But because the language in Trump's tweets arguably incited the violence, he "broke" their terms of service and was thus de-platformed. Let's be honest here: this de-platforming was a long time in the making. Trump consistently spread false information during his 4 years in office (over 30,000 false or misleading claims!).

What will happen in the future? I'm not sure. I think social media companies need to establish a clear, transparent system of governance and punishment for spreading false information. Trump's actions have exposed a huge weakness in the idea of balancing social media virality, factual information, and speech. This is what I think social media companies should do:

  • Social media companies should fund an independent fact-checking service. This could be an internal company service or a 3rd party like factcheck.org.
  • When any user makes a claim, that claim should be fact checked.
  • If the claim is proven false, then the user is tagged and also educated about why the claim is false.
    • The claim cannot be shared, and any existing shares also cannot be re-shared, to minimize the virality of false information.
    • The user is also warned about continuing to post false information.
  • If the user makes another false claim, the post is marked as false, similar to how Twitter flagged Trump's false tweets.
    • This user is again warned and educated.
    • The post cannot be shared, minimizing any virality.
    • The user will then receive a demerit. Demerits “expire” after a week of good behavior (aka no false posts).
  • If the user again posts a false claim, the post is flagged as false.
    • The user is warned and educated.
    • The post cannot be shared, minimizing any virality.
    • The user gains another demerit.
    • The user is also banned from posting for 1 hr. Every demerit the user gains adds 30 minutes to the ban.
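The escalating scheme above can be sketched as a small piece of per-user state plus one handler. This is a minimal sketch, not a real platform's moderation API: the `User` class, the action names, and my reading of the ban formula (a 1-hour base, with each demerit beyond the first adding 30 minutes) are all assumptions, and the fact-check itself is assumed to have already happened elsewhere.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

DEMERIT_EXPIRY = timedelta(weeks=1)    # demerits lapse after a week of good behavior
BASE_BAN = timedelta(hours=1)          # first ban is 1 hour
BAN_INCREMENT = timedelta(minutes=30)  # each additional demerit adds 30 minutes

@dataclass
class User:
    false_claims: int = 0                       # running count of false claims
    demerits: int = 0
    last_false_post: Optional[datetime] = None
    banned_until: Optional[datetime] = None

def handle_false_claim(user: User, now: datetime) -> list:
    """Apply the escalating response to a post that fact-checking marked false."""
    # Demerits "expire" after a week with no false posts.
    if user.last_false_post and now - user.last_false_post >= DEMERIT_EXPIRY:
        user.demerits = 0
    user.false_claims += 1
    user.last_false_post = now

    # Every offense: block sharing, educate, and warn the user.
    actions = ["block_sharing", "educate_user", "warn_user"]
    if user.false_claims == 1:
        actions.append("tag_user")            # first offense: tag only, no demerit
    else:
        actions.append("flag_post_as_false")  # later offenses: visibly flag the post
        user.demerits += 1
        if user.false_claims >= 3:
            # Third offense on: 1 hr ban, plus 30 min per demerit beyond the first
            # (one reading of "every demerit the user gains adds 30 minutes").
            ban = BASE_BAN + BAN_INCREMENT * (user.demerits - 1)
            user.banned_until = now + ban
            actions.append(f"ban_{int(ban.total_seconds() // 60)}_min")
    return actions
```

Walking one user through three false claims in quick succession yields a tag, then a flag with a demerit, then a 90-minute ban, matching the escalation described above.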

This should allow social media companies to limit the spread of false information while attempting to honor the spirit of "free speech." I also think social media companies should define what they deem to be hate speech, speech that incites violence, or speech that contains threats of violence. The same demerit-based system should then apply to those types of posts. Can social media companies implement this?