Will Social Media Companies Ever Make Fighting Online Abuse a Priority?
By Nick Kossovan
Every day, we rely on social media platforms to engage with like-minded people and to promote ourselves, our work, and our businesses. Unfortunately, the downside of increasing your visibility, especially when you wade into an online discussion with an unpopular opinion, is that you become a magnet for online abuse. Online abuse can be especially relentless if you are a woman, a member of a racial, religious, or ethnic minority, or part of the LGBTQ+ community.
I believe social media companies can reduce, and even come close to eliminating, online abuse. The first step: Facebook, Twitter, LinkedIn, Instagram, et al. becoming more serious about addressing the toxicity they permit on their respective platforms. The second step: giving users more control over their privacy, identity, and account history.
Here are five features social media companies could introduce to mitigate online abuse.
1. Educate users on how to protect themselves online.
I’ll admit social media companies have been improving their anti-harassment features. However, many of these features are hard to find and not user-friendly. Platforms should have a section within their help center that deals specifically with online abuse, showing how to access built-in features along with links to external tools and resources.
2. Make it easy to tighten privacy and security settings.
Platforms need to make it easier for users to fine-tune their privacy and security settings. Users should be able to save configurations of settings as personalized “safety modes” they can toggle between. When they switch between safety modes, a “visibility snapshot” should show them, in real time, who will see their content.
3. Distinguish between the personal and the professional.
Currently, a social media account encompasses both your professional life and your personal life. If you want to separate your two “lives”, you need to create two accounts. Why not allow a single account that toggles between your personal and professional identities, and lets you migrate or share audiences between them?
4. Make it easy to manage account histories.
It’s common for people to change jobs, careers, and views over time. A user’s social media history, which can date back more than a decade, is a goldmine for abusers. Platforms should make it easy for users to search their old posts and make them private, archive them, or delete them.
5. Verify identities with credit cards and/or phone numbers.
Much of the toxicity permeating social media stems from people cowardly hiding behind anonymous accounts. Eliminating the ability to create an anonymous account would all but end online abuse. So, why do social media platforms allow the creation of anonymous accounts?
Anonymity allows people to act out their anger, their frustrations, and their need to make others feel bad so that they can feel good (“I’m unhappy, so I want everyone else to be unhappy”). Being anonymous lets someone say things they wouldn’t think of saying, or wouldn’t have the courage to say, publicly or face-to-face.
Social media platforms could prevent anonymous accounts by asking new joiners to provide credit card information, to be verified but not charged, or a telephone number to which a link or code can be sent for authentication. (Email authentication is useless, since email addresses can be created without identity verification.)
All credit cards and telephone numbers are associated with a billing address. Users who know they can easily be traced are unlikely to exhibit uncivil behaviour.
Yeah, I know – handing over more data to social media giants isn’t appetizing, even if it eliminates the toxic behaviour hurting our collective psyche. Having to go through credit card or telephone authentication will give many pause to ask themselves why they feel they must be on social media. Such reflection isn’t a bad exercise.
Online attacks negatively impact mental and physical health, stop free expression, and silence voices.
Platforms’ respective user guidelines (a.k.a. Community Standards) are open to interpretation and therefore not enforced equitably. Content moderators (human eyes) and AI crawlers (searching for offensive words and content) aren’t cutting it.
Social media companies can’t deny they could be doing a much better job of creating a safer online environment. Unfortunately, that safer environment will only come about when social media companies begin taking online abuse seriously.
~Nick Kossovan is the Customer Service Professionals Network’s Director of Social Media (Executive Board Member). Submit your social media questions to email@example.com. Selected questions will be answered in future columns. Follow @NKossovan on Instagram and Twitter.