3 Tips for Making Social Media Less Antisocial
Julianna Kirschner

The Russia-Ukraine war, and the role social media has played in it, is reigniting heated debate about the dangers of these platforms. While the argument over their risks and benefits is ongoing, nearly everyone agrees that social media's power and sway over our society are undeniable. For my part, I have been researching social media platforms and their influence for over 15 years, and my conclusion is that social media was never truly social. In some ways it is antisocial, amplifying existing problems in our society such as inequality and hyperpolarization. Each major news event, from the U.S. presidential election to the pandemic to the war, becomes another opportunity to make matters worse. We must all become more aware of how and why these platforms produce their harmful effects, and of how we, as users, can help minimize them.

Early Beginnings and the Roots of the Problem

When social media was still a fledgling medium in the early 2000s, the original idea was to serve the human need for connection and to create a virtual space for meeting people outside of your IRL (in "real" life) social circles. Indeed, these platforms did make it possible to cast a wider social net and to interact virtually with people we might never otherwise have met. In many ways they still do.

The problems started when these platforms grew massively in popularity and monetization entered the picture. Once that happened, social media was no longer a product that users could use to connect with others. Instead, the users themselves became the product, with big data and closely guarded algorithms making it possible to track, analyze, predict, and influence our behavior on an unprecedented scale. These algorithms keep you in an endless feedback loop in which every online move you make is leveraged to market things that appeal to your unique interests and tastes: not just products and services, but ideologies as well. This is where we reach some of the darkest and most disturbing problems of social media.
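The feedback loop described above can be illustrated with a toy simulation. The sketch below is an illustration of the principle, not any real platform's ranking system: it scores posts purely by how often the user has clicked on each topic before, and shows how a single early click can come to dominate everything the user is shown.

```python
from collections import Counter

def rank_feed(posts, click_history):
    """Toy ranker: score each post by how often the user has
    clicked its topic before, most-clicked topics first."""
    counts = Counter(click_history)
    return sorted(posts, key=lambda p: counts[p["topic"]], reverse=True)

# Four topics compete for the top slot of the feed.
posts = [{"topic": t} for t in ("politics", "pets", "sports", "news")]

# Simulate a user who always clicks whatever is shown first.
history = ["politics"]                 # a single early click...
for _ in range(20):
    feed = rank_feed(posts, history)
    history.append(feed[0]["topic"])   # ...keeps reinforcing itself

print(Counter(history))  # politics dominates: Counter({'politics': 21})
```

One click tips the ranking, the top slot earns the next click, and the loop never lets another topic in. Real recommendation systems are vastly more sophisticated, but the self-reinforcing dynamic is the same.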

The Anti-Social Dilemma

If relentless tracking and marketing were the extent of social media's problems, that would be troubling enough. But social media platforms are also replicating and amplifying existing problems in our society. We have now reached the point where many researchers and experts who have studied the medium are legitimately worried about our collective mental health, the corrosion of civility, and even the breakdown of our very democracy.

Take mental health. A fairly wide body of research shows that frequent social media use can cause or aggravate mental health challenges such as depression and anxiety in both teenagers and adults, through mechanisms such as triggering envy or FOMO ("fear of missing out"). Such challenges often intersect in tragic ways with problems unique to social media, such as cyberbullying, which as many as 59 percent of U.S. teens have personally experienced.

Another problem is divisiveness and hyperpolarization, which social media has exacerbated by confining people in ideological bubbles and echo chambers. Within these bubbles, behavior is policed and only certain kinds of content can be shared, making it difficult to interact with anyone who holds a different point of view. Because differing views are hidden from our feeds, we come to believe our own opinions are those of the majority. This creates a self-perpetuating feedback loop: the more isolated we become within our bubbles, the easier it becomes to fall into the disinformation trap and be manipulated.

Unfortunately, because sensational, divisive, and polarizing content drives more engagement, it is valuable to advertisers and therefore to social media platforms that rely on ad revenue. Even war is fine for these platforms if it is profitable, and it is indeed profitable, as is evident in the proliferation of misinformation surrounding the situation in Ukraine.

What People Can Do to Help Themselves

Despite all these problems, there are some ways that users can minimize their own exposure to problematic content.

  • Be mindful of what you click: The algorithms that power social media platforms work by filtering in what they think you want to see. Every time you click or "like" something, you are giving the algorithm more data to potentially use against you. For this reason, some people intentionally install browser extensions that randomize their reactions to posts as a way to "confuse" the algorithms. You could also manually click on a diversity of content to achieve the same purpose, though this obviously requires more effort. Another way is simply to be more conscious about what you click or which reaction emojis you use. Whatever the algorithms may currently be showing you, being mindful about what you click on, like, share, and comment on may help limit the amount of manipulative content in your feed.
  • Use deliberate searching: A common function on most social media platforms is the ability to search for specific content. This can be useful if you already know what kinds of content you would most like to see, or see exclusively, on a given platform, be it cute animal videos or objective news coverage (as opposed to "news" designed to trigger negative emotional reactions). Using specific search terms directly provides the algorithms with data they can use to populate your feeds with the content you prefer, rather than letting them rely on the hundreds of other data points they have gathered from your online behavior.
  • Tweak your settings: On most of the popular platforms such as Facebook, Twitter, Instagram, and TikTok, there are ways to see what topics these apps think you’re interested in. While this wouldn’t single-handedly fix all of social media’s problems, checking these settings and adjusting them to more accurately reflect your true interests may help mitigate some of the aforementioned problems.
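The first tip's randomizing tactic can be sketched in a few lines. This is a toy illustration of the idea, not the code of any actual browser extension: with some probability it swaps your intended reaction for a random one, adding noise to the click data a platform collects about you.

```python
import random

REACTIONS = ("like", "love", "haha", "wow", "sad", "angry")

def noisy_reaction(intended, flip_prob=0.3, rng=random):
    """Return the intended reaction most of the time, but with
    probability flip_prob substitute a random one, making the
    platform's profile of the user less precise."""
    if rng.random() < flip_prob:
        return rng.choice(REACTIONS)
    return intended

# flip_prob=0.0 always sends the real reaction;
# flip_prob=1.0 sends pure noise.
print(noisy_reaction("like", flip_prob=0.0))
```

The trade-off is built into `flip_prob`: more noise means weaker profiling, but also a feed that reflects your actual tastes less faithfully.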

Of course, these suggestions are only pertinent if (a) users actually want to minimize being potentially manipulated and (b) they’re willing to put some effort into what’s usually intended to be a light recreational activity. If they aren’t particularly worried about manipulative content, or aren’t willing or able to put in effort, then the remaining options are to just use these platforms and accept the risks or to perhaps step away from using them, temporarily or permanently. Temporary breaks, at least, can be helpful for well-being and should definitely be considered.

Finally, while the area we have the most control over is our own social media behaviors, the brunt of the responsibility rightfully lies with the social media companies that have created their algorithms. It is they who have the resources and ability to fix these problems as easily as flipping a switch if they wanted to. But until we have the laws and regulations to essentially force them to do this—and it’s difficult to see that happening anytime soon, due to lobbying—the onus and the decision-making, for the time being, will necessarily have to remain on us, the users.

Julianna Kirschner, Ph.D. is a Lecturer of Communication at the University of Southern California Annenberg School for Communication and Journalism. Dr. Kirschner’s research focuses on group dynamics and online behavior, and her dissertation has received awards from two academic organizations.