I have always been intrigued by the concept of algorithmic gatekeeping and by how the algorithms social media platforms use shape the opinions and actions of our society. I have been on Facebook for about half of my life; I created my Facebook page when I was 13 years old, and at that time my mother’s biggest concern was creepy old men trying to friend me and kidnap me. Her concern was never that my interactions and decisions on Facebook would feed a massive data set that could be used to manipulate my thoughts, my political opinions, or even my mood.
Tufekci defines “algorithmic gatekeeping” as “the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes (such as hiring and firing); flag potential terrorists; and more” (207-208). In terms of transparency, Tufekci argues that social media differs from traditional media (print newspapers and television broadcasts) because there is no visible bias or agenda on display. When you watch Fox News or CNN, you can see each network’s bias in its contrasting coverage of the same stories, and anyone can make that comparison. When you are on Facebook and subjected to a similar bias, however, you can’t really see it, because you assume your algorithmically curated News Feed looks just like everyone else’s, apart from the content. We are all looking at the “same” website, but the content presented to each of us is different, tuned to our individual biases. This is the concept of the Filter Bubble, and it is central to any discussion of algorithmic gatekeeping. The Filter Bubble is the idea that on social media platforms we are only shown content that the algorithms predict will be most appealing to us. This is troublesome because inside the Filter Bubble you will rarely see information from other sides of the same discussion, which reinforces personal biases and prevents open public discourse and the free dissemination of ideas and information.
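To make the Filter Bubble concrete, here is a toy sketch (entirely my own illustration; the posts, topic tags, and scoring are invented, and real platform ranking systems are proprietary and far more complex) of how ranking by predicted affinity produces different feeds for different users from the same pool of posts:

```python
# Toy feed-ranking sketch (hypothetical; not any platform's real algorithm).
from typing import Dict, List

# A shared pool of posts, each tagged with topic weights (invented data).
POSTS = [
    {"id": 1, "topics": {"politics_left": 0.9}},
    {"id": 2, "topics": {"politics_right": 0.9}},
    {"id": 3, "topics": {"sports": 0.8}},
]

def predicted_affinity(interests: Dict[str, float], post: dict) -> float:
    """Score a post by how well its topics match a user's inferred interests."""
    return sum(interests.get(t, 0.0) * w for t, w in post["topics"].items())

def build_feed(interests: Dict[str, float], n: int = 2) -> List[int]:
    """Rank the shared pool for one user and keep only the top n posts."""
    ranked = sorted(POSTS, key=lambda p: predicted_affinity(interests, p), reverse=True)
    return [p["id"] for p in ranked[:n]]

# Two users on the "same" site see different feeds: a Filter Bubble in miniature.
print(build_feed({"politics_left": 1.0, "sports": 0.2}))   # [1, 3]
print(build_feed({"politics_right": 1.0, "sports": 0.2}))  # [2, 3]
```

The point is not the code itself but the structure: nothing in it is “biased” in any obvious way, yet each user’s feed quietly drifts toward whatever they already engage with.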
Algorithmic gatekeeping has many practical implications for public writing. The most important is the crowdsourced data that allows Facebook and other social media platforms to model the demographic and psychographic profiles of their users. Tufekci notes that in a study of 58,000 individuals, researchers working with Facebook data found that “the traits modeled—often with eighty to ninety percent accuracy—included ‘sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender’ among others” (210). This information would be incredibly helpful to writers who know the demographics and psychographics of the people who interact with their content. A Huffington Post writer could tailor articles to the users who interact with their posts most often, and the same goes for any other publication with access to this information.
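For a rough sense of how that kind of trait inference works, here is a minimal sketch (the page names, likes, and labels below are fabricated for illustration, and the study Tufekci cites used far larger samples and a more elaborate model than plain logistic regression):

```python
# Hypothetical sketch: inferring a trait from page likes (all data invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each string lists pages one user has liked; labels mark a made-up trait.
likes = [
    "npr the_daily_show aclu",
    "fox_news nra ben_shapiro",
    "npr planned_parenthood aclu",
    "nra fox_news hannity",
]
leans_left = [1, 0, 1, 0]  # invented training labels

vec = CountVectorizer()
X = vec.fit_transform(likes)            # users x liked-pages matrix
model = LogisticRegression().fit(X, leans_left)

# Given a new user's likes, the model guesses the unstated trait.
new_user = vec.transform(["aclu npr"])
print(model.predict(new_user))          # [1]
```

Scaled up to millions of users and many traits, something like this pipeline is presumably the mechanism behind the eighty-to-ninety-percent accuracy figures Tufekci cites.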
Along with this, if writers understand that people with similar ideologies are more likely to read their content, they can slant their framing to appeal more strongly to those readers. Rather than worrying about backlash from the “other side,” writers can use more forceful rhetoric in support of their own. This is a large part of why CNN and Fox News have such a heated rivalry. People do not want to seek out concessions to their own arguments in order to understand the full picture of a topic; they would rather have the information confirm their existing opinions and ideas. This is also why platforms like Facebook carry so much political hostility. No one wants to be told that they are wrong, and with the increasingly divisive rhetoric used in journalism today, articles are more likely to “trigger” people with opposing views.
I’ve found this topic rather interesting as well, and I realized how invasive, and honestly a little scary, the internet is when I learned how to use Google Analytics last semester. The amount of information one can learn from the cookies set during a single website visit is wild.
I agree that information about these viewers would help public writers enormously. Throughout the semester we’ve talked about how many of a writer’s decisions come down to the audience: for example, deciding between print or digital, which colors to use, what tone to take… But then again, reading a ‘public’ piece that is so heavily directed toward one type of reader calls into question the purpose of making it public, since many readers outside the niche would not connect with the piece.
You write: “This information would be incredibly helpful to writers who know the demographics and psychographics of the people who interact with their content. A Huffington Post writer could tailor articles to the users who interact with their posts most often.”
That is one side of it. An organization can pay Facebook to target ads using this demographic information: https://www.facebook.com/business/products/ads/ad-targeting
But you don’t need that (or don’t only need that). Facebook wants to keep you on the page. What sorts of headlines keep people on a page? What sorts of writing styles? Designs? Images? Thinking about what people want to see in the newsfeed environment is also valuable.
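As a crude illustration, here is the kind of headline A/B comparison publishers run constantly (the headlines and numbers below are invented):

```python
# Toy headline A/B test: which framing keeps people clicking? (numbers invented)
headlines = {
    "Study finds modest change in policy outcomes": {"shown": 5000, "clicks": 110},
    "You won't believe what this policy just did":  {"shown": 5000, "clicks": 405},
}

for text, stats in headlines.items():
    ctr = stats["clicks"] / stats["shown"]  # click-through rate
    print(f"{ctr:5.1%}  {text}")
```

A writer never needs Facebook’s internal data to feel this pressure; the click counts alone push the prose toward whatever performs.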
cbruch’s point about what is “public” in this context is really interesting, too. Is it even possible to escape a “niche” if everything is so individually tailored? This might relate to your earlier point about how our newsfeeds seem “neutral”…that they seem to be “what everyone else gets.” How can we make it more transparent that this is not the case? It might feel obvious to some, but for others, I’m not so sure.