Tufekci describes gatekeeping as "the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role–fully or partially". Put simply, "algorithmic gatekeeping" is a process that decides which information gets past the "gate" to be seen. The process is far more complex in practice, but this is what algorithmic gatekeeping does at its most basic level. The "gate" varies by situation; Tufekci's main example of algorithmic gatekeeping in action is Facebook, where the gate is people's newsfeeds. Facebook uses algorithmic gatekeeping to determine what content appears in each newsfeed, based on an algorithm that decides which content is most relevant to that person.
Posting work online adds another element for public writers to consider. As public writers, we should keep algorithmic gatekeeping in mind when creating our work because it can greatly affect who sees it. Because algorithmic gatekeeping uses information known about a person to decide what content to display, the process effectively profiles people. This means that we, as writers, don't have full control over our audiences. Public writers always have a specific audience in mind when writing, and of course there will be a secondary audience as well, but algorithms can alter who actually sees the information, potentially deviating from the intended audience.
One of my campaign pieces consisted of various posts on a Facebook page, and algorithmic gatekeeping could benefit it. The algorithm might identify people dealing with mental health issues based on their posts and "likes" of other pages, and my page might then appear to them as a recommended page. In that case, my information would reach further than I could manage by sharing my page in different groups on my own; the algorithm would be doing the work of circulating the information for me. However, the algorithm could also limit who sees the page, thus narrowing my targeted audience. While teens struggling with mental health issues are my main audience, the parents of these teens are part of it too, and algorithmic gatekeeping might not decide to show my page to parents. If Facebook has no information indicating that particular parents have teens battling mental health issues, it is very unlikely to show my page to those parents, and part of my targeted audience goes unreached.
My campaign piece is just one small example, but every time algorithmic gatekeeping is used, it can either help or hinder. The problem is that, in any given instance, there is no way to know in advance which it will be. Gatekeeping doesn't always let the "correct" or "right" information through; instead, according to Tufekci, what passes through is based on subjective decision making. Should algorithmic gatekeeping be allowed, knowing that it carries these consequences? Should we simply accept both the good and the bad that come with it? Time will tell whether lawmakers will rule on the ethics of algorithmic gatekeeping; with the many instances of it currently in the news, they may be pressured to enact new laws.