To me, algorithmic gatekeeping should be illegal. Tufekci (2015) describes algorithmic gatekeeping as algorithms that “play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes.” After reading through the article, I would define algorithmic gatekeeping as filtering information for specific people in order to push someone else’s agenda. On a small scale this may not be a big deal or impact you in any way. An example would be filtering your social media feed so that you see funny dog clips; I think everyone would be fine with that type of manipulation. When it comes to more serious news and information, though, I believe algorithmic gatekeeping can be dangerous. A platform that can manipulate these algorithms can almost control what you think and believe. When I am on social media, I usually do not search for information; I just read whatever is there when I open the app. If I believe and trust the first few things I read as facts, these algorithms are steering me toward a certain way of thinking.
Many people are unaware that this is even happening, which poses a problem: “sixty-two percent of undergraduates were not aware that Facebook curated users’ News Feeds by algorithm—much less the way in which the algorithm works” (Tufekci, 2015). If people do not know their news is being filtered for them, they may simply believe the first few things they come across. “In addition, the Facebook study showed that Facebook is able to induce mood changes” (Tufekci, 2015), which can pose a problem if that capability is used in the wrong way. If the algorithms were always aimed in the right direction, this could be a great way to benefit people, but the complexity and constant changes of these algorithms make that hard to control. “In another example, researchers were able to identify people with a high likelihood of lapsing into depression before the onset of their clinical symptoms” (Tufekci, 2015); being able to identify and help people in these ways could genuinely benefit them. Unfortunately, I do not believe this is how algorithmic gatekeeping will be used in the future.
My beliefs are backed by the examples of Ferguson, the 2010 election experiment, and a hiring algorithm. When the algorithm decided the Ferguson story was not “relevant” enough to bring to more people’s attention sooner, it was almost as if it had its own agenda or reason for hiding the story. The entire country was talking about it, yet it still was not “relevant” enough. In the 2010 election experiment, millions of people were experimented on without their knowledge, and the results showed the possibility of swinging votes one way or the other. If a hiring algorithm can discriminate based on race or belief, you have to ask yourself: what else can a human make an algorithm do, and what can an algorithm make a human do?
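To make the “relevance” point concrete, here is a minimal hypothetical sketch in Python. It is not Facebook’s actual algorithm; the weights, post data, and function name are invented for illustration. It only shows how a ranking score that leans heavily on likes can bury a widely discussed story beneath lighter content.

```python
# Hypothetical sketch of an engagement-weighted feed ranker.
# Weights and post data are made up; this is a toy model of
# "relevance" scoring, not any platform's real system.

def relevance_score(post, w_likes=3.0, w_shares=1.0, w_comments=0.5):
    """Score a post by weighted engagement signals (likes favored)."""
    return (w_likes * post["likes"]
            + w_shares * post["shares"]
            + w_comments * post["comments"])

posts = [
    # A hard-news story people discuss and share but hesitate to "like".
    {"title": "Ferguson protests", "likes": 120, "shares": 400, "comments": 900},
    # Lighter content that is easy to like.
    {"title": "Funny dog clip", "likes": 2000, "shares": 150, "comments": 80},
]

# Rank the feed: with like-heavy weights, the serious story sinks
# even though it generated far more discussion.
for post in sorted(posts, key=relevance_score, reverse=True):
    print(post["title"], relevance_score(post))
```

Under these assumed weights the dog clip outranks the news story, which mirrors the kind of outcome the Ferguson example describes: the score, not the public conversation, decides what counts as “relevant.”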
With the use of algorithmic gatekeeping, my campaign about structurally deficient bridges could be shown in the news feeds of people who were never looking for it. People who have liked articles somewhat related to this topic could have my campaign pop up for them to read. Still, I believe gatekeeping would restrict my campaign more than help it. I do not think many people are interested in this topic, which makes it harder for my campaign to be filtered into many people’s timelines.