Filtered News

To me, algorithmic gatekeeping should be illegal. Tufekci (2015) describes algorithmic gatekeeping as algorithms that “play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes.” After reading through this article, Tufekci guides me to define algorithmic gatekeeping as filtering information for specific people to push someone else’s agenda. On a small scale this may not be a big deal or impact you in any way. An example would be filtering your social media feeds so that you see funny dog clips. I think everyone would be fine with this type of manipulation. But when it comes to more serious news and information, I believe algorithmic gatekeeping can be dangerous. By manipulating these algorithms, a platform can almost control what you think and believe. When I am on social media, I usually do not search for information but rather just read whatever is there when I open the app. If I believe and trust the first few things I read as fact, these algorithms are manipulating me into a certain way of thinking.
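To make this concrete, here is a minimal sketch of how feed curation works in principle. The story titles and the “engagement” scores are made up for illustration; real platforms use far more complex (and secret) ranking signals:

```python
# Toy sketch of algorithmic gatekeeping: stories are ranked by a
# hypothetical "engagement" score, and only the top few are shown.
# Anything the algorithm scores low never reaches the reader.
def curate_feed(stories, top_n=3):
    """Return only the highest-scoring stories; the rest stay hidden."""
    ranked = sorted(stories, key=lambda s: s["engagement"], reverse=True)
    return ranked[:top_n]

feed = [
    {"title": "Funny dog clip", "engagement": 0.9},
    {"title": "Ice bucket challenge", "engagement": 0.8},
    {"title": "Celebrity gossip", "engagement": 0.7},
    {"title": "Ferguson protests", "engagement": 0.4},
]

visible = curate_feed(feed)
# The lowest-scoring story is filtered out, and the reader
# has no way of knowing it was ever in the feed.
```

The point of the sketch is that the reader only ever sees the output of `curate_feed`, never the full `feed`, which is exactly why an unaware user can mistake a curated selection for the whole picture.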

Many people are unaware that this is happening: “sixty-two percent of undergraduates were not aware that Facebook curated users’ News Feeds by algorithm—much less the way in which the algorithm works” (Tufekci, 2015). If people do not know their news is being filtered for them, they may simply believe the first few things they come across. “In addition, the Facebook study showed that Facebook is able to induce mood changes” (Tufekci, 2015), which can pose a problem if used in the wrong way. If the algorithms were always aimed in the right direction, this could be a great way to benefit people, but the complexity and constant changes of these algorithms make that hard to control. “In another example, researchers were able to identify people with a high likelihood of lapsing into depression before the onset of their clinical symptoms” (Tufekci, 2015). Identifying and helping people in these ways could be a real benefit. Unfortunately, I do not believe this is how algorithmic gatekeeping will be used in the future.

My beliefs are backed by the examples of Ferguson, the 2010 election, and a hiring algorithm. When the algorithm decided the Ferguson story was not “relevant” enough to bring it to more people’s attention sooner, it was almost as if it had its own agenda or reason to hide the story. The entire country was talking about it, and yet it still wasn’t “relevant” enough. The 2010 election experiment, in which millions of people were experimented on without their knowledge, showed the possibility of swinging votes one way or the other. If a hiring algorithm can discriminate based on race or belief, you have to ask yourself: what else can a human make an algorithm do, and what can an algorithm make a human do?

With the use of algorithmic gatekeeping, my campaign about structurally deficient bridges could be shown on the news feeds of people who were never looking for it. People who have liked articles somewhat related to this topic could have my campaign pop up for them to read. Still, I believe gatekeeping would restrict my campaign more than help it. I don’t think many people are interested in this topic, which makes it less likely that my campaign would be filtered into many people’s timelines.

5 thoughts on “Filtered News”

  1. I believe that with these algorithms there are grey areas, like with many things that exist in the world. It is hard because these algorithms can be used very efficiently for business but could also be disastrous for large-scale events like the election you mentioned. I am thinking about how I shop online, then close out of the page and click onto Facebook, and I see that same company advertising in the corner. I understand that is not a coincidence; it’s because of algorithms like this. Personally, I don’t think that is an invasion of privacy; I believe it is a strategic and smart business move by that specific company. Sure, it can be a little misleading and manipulative and could cause me to go back to the website and buy the clothing I wasn’t planning on getting before.

    Of course, there are always loopholes in these rules. There are always ways to turn this harmless algorithm into a much more controversial topic, like the election. The question, though, is: should the small group that abuses this tool ruin it for everybody who is playing fair? Or should we try to enforce stricter and more specific guidelines for the use of algorithms? While you believe that making these algorithms illegal is the answer, I believe stricter regulation could be the middle ground, which would leave the harmless groups unscathed.

  2. When one thinks of algorithms and the internet, one may think of some sort of math problem solver or calculation generator. In the case of algorithmic gatekeeping, there is always a gray area, because sometimes it can be frowned upon or even illegal. After reading the article, my definition of algorithmic gatekeeping was similar to yours, in which information or searches are filtered on purpose to achieve a person’s goal or, as you say, “push someone else’s agenda.” On a smaller level, algorithmic manipulations aren’t illegal acts at all, ranging from deciding the color of a button to decisions as significant as which news article is shown to the public. Let’s say, for example, a lot of people from Pittsburgh are searching, tweeting, and posting about the Steelers game; then some searches might lead you to sports sites that show the score or highlights of the game, or even websites where you can purchase Steelers apparel and memorabilia. Many people are unaware of this filtering and manipulation, and this may lead to things such as Facebook being able to “induce mood changes.” These algorithms may show people heartwarming videos of cats and dogs or even funny videos, or they could cause an emotional movement and have social media plastered with videos of destruction, terrorism, and other disasters. What comes to mind for me is when there is a terrorist attack, and those stories seem to be the only thing all over social media, right in your face. This will most likely get a reaction out of people, who may donate money, volunteer their help, or even enlist in the military in extreme cases. One of the more major events that comes to mind is the 2016 election. Everyone from crazy conspiracy theorists to not-so-knowledgeable-of-politics citizens probably has something to say about the election being rigged, or anything along those lines. Algorithmic gatekeeping could have played a large part in this. If someone searched for Trump or liked one of his revolting tweets, they might begin to see videos of Hillary Clinton being “crooked” pop up on their timeline, swaying their political ideas. All in all, algorithmic manipulation will always be used, but the difference between right and wrong will always walk a thin line.

  3. I also agree that while algorithms should not be made completely illegal, their use in cases like elections should be, and general filtering of information should be heavily monitored. The use of algorithms on social media always spooks me a bit whenever I see products I was browsing online pop up in Facebook ads within a few minutes of searching for them, but this article really put into perspective the serious negative effects such systems can have on the public.
    One line that stuck out to me most in the reading, when comparing the use of algorithms for standard data analysis with the use of things like Facebook algorithms, was: “In subjective decision-making, there is no such ‘correct’ answer with which to anchor and evaluate the algorithm’s operations.” When it comes to analyzing normal data sets for economic or sales purposes, it may be necessary to filter the data, but when it has to do with people’s opinions and understanding, filtration must be taken very seriously. The information that is made readily available to people through social media can have a major impact on the opinions they develop on current events, politics, and much more. Like you said, I also often don’t go searching for information when looking at Facebook, especially when just going on it to kill time, so the little information being presented could negatively affect my way of thinking. I think most young people go on social media for entertainment (or procrastination) purposes for brief periods throughout the day, and often only bother to read what really stands out to them. This means that they may be swayed on certain issues by the few articles they glanced over while on sites like Facebook.

  4. I completely agree with everything you just said. I find it extremely scary that social media has this type of control over us. It seems as though a depressed person who likes a bunch of depressing tweets will be more likely to have even more depressing tweets pop up on their timeline, which will make them fall deeper into this hole and not help them get better. On the other hand, if Twitter or Facebook algorithms were used to detect this depression and do the opposite, choosing to show more happy posts, it could be beneficial to those with depression, because they would see happy and optimistic things that lift them up rather than drag them down. Also, the whole Ferguson ordeal, with Facebook conveniently finding it “not relevant,” is a little ridiculous to me. I feel like there are some conspiracies that can be made out of that. It makes me think that social media can hide what we see and make us think differently about certain things. They can take away our right to protest without us even knowing. It’s terrifying, when you think about it, that we are letting a computer program manipulate us in this way. It is as if we are losing all control of humanity and independent thought.

  5. I agree with your statements. However, I believe that shoving ideas down the general public’s throat is a negative way to bring attention to a subject. Some advertising or campaigning teams may argue that algorithmic gatekeeping is a form of rhetorical velocity because it allows a certain idea to travel across social media at an accelerated rate.
    For example, if a certain carbonated beverage company, let’s say Pepsi, wanted to increase profits and realized that fewer and fewer people have time to sit down and watch TV to view their commercials, they might want to push their product through social media. Pepsi would create some algorithm and purposely filter your media so that only their commercials are present. This would eliminate competition with other similar name-brand carbonated drinks.
    Another example of algorithmic filtering involves campaign teams for certain political parties. Political parties during election season will purposely filter your feed to strictly show propaganda for a certain candidate. This is believed to increase the basic popularity of that candidate and make that political party more apparent to the general voting public. This would in turn create rhetorical velocity and make people want to vote for that party.
