Algorithmic Gatekeeping

The first thing I thought while reading Tufekci’s piece on algorithms was: “Wow, I need this when I’m writing my newsletter (first campaign piece).” Although she points out that algorithms contain error (not unlike human error), I believe that a piece like a newsletter or mass email would benefit hugely from some type of computer-generated algorithm. Thinking about my own experiences, the sheer number of e-newsletters and articles I receive from organizations and businesses I subscribe to does not necessarily make me want to stay updated with them; but the ones I am actually interested in, I always read, and I often do something afterward that leads to their profiting from it.

With a school newsletter, I do not think any sort of algorithmic system would add much. Most likely, people reading a school newsletter have children in the school or some other personal stake in its achievements; the people receiving it actually want to read it. However, a newsletter from a company or an organization like a non-profit that is trying to get the word out could definitely benefit from computed algorithms, since they would be able to focus in on the audience that is going to be the most responsive and therefore provide some benefit to their cause.

When Tufekci uses the term “algorithmic gatekeeping,” she is alluding to the idea that these computer-generated algorithms are able to keep certain topics, people, and opinions out, and let others in, thereby subtly shaping simple, everyday decisions in the average social media user’s life. When writing for the public, this could be a game-changer. If you could correctly identify your audience’s views on any given subject, you could tailor the views presented (and the way they were presented) almost perfectly.

Algorithmic gatekeeping, to me, is something that becomes slightly too intelligent when it comes to addressing the public. Although I don’t feel personally endangered by Facebook using my likes and interests to better tailor my newsfeed, I don’t like the idea of it all being run by a computer program that most humans don’t understand. I might be completely wrong, but I believe that the error that comes with having an essentially unpredictable audience is part of what makes public writing all the more persuasive and comprehensible. With the use of algorithms, the gap for changing people’s minds becomes much smaller, which prevents a potential shift in opinion for any given individual within the much larger audience the writer or author might have in mind. Without this room for swaying prior attitudes toward a subject, I think the author or creator of public writing loses an important part of the initial attempt to present information persuasively.

When Tufekci explains the example of being able to tell whether an individual will vote Democrat or Republican by looking at their likes, interests, activities, and friends on Facebook, I imagine some sort of futuristic scanning of the average human brain, and I feel betrayed, violated, and a little bit scared. She explains, “However, marketers, political strategists, and similar actors have always engaged in such ‘guessing.’” While I understand this is true, I stand by my point that writing for the public must include some sort of guessing and anticipation of the audience. Although this may be the future for people writing op-eds and school newsletters, I do not believe it needs to be used in everyday life.


4 thoughts on “Algorithmic Gatekeeping”

  1. I can definitely resonate with a lot of what you were saying about this concept. It’s definitely scary and a little nerve-racking to know that everything I am doing online is being monitored and collected to target me with certain ads tailored to my likes. But at the same time, I can see why it would be revolutionary for public writing. If done correctly, it could make a world of difference in how certain pieces are received and how far they will be able to spread their message. For our group, we wanted to specifically target younger college students and families with young children. Now, using an algorithm that looked for new birth certificates would easily be taken as an invasion of privacy. But if the algorithm targeted areas near prominent schools and colleges, the message would still be much more likely to hit the target audience, and it would do so without having to invade someone’s personal life. It’s incredibly easy for a public document to fall by the wayside if it’s not published effectively. Therefore, if it can be shaped and targeted using algorithms without invading the privacy of the public, it seems like a poor decision not to give it a try, even if the difference is minuscule.

  2. I want to challenge what you said a little bit in your fourth paragraph, because I find it very interesting how you are almost slighting computer algorithms. When you say most people don’t understand computer algorithms, that is 100 percent true; but, at least in my opinion, there is still the comforting fact that a human made the algorithm. Though we might not understand the coding aspect, everyone would generally understand (if it were explained in words rather than coding language) that these programs take bits of information that are out there on the internet and compile them to later make decisions. The best comparison I can draw is to how detectives gather information in order to hopefully convict a suspect; in terms of Facebook, the computer is just gathering information to convince you to watch the video that pops up in your news feed. To draw a comparison to last week’s reading on rhetorical velocity, computer algorithms are just making rhetorical velocity more efficient. They pull all these videos, articles, and posts together in a way that tailors every single post the algorithm puts into your personal newsfeed to its audience, and I find it hard to say this could have a negative impact on public writing. Overall, I’m not saying what you said is wrong in any regard; this is just my perspective on what you wrote.

  3. Although a lot of what you said made sense to me, I have to disagree with a few points. In your post you claim that “the error that comes with having an essentially unpredictable audience when writing for the public is what makes that writing all the more persuasive and comprehensible.” I do not find much truth in this statement. Throughout our class, the concept of knowing your audience and writing for that audience has been stressed consistently. In fact, knowing your audience and centering your tone, style, design, and genre around successfully reaching them is a crucial factor in successful public writing.
    I find it hard to believe that writing for an unintended audience has more benefits than drawbacks. The use of algorithms, although somewhat unsettling, takes the task of knowing your audience to another level. It goes beyond human capabilities and allows for a personalized public-writing experience, which I believe does not hinder the creator’s persuasion but instead enhances it. Having a specific audience allows the author to really hone in on what they can accomplish; they can use very specific vocabulary and write stylistically to please a narrower audience range. Writing for the masses is hard because you need to include adequate background and vocabulary to suit a wide range of people; when the audience is narrower, using these algorithms, you can reach the same volume of people within a more similar audience.

  4. I find it extremely interesting that your first thought on this article was about utilizing it to further your campaign. It seems you saw it from a business point of view, as someone who does what it takes to push a product. I, on the other hand, was more appalled at how easy it is for a social media platform to control what we see using only a computer algorithm. This gatekeeping is immensely problematic. Like you mentioned, an algorithm could exclude certain key topics from the general public. Currently, people get the majority of their news online, especially on Facebook. Therefore, if an organization lobbied Facebook to quiet a specific event, the average citizen would be kept in the dark. Yet, like you explained, algorithms are perhaps the best new technology of the past couple of years. We can boost our agency and rhetorical velocity tenfold by appealing to those we target the most. For example, with this algorithm, more interested people will see my message about water scarcity. If someone truly does not care, an effective algorithm sorts them out automatically. As a result, people will most likely say more positive things about my campaign, since only those who care will see my advertisements. Algorithms certainly bring up an ethical dilemma for public writers. We could use them to further our message and connect more effectively with our audience, but they set us on a very slippery slope for the future.
