The first thing I thought while reading Tufekci’s piece on algorithms was: “Wow, I need this when I’m writing my newsletter (first campaign piece).” Although she points out that algorithms contain error (not unlike human error), I believe that a piece like a newsletter or mass email would benefit enormously from some type of computer-generated algorithm. Thinking about my own experience, the sheer number of e-newsletters and articles I receive from organizations or businesses I subscribe to does not necessarily make me want to stay updated with them; but the ones that interest me I always read, and I often do something afterward that leads to the sender profiting.
With a school newsletter, I do not really think that any sort of algorithmic system would add anything. Most likely, people who are reading a school newsletter have children in the school or have some other personal stake in the school’s achievements; the people who are receiving it actually want to read it. However, a newsletter from a company or an organization like a non-profit trying to get the word out could definitely benefit from computed algorithms, since it could focus on the audience that is going to be the most responsive and therefore provide some sort of benefit to its cause.
When Tufekci uses the term “algorithmic gatekeeping,” she is alluding to the idea that these computer-generated algorithms are able to keep certain topics, people, and opinions out, and let others in, thereby subtly shaping simple, everyday decisions in the average social media user’s life. When writing for the public, this could be a game-changer. If you were able to correctly identify your audience’s views on any given subject, you would be able to tailor the views presented (and the way they were presented) almost perfectly.
Algorithmic gatekeeping, to me, is something that becomes slightly too intelligent when it comes to addressing the public. Although I don’t feel personally endangered by Facebook using my likes and interests to better tailor my newsfeed to me, I don’t really like the idea of it all being run by a computer program that most humans don’t understand. I might be completely wrong, but I believe that the error that comes with having an essentially unpredictable audience when writing for the public is what makes that writing all the more persuasive and comprehensible. With the use of algorithms, the room for changing people’s minds becomes much smaller, closing off the potential shift in opinion for any given individual within the much larger audience the writer might have in mind. Without this room for swaying previous attitudes toward a subject, I think the author or creator of public writing loses an important part of their initial attempt to present information persuasively.
When Tufekci explains the example of being able to tell whether an individual will vote Democrat or Republican by looking at their likes, interests, activities, and friends on Facebook, I imagine some sort of futuristic scanning of the average human brain, and I feel betrayed, violated, and a little bit scared. She explains, “However, marketers, political strategists, and similar actors have always engaged in such ‘guessing.’” While I understand this is true, I stand by my point that writing for the public must include some sort of guessing and anticipation of the audience. Although this may be the future for people who are writing op-eds and school newsletters, I do not believe it needs to be used in everyday life.