Around this time last week, I heard about the Cambridge Analytica scandal, which now seems almost hilariously ironic given this reading about algorithmic harms. Before I stress how the scandal showcases both positive and negative manipulations, I think it's important to understand algorithmic gatekeeping and agency.
When Tufekci talks about gatekeeping, I think of it in a literal sense. A gatekeeper is someone who controls whether a person or thing comes in or goes out. This gatekeeping could be part of a familiar routine with someone who typically comes and goes, or it could involve an unexpected visitor, in which case the gatekeeper has to decide whether to let the person in. Algorithmic gatekeeping, by contrast, controls the flow of information in and out, and the decisions are made by computational processes. These algorithms are usually hidden from the public, which makes the flow of information much more complex. Algorithmic gatekeeping can't necessarily decide what's right and what's wrong; it only directs where information goes, revises what information is released, and determines who can see which information, in ways that can manipulate data and people's thoughts. In a generic sense, algorithmic gatekeeping is controlling what content people see, but it is generally computed in a set way for large groups of people. Instagram is a great example: rather than keeping feeds chronological, the platform uses a specific algorithm to surface posts a person might typically miss.
Computational agency also derives from algorithmic gatekeeping, but it specifically focuses on how these algorithms make decisions on behalf of outside parties. When I think of agency, the Cambridge Analytica scandal comes to mind. Although an algorithm accidentally miscategorizing information can still count as agency, another way to look at it is through the consulting firm itself. Facebook gave Cambridge Analytica access to user data through a quiz that was a Facebook app. Cambridge Analytica then took its research from this app and used it to help the Trump campaign, and through data manipulation it may have been able to influence how people voted. I felt this was relevant when, in an undercover video, C.A. bragged that it created the "Crooked Hillary" campaign and spread it throughout Facebook.
Tufekci brings up some interesting points about algorithmic editing in public writing. More people are reading things online, so information could reach more people. However, with so many online platforms, social media may actually make information reach narrower audiences, since what people see depends on what they are interested in and on what algorithms decide to show them first. Depending on the message being spread, public writing posted online can be received differently than the same writing printed in a newspaper or broadcast on television. For example, I feel that flat-earth believers are received better online than in print. As Tufekci points out, pages that are printed the same way for a mass audience are easier to challenge. If the only thing you know is that the earth is round, one article is not going to change your opinion. From first-hand experience, though, I've seen many people online come to believe the earth is flat because of the way information can be manipulated through algorithms.
My first campaign piece is a still image about the mental health problems that exist for a college student, and someone might immediately love it or immediately hate it. My second campaign piece deals with audio, however, and it could be received in many different ways. If I posted my audio piece online, some of what I say could be accepted as fact. If Forbes Magazine posted my work and praised it, it would gain more legitimacy. If Pitt posted it, it would reach a large audience, since the information I am giving pertains to that specific audience. If Fox News posted the same information, however, my message could be restricted by the large conservative audience the network has. As a public writer, I will have to consider the content of my campaign piece and how different platforms might aid or restrict my message. Algorithmic gatekeeping changes the game when writing for the public, because now writers must consider how they can beat the game, or win against an algorithm so intelligent it can tell more about readers and writers than they know about themselves.
When I read the first sentence of Tufekci's article, I was also immediately reminded of the scandal concerning Facebook and Cambridge Analytica affecting the 2016 election. It really shows the huge impact these algorithms have on our daily lives, an impact we don't take the time to think about.
I find it interesting that you think of algorithmic gatekeeping in a literal sense, when that entire branch of computer science was created as something that can’t easily be replicated outside of the digital world. When you mention that this gatekeeping is hidden from the public, why do you think that makes it more complex? Does the secrecy allow it to manipulate users differently than if it were public, if it would be allowed at all? It’s also intriguing that the answers these algorithms provide can’t be classified as right or wrong, unlike other algorithm functions such as sorting numerically. For me, this ambiguity almost makes this concept scary to think about, since it happens so much.
Your take on the effect of these algorithms on how our writing reaches the public is very well thought out, especially how social media may narrow the reach of our pieces. Normally I would think that it tailors the content to specific people, which would expand this breadth, but the opposite can definitely be true as well, depending on the content of your campaign pieces.
I think you hit on the central concern for public writing in terms of what we can do *as writers* now: “now as a writer, people must consider how they can beat the game or win against an algorithm/computer system”
Let’s take this further. What do we do as writers? How do we frame our writing in a way that works with, rather than against, a newsfeed’s or search engine’s sorting system? For newsfeeds, we need to write in a way that engages eyeballs in that scrolling environment; the longer the eyeballs are there, the longer the user can see ads. What can we create to make sure our content reaches that goal? How can we craft our writing?
In terms of search engines, how seriously should we take SEO? For instance, here is Google’s manual on how to write for them: https://support.google.com/webmasters/answer/7451184?hl=en