Tufekci explains algorithmic gatekeeping thoroughly: “Algorithmic gatekeeping is the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role–fully or partially–in determining: information flows through online platforms and similar media; human resources processes (such as hiring and firing); flag potential terrorists; and more” (Tufekci 208).
In the context of social media, I understand this as complex computer algorithms working together to shape what you see, whether by hiding certain things or by promoting them. These algorithms draw on what Tufekci calls “big data”: the large profile of you built up over time from all your interactions on the web. This could explain why, after shopping online for shoes, you might later see a Facebook ad for shoes. It could also create a kind of social bubble, in which Facebook shows you only the posts from friends whose messages politically align with yours. Or it could steer you toward particular sources; someone who constantly reads HuffPost or Fox might be offered other outlets with a similar political lean.
I think the negatives of this are vast and quite scary in terms of how much information accumulates about you and how much the internet can know about you, like using Facebook “like” data to predict your traits (210). But I think there can be positives. If a website knows exactly what you are interested in and looking for, it may be able to present that information more effectively. On a deeper level, if technology can understand so much about you, it could perhaps calculate how different governmental policies would affect your life. The potential is huge, both for harm and for good. The question then becomes: is it ethical to filter and manipulate information in a way that is this powerful?
In our campaign, we are creating a website, and each piece we create is uploaded to it. If we promote the site on social media, algorithms might determine to whom our website or campaign pieces would be of interest and place them somewhere in those users’ news feeds. If a user likes our work, more work like ours could appear, or that user’s Facebook friends might be alerted to it. Our posts could even be grouped with others carrying similar messages in an attempt to influence moods, “likes,” and further posts. On the other hand, the algorithms might deem our work irrelevant and keep it out of news feeds entirely, or push it into feeds where people try to discredit it. As Tufekci describes, these algorithms are complex, and it is not entirely clear what their prime focus is, or whether they even have one. It may depend on who or what is running the algorithms and on that person’s or corporation’s interest in using them.
Although Facebook is a huge collector of data, I don’t necessarily think it intends to use this data malevolently; more likely the aim is efficiency and economic gain. Facebook’s goal is to keep users scrolling through the page: more scrolling means more posts and ads seen, and clicking on or “liking” those posts or ads yields more data about you. The more data Facebook has about you, the better it can tailor content to your interests, and, it hopes, the more you scroll.
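The feedback loop described above can be sketched as a toy model. Everything here, the topics, the scoring rule, the click handler, is invented for illustration; real feed-ranking systems are vastly more complex and, as Tufekci stresses, non-transparent:

```python
# Toy model of the engagement feedback loop: interactions build a profile,
# and the profile tilts future rankings toward the same kinds of content.
# All names and numbers are hypothetical, not any platform's actual system.

from collections import Counter

profile = Counter()  # accumulated interest data about one user

posts = [
    {"id": 1, "topic": "shoes"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "shoes"},
    {"id": 4, "topic": "sports"},
]

def rank_feed(posts, profile):
    # Posts on topics the user has engaged with before float to the top.
    return sorted(posts, key=lambda p: profile[p["topic"]], reverse=True)

def record_click(post, profile):
    # Each click feeds back into the profile, so the next ranking leans
    # further toward the same topic -- the "bubble" effect in miniature.
    profile[post["topic"]] += 1

record_click(posts[0], profile)       # the user clicks one shoe post
feed = rank_feed(posts, profile)
print([p["topic"] for p in feed])     # shoe posts now lead the feed
```

One click is enough to reorder the whole feed in this sketch, which is the unsettling part: the system never asks what you want to see, it infers it from what you already did.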