HW#3 – Outsmarting The Personalization Algorithms

In chapter 8 of The Filter Bubble, Eli Pariser offers suggestions for addressing the problems of the filter bubble that pervades our daily lives. He explains that responses to the effects of the filter bubble fall into three primary categories: actions from companies, actions from the government, and actions from individuals.

Action from companies is, in theory, the best solution for addressing the concerns surrounding personalization and privacy. If companies were transparent about their motives and admitted their use of personalization, users would understand the effects of using search engine X (Google) versus search engine Y (DuckDuckGo). The latter clearly and explicitly states that it does not track your behavior or filter your searches, whereas the former offers no such details anywhere visible to the user, and by visible I mean not having to move heaven and earth to find the disclaimer notice. “A visitor to a personalized news site could be given the option of seeing how many other visitors were seeing which articles…of course, this requires admitting to the user that personalization is happening in the first place, and there are strong reasons in some cases for businesses not to do so. But they’re mostly commercial reasons, not ethical ones” (Pariser 232). So in a perfect world, if companies were transparent about their behavior, users of their services would at least know what other users are engaging with and what sources others have visited. However, the world is not perfect, and companies act in their own best interests and in the interests of their stakeholders, thereby forgoing the interests of their customers (the users).

Action from government is the weakest approach to addressing the issues of personalization and privacy. I have been registered on the Do Not Call list for the past four years or so, yet in the past three years I have been bombarded with countless calls from telemarketers and surveyors. One might ask whether the government is really monitoring these lists or whether it has given up and moved on to more important matters. This is why I believe a Do Not Track list is the weakest method of addressing the problems of privacy and personalization. “But Do Not Track would probably offer a binary choice – either you’re in or you’re out – and services that make money on tracking might simply disable themselves for Do Not Track list members…And as a result, the process could backfire – “proving” that people don’t care about tracking, when in fact what most of us want is more nuanced ways of asserting control” (Pariser 238). If you use Facebook daily, fall into the category of being a mouse (visiting the same sites in the same way, day after day), and opt in to the Do Not Track list, the worst may occur: you may be blocked from using Facebook. Who are we to say that Facebook cannot ban, lock, or disable your account for not “complying” with its user policy agreement? As Pariser says, “most companies reserve the right to change the rules of the game at any time” (Pariser 239). Facebook could claim the right to disable your account for noncompliance with its policies. Let’s also not forget that if users exhibit the mouse-like behavior Pariser outlines in the chapter, then the users in our Facebook example will most likely continue using Facebook rather than opting in to the Do Not Track list. If users need the daily digest of their Facebook news feed, they will most likely continue to use it even if that means Facebook tracking their every move. Think of smoking for a moment: people do it even though they know it is bad for their health, yet why do they continue? Because it is ADDICTING. This is exactly why, as much as we love using Facebook and Google, we will continue to use their services even though they track us; without them we would essentially go through “withdrawal.” As with smoking, we are in essence sacrificing our health (our privacy) for a moment of satisfaction and pleasure: using Facebook, Google, and the like.

Now the only solution left for defending ourselves against the effects of the filter bubble is action from individuals. “In courts around the world, information brokers are pushing this view – everyone’s better off if your online life is owned by us” (Pariser 239). This may be the most frightening statement posed to users of the internet. It is as though they are saying the cyber world (the internet) should be governed by a dictator or an oligarchy. The internet is supposed to be a level playing field for all users, a world where users are granted certain rights (rights to privacy). But again, this is not a perfect world, and users keep making the same mistake of not knowing what information is being collected from them until it is too late. We have to ask ourselves: who knows me better than I know myself? No one. This is why, in order to defend ourselves against the filter bubble, we need to take action into our own hands rather than leave it to the government and companies; if users truly want control over their activity on the internet, then users should be responsible for that control.

It is my strong belief that being educated and knowledgeable about the filter bubble and about these privacy and personalization concerns is one of the most important factors in addressing them. This is why I believe the strongest idea for addressing the problems of the filter bubble is to develop algorithmic literacy. “It doesn’t take long to become literate enough to understand what most basic bits of code are doing” (Pariser 229). Even if Facebook tells us transparently that it is collecting our information for reasons X and Y, that still does not tell us how the data is collected or what type of data it is. The most effective way for users to understand how Google filters our search results or how Facebook filters our news feed is to understand the fundamentals of how their algorithms work. The personalization algorithms companies use are all broadly similar: the server (the host) records information about the user in a database (where all your data is stored) and builds up set patterns of behavior. We can begin to understand how such an algorithm is put together by first learning the language of code.
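To make this concrete, here is a minimal, purely hypothetical sketch of the kind of behavior logging described above. Nothing here is Google’s or Facebook’s actual code; the class name, the topics, and the ranking rule are assumptions invented only for illustration.

```python
from collections import Counter

class ToyPersonalizer:
    """A toy stand-in for a personalization engine (hypothetical, for illustration only)."""

    def __init__(self):
        # The "database": a running count of the topics this user has clicked on.
        self.profile = Counter()

    def record_click(self, topic):
        # Every click is logged and strengthens the stored pattern of behavior.
        self.profile[topic] += 1

    def rank(self, articles):
        # Articles whose topic matches past clicks float to the top of the feed.
        return sorted(articles, key=lambda a: self.profile[a["topic"]], reverse=True)

user = ToyPersonalizer()
for _ in range(5):
    user.record_click("sports")   # a "mouse" who reads sports stories every day
user.record_click("politics")

articles = [
    {"title": "Election roundup", "topic": "politics"},
    {"title": "Game highlights",  "topic": "sports"},
    {"title": "Climate report",   "topic": "science"},
]
for article in user.rank(articles):
    print(article["title"])       # the sports story comes first; science never surfaces
```

Even in a toy this small, the core pattern is visible: every click is stored, and what you clicked before decides what you are shown next.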

Understanding how code works, and what a given line of code does, ultimately gives you a behind-the-scenes, all-access pass into how a company’s personalization algorithm works. “We need to recognize that societal values about justice, freedom, and opportunity are embedded in how code is written and what it solves for. Once we understand that, we can begin to figure out which variables we care about and imagine how we might solve for something different” (Pariser 227). If everyone understood code, we could actually alter our behavior to counteract the code already in place. To put it simply, if Google’s personalization algorithm does X, then we the users should do Y to avoid falling into the personalization trap. For example, when we enter a query into the Google search bar, instead of clicking on the top results, or even the first page for that matter, we could go to the second page and click on those links; that way we avoid being trapped by the personalization algorithm, since we can safely assume the top results are heavily tailored to each individual based on previous behavioral preferences. By understanding code we can understand more of our behavior on the internet; we can begin to understand how a site works and how data is gathered, retrieved, stored, and assimilated. If users no longer want to sit in the passenger seat while companies personalize their preferences, then users should start sitting in the driver’s seat: take control of your behavior, and acknowledge that tracking exists and is unavoidable to a certain extent, because even if you click the second page of Google results every time, Google may start personalizing that newly altered behavior as well. Still, by recognizing that such algorithms exist to impinge on your privacy and to build your filter bubble, we can at least avoid it for a period of time. Understanding the code behind how an algorithm is executed is crucial because code is simply lines of text that do either X or Y. Code is deterministic and will not do something that isn’t defined within those lines. So if we know the basics of how a generic personalization algorithm works, then we, the users, have a chance to take advantage of this knowledge and alter our behavior in order to avoid being filtered: we can outsmart the personalization algorithms!
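Continuing the same toy idea, the sketch below (again purely hypothetical; the “skew” measure is my own invention, not any real company’s metric) shows why deliberately varied clicking weakens the signal a tracker can personalize on.

```python
from collections import Counter

def skew(profile):
    # Fraction of all recorded clicks that belong to the single dominant topic.
    total = sum(profile.values())
    return max(profile.values()) / total if total else 0.0

habitual = Counter({"sports": 20, "politics": 1})               # the "mouse" reader
varied   = Counter({"sports": 7, "politics": 7, "science": 7})  # deliberately mixed clicks

print(f"habitual reader skew: {skew(habitual):.2f}")  # ~0.95: results heavily tailored
print(f"varied reader skew:   {skew(varied):.2f}")    # ~0.33: much weaker signal to personalize on
```

The point is not the numbers themselves but the principle: if the algorithm feeds on predictable clicks, unpredictable clicks are the user’s counter-move.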