Surveillance is privacy. Algorithms know best. Facebook is Friend.
It’s hard to tell which dystopian novel we’ve wandered into recently. We’re somewhere between Orwell’s 1984 (Big Brother could be the NSA, or Facebook, or Google?), Atwood’s The Handmaid’s Tale (so long, reproductive rights!), and a dash of Veronica Roth’s Divergent (a society divided into factions sounds like a familiar narrative), and hopefully not quite at Hunger Games levels.
Storytelling is as old as hominins. Constructing our own narratives is a biological drive, according to Darwin’s theory of sexual selection (as opposed to his idea of natural selection). Facebook gives us a perfect platform to shout our stories from the proverbial mountaintop. “Here I am! Pay attention to me! What I’m saying is unique and important!” Then we sit back and count the likes, the loves, the smiley faces, the validation that someone cares.
Facebook is designed with this need in mind. Every week a new update rolls out that knows us better, is able to predict what we want, shows us certain friends and not others, and caters ads to us specifically. This is what “algorithmic gatekeeping” means. Facebook is the lens through which we view our social media lives; we are subject to Facebook’s bias, and we are blind to Facebook’s control. We may be the head, but Facebook is the neck.
The author explains the concept of gatekeeping with a phone-call analogy: “In this analogy, your phone would algorithmically manipulate who you heard from, which sentences you heard, and in what order you heard them—keeping you on the phone longer, and thus successfully serving you more ads”. Facebook’s algorithms show you what they want you to see, for whatever purpose they choose, and you have no way of knowing the degree of this manipulation.
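To make the mechanism concrete, here is a minimal sketch in Python of what engagement-driven ranking could look like. Everything in it is hypothetical: the post fields, the affinity scores, and the ad multiplier are invented for illustration, since Facebook’s actual ranking signals are not public.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# All field names, scores, and weights are invented for illustration;
# Facebook's real ranking signals and formula are not public.

posts = [
    {"author": "close_friend", "likes": 120, "is_ad": False, "topic": "birthday party"},
    {"author": "acquaintance", "likes": 3,   "is_ad": False, "topic": "local protest"},
    {"author": "advertiser",   "likes": 45,  "is_ad": True,  "topic": "shoe sale"},
]

# How often you interact with each author (higher = shown more often).
affinity = {"close_friend": 2.0, "advertiser": 0.8, "acquaintance": 0.3}

def engagement_score(post):
    """Score a post by how likely it is to keep the user scrolling."""
    score = post["likes"]                       # raw popularity
    score *= affinity.get(post["author"], 0.1)  # whom you already engage with
    if post["is_ad"]:
        score *= 1.5                            # ads pay the bills
    return score

# The "gatekeeping": you never see the unranked feed, only the
# ordering the platform chose. The low-affinity protest post sinks.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{post['topic']}: {engagement_score(post):.1f}")
```

Notice that nothing in the sketch is overtly malicious; the protest post simply loses to items that score better on engagement, which is exactly why this kind of manipulation is so hard to detect from the outside.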
The internet is the last frontier, the modern Wild, Wild West. Laws, ethics, and oversight are playing catch-up to the climate of social media. Events like “Gamergate,” in which “trolls” threatened self-identified female gamers, along with incidents of revenge porn, general bullying, hate speech, and culturally insensitive remarks, have received public backlash, with critics calling for social media companies to better monitor the content that users share. The question remains: to what degree should a social media company censor content or users? Public backlash has led to debates about how much social media, and the consequences of a person’s actions on those sites, should bleed over into a person’s “real life”. Social media lifts up idols to watch them crash. Last election, Ken Bone, a bespectacled man in a red sweater, was hailed as a hero, only to have his internet history dug up and the public tide turn on the discovery that he was not as innocent as his visage suggested.
Facebook seems to be following the “don’t ask for permission, ask for forgiveness later” mantra. According to the article, Facebook revealed after the 2010 U.S. election that it ‘promoted’ voting by ‘nudging’ people to vote with the suggestion that their friends had also voted. Encouraging civic engagement is fairly innocuous. Hiding events like the Ferguson protests could be harmful, and it suggests that Facebook’s algorithms suffer from the same racial bias as their creators.
For any group creating campaign pieces for social justice issues, especially issues that intersect with structural racism and classism, it will be important to remember the example of Ferguson. Facebook’s algorithm will likely filter out your content, and at this time it is hard to determine how many people will actually see it. Put a cheery spin on a social justice issue and you might just sneak past the algorithm, as the sketch below illustrates.
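As a purely illustrative sketch, the “cheery spin” loophole might look something like this. The positivity scores and the 0.5 threshold are assumptions invented for this example; nobody outside Facebook knows the real filtering criteria.

```python
# Hypothetical sketch of a tone-based visibility gate.
# The positivity scores and threshold are invented for illustration;
# the real filtering criteria are not public.

def is_shown(post, threshold=0.5):
    """Suppress posts whose tone falls below a positivity threshold."""
    return post["positivity"] >= threshold

campaign_posts = [
    {"text": "Join us in celebrating our community's resilience!", "positivity": 0.8},
    {"text": "Police violence continues unchecked in our city.",   "positivity": 0.2},
]

for post in campaign_posts:
    status = "shown" if is_shown(post) else "filtered out"
    print(f"{status}: {post['text']}")
```

The two posts could describe the same event; only the framing differs, yet under a gate like this one reaches an audience and the other does not.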
Until Facebook is held to some measure of accountability and transparency by an oversight organization (possibly the FCC), we will be ruled by the almighty algorithm.
There is still hope: the algorithm isn’t perfect. An artist’s rendition of a robin on a Christmas card was recently flagged as “indecent”. The controversy has now given her more coverage than her post would originally have received. You can decide for yourself here:
For additional information on our evolutionary history with storytelling, go here:
For more information on how social media and smartphones are turning you into a validation zombie:
https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/
For more about racial bias in algorithms:
The author of this blog raises a seemingly straightforward but immensely consequential question: “To what degree should a social media company censor content or users?” Question of the decade. Setting aside the complex arguments about free speech, the protections on content of a certain nature, and government involvement at every step of the process, I’d like to focus on the kind of social contract between social media companies and their users. By 2017, the human race has accomplished some awesome and terrible things. Technological innovations have abounded, but systemic racism is alive and well. People across the globe are more connected than ever, but with the positive exchanges come the negatives of terrorism and the spread of disease. In a world like this, absolutely everyone who takes part in the system makes some compromises. Ignoring the Webster definition, Netflix’s Stranger Things describes compromise as being “halfway happy.”
I think most social media users have to accept that they’ll be halfway happy when it comes to their online habits and observations. Social media accounts, for the most part, have no financial costs associated with registering. You must have an internet connection and presumably pay for it, but that has little to do with companies such as Twitter or Facebook. Instead, the tradeoff for maintaining a profile on these sites comes in the forfeit of privacy. Companies conduct crowdsourced data collection with or without the knowledge of users, using the information for generally benevolent, if invasive, purposes like social experiments. Personally, I wish Facebook didn’t know who the people I interact with most are. I wish they couldn’t guess my sexual orientation or how much I like my immediate family. Nevertheless, I know that they probably do, and I begrudgingly pay this price for access to their site. I am halfway happy with our arrangement. Though I might not call the exchange of information for access an entirely fair deal, it is an adequate one, and one I am willing to live with.
I think this post gets at one of the biggest issues with “algorithmic gatekeeping.” It is unsettling how thoroughly technology curates our information. As the post says, algorithmic gatekeeping “knows us better, is able to predict what we want, shows us certain friends and not others, and caters ads to us specifically.” But the real issues arise when we consider the things that we are not seeing or learning about.
Following the example of the Ferguson protests, we can see how having information curated for us is an issue. There should be no instance where an algorithm suppresses breaking news that is vital for people to know simply because the algorithm doesn’t deem it necessary. However, I think it is important to make the distinction between purposeful deception and poor programming. There needs to be more oversight and transparency if algorithmic gatekeeping is to continue. There needs to be a way for people to learn new information that is not processed and disseminated to them simply because they will like it or have liked similar information. Facebook needs to go beyond being cautious; it needs to find a way to change such a powerful program to be less limiting and exclusive.