Steering the “Right” Way

In the article, algorithms are treated as an interactive element in analyzing users’ online profiles. These analytics are what Tufekci refers to as “algorithmic gatekeeping.” As I read it, the gatekeeping refers to the “answer”: for example, after viewing a local animal shelter’s webpage, a user stumbles upon a Buzzfeed article on their Facebook newsfeed about what kind of puppy they are. The “answer” Tufekci mentions, however, isn’t always clearly right or wrong. The gatekeeping element, in my opinion, is about what the public interacts with before arriving at the entrance to their next path, the next gate if you will. Facebook (and many other social media sites) uses this data and the insights drawn from it for things like ads, news articles, and other interactive content. Simply put, “algorithmic gatekeeping” is the process of getting from one interaction to the doorstep of the next. What we (the public) interact with initially is going to lead us to the next interaction, whether we realize our next “like” has any impact on it or not.
In the context of the article, “agency,” or specifically “computational agency,” refers to the online, largely non-transparent tools that control what users see, e.g., the switch from chronological timelines to ones based on what the user had previously interacted with. Tufekci supports this when she writes, “In Facebook’s case, the algorithmic “editing” is dynamic, all but invisible, and individually tailored.”
As public writers, the use of specific algorithms goes beyond the previously established model of an ad aired during a given TV show. That advertisement was geared toward the show’s most popular audience, and odds are the targeting stopped there. Algorithmic gatekeeping, by contrast, is dynamic and malleable, and it gives “gatekeepers” the chance to adapt their code based on feedback from individuals. With this in mind, gatekeepers are given the power to sway the public’s access to, and overall perception of, a given event, whether it be an election (as discussed in the article) or a public water crisis.

With this in mind, gatekeeping can be both useful and detrimental to any campaign. For my specific focus on investment in infrastructure, the message not only needs to pass through the general public but needs to find itself on the desk of a private firm in the right location. How will it get there if investors don’t realize that infrastructure is at risk? Will a private capital investor in San Diego really care that Pennsylvania has some of the worst bridge structures in the United States? First, my campaign needs to raise the “right” questions; it needs to bring light to the lack of quality infrastructure in the region. With the “right” questions will come the “right” answers, meaning interest in infrastructure will increase, and the information will become more and more tailored to users. With that tailoring, demographics will begin to play their role, and users in the area will be exposed to the information. Once regionally spread, the “right” answers will again raise questions that lead to the final gate: investment. The complication in gatekeeping is finding the “right” niches. There aren’t any strictly right or wrong answers, but each click or interaction will steer in one direction or another. The steering will either send an investor to the website (or article, ad, etc.) of my investment campaign on infrastructure, or to another page, such as land and mineral acquisition.

Algorithmic Gatekeeping: The Social Impacts of Social Media on Public Writing

I have always been extremely intrigued by the concept of algorithmic gatekeeping and how the algorithms that social media platforms use manipulate the opinions and actions of our society. I have been on Facebook for about half of my life; I created my Facebook page when I was 13 years old, and at that time my mother’s biggest concern was creepy old men trying to friend me and kidnap me. However, her concerns were never about how my interactions and decisions on Facebook would contribute to a massive data set that could manipulate my thoughts, political opinions, or even my mood.

Tufekci discusses the term “algorithmic gatekeeping” as “the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes (such as hiring and firing); flag potential terrorists; and more” (207-208). In terms of transparency, Tufekci argues that social media differs from traditional media (print newspapers and television broadcasts) because there is no “visible” bias or agenda presented. When you are watching Fox News or CNN, you can see the biases these corporations have, based on their contradictory coverage of the same stories, and anyone can see it. However, when you are on Facebook and subjected to similar bias, you can’t really see it, because you assume that your algorithmically created News Feed looks just like anyone else’s, except for the content. We are all looking at the “same” website, but the content presented to each of us is different based on our biases. This concept is called the Filter Bubble, and it is an incredibly important concept when discussing the impact of algorithmic gatekeeping. The Filter Bubble is essentially the idea that on social media platforms we are shown only content that the algorithms believe will be most appealing to us. This is troublesome because inside the Filter Bubble you will rarely see information from other sides of the same discussion, which reinforces personal biases and does not allow for open public discourse and the dissemination of ideas and information.

There are a lot of practical implications of algorithmic gatekeeping for public writing. The most important is the crowdsourced information that allows Facebook and other social media platforms to understand the demographic and psychographic characteristics of users. Tufekci states that in a study of 58,000 individuals, “the traits modeled—often with eighty to ninety percent accuracy—included “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender” among others” (210). This information would be incredibly helpful to writers who knew the demographics and psychographics of the people who interact with their content. A Huffington Post writer could actively construct articles based on information about the users who tend to interact with their posts most. The same goes for any other publication that would like to make use of this information.

Along with this, if writers understand that there is a higher chance of people with similar ideologies reading their content, they can frame the information in a more biased way to appeal to those readers. Rather than worrying about backlash from the “other side,” writers can use more pointed rhetoric to support their own. This is a huge reason why CNN and Fox News have such a heated rivalry. People do not want to actively seek out concessions to their arguments to understand the full picture of a topic; they would rather have the information support their opinions and ideas. This is also why platforms like Facebook have so much political hostility associated with them. No one wants to be told that they are wrong, and with the increasingly divisive rhetoric being used in journalism today, articles are more likely to “trigger” people with opposing views.

Algorithmic Gatekeeping

I think Tufekci considers algorithmic gatekeeping a way of releasing information to the public. On Facebook, you could easily release information that people may not know yet. Whether as journalists or just as members of the general public on Facebook, you could easily discuss with all of your friends or followers information they may not know yet and start a conversation. Computational agency, it seems, is how you are able to be the gatekeeper, or how you can work your power to make the role of algorithmic gatekeeper easy for you. As another way of thinking about computational agency, Tufekci asks us to “imagine that your service provider has tasked your smartphone – armed with detailed information about you – with keeping you engaged in the conversation” (207). This compares agency to someone’s ability to get you to join their gatekeeping conversation. Your advertisements are tailored to what you like and buy. With that, they keep you interested in their content and their public writing while using their agency to be a gatekeeper.

As public writers, I think it is super important to think of algorithms as the secret to success in spreading your idea, your call to action, or your general public vision. Gatekeeping can be as simple as a picture on your Facebook of a new store in town. This gives your audience the ability to see what is new and to discuss it with other people in their city. This algorithm is how advertisements carry their public message and place themselves in the chain of public and local news. As Tufekci discusses, it is important to take in who your demographic and target audience are when you open the algorithmic gate. This way you can have an idea of how far your content will go. As I said previously, posting a picture of a new store will most likely open the conversation for people who live in the new store’s neighborhood and surrounding areas.

Using Facebook to break such news may sometimes be difficult because of how many different responses your audience may have. Some may share your news, continuing an open online conversation. Other times, your audience may just press the like button, giving you no concrete evidence that they have continued the conversation. They may have spoken to their friends and family about the content in person, leaving those people, now holding that new information, the ability to continue the conversation or break that link in the chain. Comments can continue conversations on the spot, giving your target audience the ability to discuss the post with others in its target audience who are not in their own follower or friend space. Overall, I think that algorithmic gatekeeping is a really important concept for public writing, since it opens up so many doors for you or others to discuss new things. Agency is also an important idea, since it incorporates the how. The how is one of the most important parts of spreading information. You must think of who your audience is and decide HOW you will word your information, what you will include, and how you will present your content through language. These are the basic steps of algorithmic gatekeeping.

Implications of Algorithmic Gatekeeping

Tufekci describes gatekeeping as “the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role–fully or partially”. Put rather simply, “algorithmic gatekeeping” is a process that decides what information gets to pass the “gate” and be seen. The full process is much more complex than this, but in its most basic form, this is what algorithmic gatekeeping does. The “gate” varies depending on the situation; Tufekci’s main example of algorithmic gatekeeping in use is Facebook. The gate in this instance is people’s newsfeeds. Facebook uses algorithmic gatekeeping to determine what content to show on people’s newsfeeds, based on an algorithm that decides what content is most relevant to each person.
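To make that idea concrete, here is a minimal sketch of relevance-based gatekeeping. It is my own hypothetical illustration, not Facebook’s actual system: posts are scored against a user’s recorded interests, and only those clearing a threshold pass the “gate” into the feed.

```python
# Hypothetical sketch of relevance-based gatekeeping (not Facebook's real system).
# Each post is scored against a user's recorded interests; only posts that
# clear a threshold pass the "gate" into the newsfeed.

def relevance_score(post_topics, user_interests):
    """Return the fraction of a post's topics that match the user's interests."""
    if not post_topics:
        return 0.0
    matches = sum(1 for topic in post_topics if topic in user_interests)
    return matches / len(post_topics)

def build_feed(posts, user_interests, threshold=0.5):
    """Keep only sufficiently relevant posts, most relevant first."""
    scored = [(relevance_score(p["topics"], user_interests), p) for p in posts]
    kept = [(score, p) for score, p in scored if score >= threshold]
    return [p for score, p in sorted(kept, key=lambda pair: pair[0], reverse=True)]

posts = [
    {"id": 1, "topics": {"puppies", "local news"}},
    {"id": 2, "topics": {"politics"}},
]
# Only the puppy post passes the gate for a dog lover.
print(build_feed(posts, user_interests={"puppies", "dogs"}))
```

Even in a toy version like this, the editorial power is visible: whoever sets the scoring rule and the threshold decides what never gets seen.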

Posting work online adds another element for public writers to consider. As public writers, we should keep algorithmic gatekeeping in mind when creating our work because it can greatly affect who the work reaches. Because algorithmic gatekeeping uses information known about a person to decide what content should be displayed, the process profiles people. This means that we, as the writers, don’t have all the control when thinking about our audiences. Public writers always have a specific audience in mind when writing, and of course there will be a secondary audience, but algorithms can alter who sees the information, possibly deviating from the intended audience.

One of my campaign pieces included various posts on a Facebook page, and algorithmic gatekeeping could benefit it. The algorithm might identify people who have mental health issues based on their posts and “likes” of other pages, and my page might come up as a recommended page. In this instance, my information would reach further than I could manage by sharing my page in different groups and whatnot. Further, it would be doing the work of circulating the information for me. However, algorithms could also limit who sees the page, thus altering my targeted audience. While teens struggling with mental health issues are my main audience, the parents of these teens are too. Algorithmic gatekeeping might not decide to show my page to parents. Further, if Facebook doesn’t have any information or indication that certain parents have teens battling mental health issues, then Facebook is very unlikely to show my page to those parents. Therefore, part of my targeted audience isn’t being reached.

While my campaign piece is just one small example, every time gatekeeping is used it can either help or hinder. The problem is that it is not known, in each instance, which it will be. Gatekeeping doesn’t always allow the “correct” or “right” information through. Instead, what information passes on is based on subjective decision making, according to Tufekci. Should algorithmic gatekeeping be allowed, knowing that there are consequences to it? Should we just accept the good and the bad that come with gatekeeping? Time will tell whether lawmakers will make decisions about the ethics of algorithmic gatekeeping. With so many instances currently in the news concerning those ethics, lawmakers may be pressured to enact new laws.

Gatekeeping and its Concerns

Immediately upon hearing the term gatekeeping, I formed a simple mental image of the topic. I thought of a “gatekeeper” who stood beside a gate and controlled which individuals came in and out. The gatekeeper could allow individuals in if they met certain criteria (e.g., had a badge) or if he recognized their face. In comparison, algorithmic gatekeeping is an emerging practice in which the inflow and outflow of information is dictated by computational processes. These processes are based on extensive research about a particular individual, demographic, or group. Most of the time the algorithm is private, so whether people can view certain content is controlled by the gatekeeping process itself. This is where Facebook ran into its pressing issue: people were upset that their personal information was being used in extensive research without their consent. Even so, many social media platforms run on the idea of algorithmic gatekeeping. Instagram has transitioned from the typical chronological ordering of material and now shows a user pictures based on the number of likes, whose posts they tend to like, and so on. In addition, ads have been added to the mainstream and typically are correlated to the demographics of a user (e.g., I am a female college student who sees ads for Forever21 and American Eagle). Computational agency derives from algorithmic gatekeeping and describes the decision-making capacity these algorithms are given once they are in place. The article discusses Facebook’s influence on the 2010 election, where it “could alter the U.S. electoral turnout by hundreds of thousands of votes, merely by nudging people to vote through…messages.” In other words, Facebook’s algorithm, paired with a strong, potent social message, could sway how many people turned out and thus which candidate appealed to the masses.

Much of an individual’s goal in writing is to appeal to an audience. If your piece is very broad, you want it to reach many people and make them consider what you are trying to discuss. However, if you are trying to reach a specific audience, algorithmic gatekeeping can accomplish the task: the gatekeeper can target specific individuals based on a certain demographic or trait and then filter the information to them.

In relation to my group’s campaign, we are currently creating a Facebook page that incorporates various articles, videos, and posts on mental health. Algorithmic gatekeeping could be of use to us: we could strategize a way to get college students from Pitt to view our ideas on the topic of mental health. Upon receiving enough views and appeal, we could make the campaign broader and reach out to college students as a whole. However, with all the controversy surrounding the Facebook scandal, we may want to refrain from such an idea. Not only is there a fine line in utilizing personal information, but the question at hand is whether it is morally right to take a user’s information without receiving consent. For us, on a small scale, it would not be worth attempting. For big companies like Amazon and Google, it might be imperative, as their goal is to appeal to a consumer and make a sale. I would be curious, in the coming year, to see how they approach this topic and whether they find the happy medium of the gatekeeping ideology.

The Power of Algorithms

Tufekci explains algorithmic gatekeeping thoroughly: “Algorithmic gatekeeping is the process by which such non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role–fully or partially–in determining: information flows through online platforms and similar media; human resources processes (such as hiring and firing); flag potential terrorists; and more” (Tufekci 208).

In terms of social media, I understand this as complicated computer algorithms working together to affect what you see. They can hide or promote certain things. They work together with what Tufekci describes as “big data,” a large data profile created over time through all your interactions on the web. This could explain why, when you go online shopping for shoes, you might later see a Facebook ad for shoes. Or it could be used to create a sort of social bubble, where Facebook only shows you the posts from friends whose messages politically align with yours. It could also be used to promote particular sources; someone who constantly reads HuffPost or Fox might get offered other sources with a similar political lean.
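As a toy illustration of that shoe example, here is a hedged sketch of how a profile built up from browsing events might drive which ad gets served. The categories, events, and ad copy are my own hypothetical stand-ins, not any platform’s real code:

```python
# Hypothetical sketch: an interest profile accumulated from browsing events
# drives which ad a user sees next. Purely illustrative.

from collections import Counter

def update_profile(profile, browsing_events):
    """Count how often each product category shows up in a user's browsing."""
    profile.update(event["category"] for event in browsing_events)
    return profile

def pick_ad(profile, ad_inventory):
    """Serve an ad for the user's most-browsed category, if one exists."""
    if not profile:
        return None
    top_category, _count = profile.most_common(1)[0]
    return ad_inventory.get(top_category)

profile = update_profile(Counter(), [
    {"category": "shoes"},
    {"category": "shoes"},
    {"category": "books"},
])
print(pick_ad(profile, {"shoes": "50% off running shoes!",
                        "books": "New releases this week"}))
# -> 50% off running shoes!
```

The same accumulation logic, pointed at political content instead of shoes, is what produces the social bubble described above.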

I think the negatives of this are vast and quite scary in terms of how much information accumulates about you and how much the internet can know about you, like using Facebook “like” data to predict your traits (210). But I think there can be positives. If an internet source can know exactly what you’re interested in and looking for, it might be able to better present you with the information you’re looking for. Or, on a deeper level, if technology can understand so much about you, it could perhaps calculate how different governmental policies might affect your life. I think the potential is huge–both in a dangerous and positive way. The question becomes, then, is it ethical to create and manipulate information in a way that can be so powerful?

In our campaign, we are creating a website, and each piece we create is being uploaded to it. If we promote this on social media, algorithms might determine to whom our website or campaign pieces might be of interest and then place them somewhere in those users’ news feeds. If users like the content, more work like ours could pop up, or perhaps their Facebook friends might be alerted to it. Conceivably, it could be placed alongside other posts with similar messages to try to influence moods, “likes,” and/or posts. On the other hand, the algorithms might deem our work irrelevant and keep it out of newsfeeds, or they might promote our work into newsfeeds where people would try to discredit it. As Tufekci describes, these algorithms are complex, and it is not entirely clear what their prime focus is, or whether they have one. It may depend on who or what is running the algorithms and on that person’s or corporation’s interest in using them.

Although Facebook is a huge collector of data, I don’t necessarily think it intends to use this data malevolently, but instead perhaps for efficiency and/or economic gain. Facebook’s goal is to keep users scrolling through the page: more scrolling means more posts and ads seen, clicking on or “liking” those posts or ads leads to more data collected about you, and the more data Facebook has about you, the more it can tailor content toward your interests so that, it hopes, you scroll even more.

Algorithms, timing and changing the game

Last week around this time, I heard about the Cambridge Analytica scandal, which now seems almost hilariously ironic given this reading about algorithmic harms. Before I get into how the scandal showcases both positive and negative manipulations, I think it’s important to understand algorithmic gatekeeping and agency.

When Tufekci talks about gatekeeping, I think of it in a literal sense. With a gatekeeper, you have someone who controls whether someone or something comes in or goes out. This gatekeeping could be part of a familiar routine with someone who typically comes and goes, or it could involve an unexpected visitor, in which case the gatekeeper has to decide whether to allow that person in. Unlike a human gatekeeper, though, algorithmic gatekeeping governs the flow of information in and out, and it is decided by computational processes. The algorithm is usually hidden from the public, which makes the flow of information much more complex. Algorithmic gatekeeping can’t necessarily decide what’s right and what’s wrong; it only directs where information goes, revises what information is released, and determines who can see which information, in ways that can manipulate data and people’s thoughts. In the generic sense, algorithmic gatekeeping is controlling what content people see, generally computed in a set way for large groups of people. Instagram is a great example: rather than keeping information chronological, the platform has a specific algorithm meant to surface information a person might typically miss.

Computational agency also derives from algorithmic gatekeeping, but it specifically concerns the decision making these algorithms carry out as agents. When I think of agency, the Cambridge Analytica scandal comes to mind. Although you can think of algorithms accidentally categorizing information as still being a form of agency, another way to look at it is through the consulting firm. Facebook gave Cambridge Analytica access through a quiz that ran as a Facebook app. Through this app, Cambridge Analytica gathered its research and used it to help the Trump campaign. Through data manipulation, it may have been able to influence how people voted. This felt especially relevant when, in an undercover video, Cambridge Analytica bragged that it created the “Crooked Hillary” campaign that it spread throughout Facebook.

Tufekci brings up some interesting points about algorithmic editing in public writing. More people are reading things online, which could mean that information reaches more people. However, with so many online platforms, maybe social media makes information reach narrower audiences, because what each person sees is filtered by what they are interested in, or by what the algorithms decide to show them first. Depending on the message being spread, public writing posted online versus printed in a newspaper or aired on television can be received differently. For example, I feel “flat-earth believers” are received better online than in print. As Tufekci points out, pages that are printed the same way for masses of people to read are easier to challenge. If you only know one thing, and that is that the earth is round, one reading is not going to change your opinion. Based on first-hand experience, I’ve seen many people online come to entertain the idea that the earth is flat because of the way information can be manipulated through algorithms.

My first campaign piece, a still image about the mental health problems college students face, is something someone might immediately love or immediately hate. My second campaign piece, however, deals with audio, and it could be received in many different ways. I feel that if I posted my audio piece online, some of the words I say could be accepted as fact. If Forbes Magazine posted my work and praised it, it would gain more legitimacy. If Pitt posted it, it would find a large audience, since the information I am giving pertains to that specific audience. However, if Fox News posted the same information, my message could be restricted by the large conservative audience it has. As a public writer, I will have to consider the content of my campaign piece and how different platforms might aid or restrict my message. Algorithmic gatekeeping changes the game when writing for the public, because a writer must now consider how to beat the game, how to win against an algorithm or computer system so intelligent it can tell more about the reader and writer than they know about themselves.

Algorithmic Harms

If I am being honest, as someone who didn’t grow up with computers, this is all a little over my head. It scares me to death and interests me all at the same time. I can see the value in algorithms, with my campaign and with everyday life. They help advertise possible “sales” based on the consumer’s likes and dislikes. It is a genius way to get your information and/or product out to the people who may be interested. If I understand public activity on the internet correctly, as the producer of a product, whatever that may be, you can pay to make your product information more visible than all the others. That way, if someone is searching for the product or information you have to offer, you may have the first chance to “sell” it.

This approach could be beneficial to my campaign. These algorithms could make the information more readily available to the public. I can think of a handful of people, maybe even a small group that would be interested in the information I have to share. But this process would open up the audience on a grander scale.

On the flip side of all of this, it makes me a little more than pissed that someone is using my information to decide what ads, products, or information I should be viewing. That should be a decision I make for myself. I would like to think that my rights are better protected than that. Unfortunately, without computers around when we were considering and defining the parameters of our personal rights, how could we have prepared for this? How legal is it to use our personal information? More importantly, how ethical is it? It is definitely something that needs more consideration on everyone’s part.

Writing for the Algorithms & Hoping it Reaches the Public

Algorithms are computer processes that are used to make decisions. They are described as not providing the “correct” answers, but they yield results that are believed to be the “best.” How can a program, such as Facebook’s algorithm, determine what is best for the millions of users? While yes, it is personalized, it seems that the use of algorithms crosses a line.

In relation to the article, algorithmic gatekeeping is a computer program that determines what information is relevant enough to be promoted to users. Tufekci described the case of Facebook keeping the Ferguson protests under the radar while promoting the Ice Bucket Challenge. In my opinion, the algorithmic gatekeeping used in this instance was unjust, for the population should get to determine which current events to publicize, not a computer program.

Tufekci’s use of “agency” seems to cast the computer as a facilitator between users and the information presented to them. Actually, facilitator may be too mild a description, for it also manipulates the relationship between the two. This brings us back to algorithms’ purpose of providing the “best” results; but if we are not given any other options, how are we to determine what is truly best? In addition, these agents work undercover, for if each user has a different experience, it is extremely difficult to pinpoint what each user is seeing and what its effects are.

In public writing, specifically online, algorithms can be a roadblock. Your publications may not be promoted, even to those who follow or subscribe to you in the first place. To prevent your work from getting buried, you could post to multiple platforms, or to ones that do not use algorithms, for example, posting on Twitter rather than Facebook. On the other hand, algorithms could work in your favor if you fit the criteria for promotion.

Regarding my own campaign, algorithmic gatekeeping seems inhibiting. I created a Facebook page discussing the opioid epidemic. If I were to promote it, my first plan would be to publicize it on my personal Facebook page, through wall posts and in the various groups I am a part of. I could try to self-promote to my friends, who would hopefully continue promoting, but algorithmic gatekeeping might keep them from even seeing my page to begin with. In addition, the overall goal of my campaign is to make EVERYONE aware of opioid addiction, how prescription opioids contribute to the crisis, and how to help if you or someone you know is addicted. Algorithmic gatekeeping could severely hinder me from reaching my entire audience by promoting my page only to those who previously showed interest in the opioid epidemic. Similar to the anecdote in the article where the teenage girl received Target ads based on what she had been shopping for, I do not want my page to be promoted only to those who had been “shopping” for my topic. While any promotion would help, we want those who know little to nothing about the crisis to become educated even more than those who already have a background on the topic.

Algorithms and Their Impact

In today’s society, the algorithms behind services and platforms are a familiar and ubiquitous topic. These types of algorithms, which are used to find the “best” answer rather than a single, unequivocally correct one, are found in many of the things we use most in our everyday lives. For example, Instagram made a well-documented (but not always well-received) change from a chronological feed to one with specially curated posts, in an attempt to show users the posts they’d be most interested in first. Home devices such as the Google Home and the Amazon Echo “Alexa” use algorithms to gather information and best answer users’ inquiries. And perhaps most notably, Facebook has faced serious questions about whether it allowed users’ information to be used to feed them specifically targeted posts to influence the 2016 Presidential Election. Though the details make the situation slightly different, this is very similar to the election influence that Tufekci suggested could happen using Facebook’s algorithm in 2015.

In the context of Tufekci’s article, “algorithmic gatekeeping” essentially means programming algorithms to decide what content a user sees. This is typically a relatively complex solution for showing users content they have shown interest in, content they might engage with, or content that the algorithm’s designer wants them to see. “Agency,” meanwhile, is a somewhat philosophical term describing the capacity these algorithms are given to make “decisions.” These concepts are relevant to our work as public writers because of the grip they have on our target audiences.

So much of our goal in writing for the public is to appeal to our audience. Essentially, this is what algorithms can accomplish, except that instead of creating just one piece of content and trying to make it appeal to as much of the audience as possible, the way a “human gatekeeper,” if you will, might have to, they can target specific individuals with content based on their interests.

For my campaign, this could be used to determine what kinds of infrastructure users might be most interested in. For example, in parts of Pittsburgh, as well as a few other notable areas, drinking water issues are particularly relevant to residents’ lives. These users could be delivered a drinking-water-related infrastructure post to draw them in to the overall cause. For users aged 18 or under, perhaps public school repairs would be the most interesting infrastructure topic. The list of possibilities goes on and on, which is perfectly fine for an algorithm; going through copious amounts of information, clauses, and conditions is its specialty, handling in moments tasks that would take a human gatekeeper hours if not days. A sketch of how such targeting rules might look follows below.
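To show what those conditions might look like in practice, here is a minimal, hypothetical sketch. The regions, ages, and post choices are illustrative assumptions drawn from the examples above, not real campaign data:

```python
# Hypothetical targeting rules for the infrastructure campaign described above.
# The regions, age cutoffs, and post choices are illustrative assumptions.

def choose_post(user):
    """Pick the infrastructure topic most likely to interest this user."""
    if user.get("region") == "Pittsburgh" or user.get("water_issues"):
        return "drinking-water infrastructure post"
    if user.get("age", 99) <= 18:
        return "public school repairs post"
    return "general infrastructure post"  # fallback for everyone else

print(choose_post({"region": "Pittsburgh", "age": 30}))  # drinking-water post
print(choose_post({"region": "Erie", "age": 16}))        # school repairs post
```

Stacking hundreds of such clauses and conditions is trivial for a program, which is exactly the scale advantage it holds over a human gatekeeper.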

There is a fine line to be walked here, however. How moral is it to use users’ information to, in effect, sell them something (whether literally selling in a monetary sense or figuratively selling them on a cause)? Can companies be trusted to use this type of information responsibly? These are some of the dilemmas that corporations like Facebook, Google, and Amazon are facing today. Perhaps more importantly, they are the dilemmas that we may someday face in trying to best write for the public.