How Family Affected My Public Interests

When I’m instructed to think about things that are meaningful to me, the very first thing that comes to mind is family: especially my mother, father, and brother. Those three people have done so much for me; they’ve given me everything they possibly can, despite a myriad of challenges they’ve faced themselves. Each of them has had to overcome numerous obstacles to get to where they wanted to be, and to help me get to where I wanted to be, too. Honestly, my family is a topic that I could talk (or write) about all day long. I would write about every last detail if I could, but that’s not the topic at hand here. Instead, I’m going to write about some of the challenges they have faced, and why those challenges are in the public interest. To do so, however, I’m going to need to provide a bit of background and context.


My mother is an established and accomplished employee of the federal government. At work, she is often surrounded by doctors who have not just an M.D. or a Ph.D. but both (many of whom were appointed, or even hand-picked, for their roles by whoever the President was at the time). Getting there, though, was by no means easy for her. She wasn’t born into a family with money, a family with many impressive job titles, or a family with a long string of legacy degrees from prestigious colleges. She was just a little multiracial girl from the Bronx who grew up attending New York City public schools. Some of her closest friends were her siblings and her cousins (many of whom grew up in the housing projects of the Bronx). Her father at times worked three jobs to support the family; her mother worked one (at a time when it was far from standard for a woman to be working). My mother was more than halfway to adulthood before the Civil Rights Act of 1964, which outlawed discrimination based on race, color, sex, nationality, and religion, was signed. She doesn’t like to talk much about the difficulties of growing up as a Black girl in times that discriminated against her for both her gender and her race, but everything she has overcome has nonetheless taught me the importance of intersectional feminism and of combating racism.


My father’s upbringing was in some ways similar to my mother’s, and in other ways very different. But the most notable life lesson I ever learned from my father did not come from his experiences growing up; it came much more recently. It began in April 2015, when I found out that he had been diagnosed with cancer. Since that day, the way he has fought through pain, fatigue, nausea, and more without a single complaint has inspired me. It has taught me to always look on the bright side of things, to be grateful for every day of life I am granted, and to be even more grateful for every memory and every moment I get to spend with him. Perhaps most of all, it has made me realize how much growing up I still have to do, and how much I still rely on my father. Someday I will inherit many of the tasks and responsibilities he currently takes on. The more I think about that, the more I realize how far from prepared I am.


It is often said that cancer has an impact on all of our lives at some point. I’d be willing to bet that every member of our class can name somebody close to them — a friend, family member, classmate, teammate, etc. — who has had cancer. But I don’t think it has to be that way. That’s why I feel it is important to Stand Up To Cancer: supporting cancer research, helping out with awareness, and more.


As important as it is to me to find a cure for cancer, I think it is even more important to me to work towards a more socially and economically just America. Our society still disproportionately favors certain demographics; certain groups benefit from great privilege, while others face oppression (both to varying degrees). Institutional racism still exists. Sexism still exists. I know these statements to be true, but I also know that gaining more knowledge about these topics would help me get better at fighting these injustices. I don’t know if it is possible to ever “end” racism or sexism in America, but I do know in my heart that it is worth trying.

Hmmmm, what to write about?

As a pre-med student and hopeful future doctor, I would say that many of the topics and potentially controversial issues that interest me are related to medicine and science. Two of the first topics that come to mind are public science education (especially relating to the safety and importance of vaccines) and the American approach to care for the elderly. I think both of these are tremendously important and high-stakes topics of discussion. The anti-vax movement in particular is an issue caused by a lack of effective public communication.

However, the anti-vax movement, like (dare I say it) the discussion on global warming, is (or at least should be) uncontroversial. Many other topics that interest me, such as the prospect of legalizing prostitution, the debate on abortion, and the dialogue regarding gun control, have two clear and legitimate sides. While I certainly agree with and align myself with one side, I must recognize the validity of the opposing view. This is not the case with the anti-vaccine movement. While there are two sides to this argument in the public sphere, most people who choose not to vaccinate their children are basing their decision on false information and are therefore inarguably in the wrong.

The question then arises: does valid controversy make a topic more interesting, or simply more difficult to present? Another rather uncontroversial but still important topic comes to mind: the ivory trade. I would hope that no one would argue with me that killing elephants to the point of near extinction to harvest their ivory is a GOOD choice. Yet the trade and illegal poaching continued because the sale of ivory was legal in countries such as China. Fortunately, China recently banned the ivory trade completely, which is the first of many steps toward reviving elephant populations throughout Africa. This sudden change in China’s tune was inspired by heavy international pressure, which was fueled by public knowledge of and passion for the cause. Thus this rather uncontroversial topic proves to be nonetheless high-stakes and very relevant to communication with the public.

The same is true of voluntourism. For those who don’t know, this word refers to the global trend of international volunteering and its portrayal throughout Western social media, which tends to convey racism and Western-centric development ideals. This issue, too, is less a matter of controversy and more one propagated by a lack of knowledge and education. I would assume that most people who participate in this sort of trip truly do intend to help, but don’t realize the negative consequences they are creating. The same is true of flawed NGOs and international development organizations, and of those who contribute to them.

I would say that less controversial topics (like those I mentioned above) are just as interesting as highly controversial ones, because they still have an aspect that keeps them in the public eye, allowing them to retain their relevance. The anti-vaccine movement keeps coming to my mind because of its high stakes and surprisingly extensive following. I am interested to see if I come across any valid arguments FOR the anti-vaccine movement, but I suspect that most arguments for it are based on pseudoscience, such as the publication that linked vaccinations to autism. This misinformation and lack of understanding of how vaccines work lead to the emotionally charged movements against vaccines. A campaign to correct these misunderstandings could be useful for public science education.


Filtered News

To me, algorithmic gatekeeping should be illegal. Tufekci (2015) describes algorithms that “play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes.” After reading through the article, I would define algorithmic gatekeeping as filtering information for specific people in order to push someone else’s agenda. On a small scale this may not be a big deal or impact you in any way. An example would be filtering your social media feeds so that you are seeing funny dog clips; I think everyone would be fine with that type of manipulation. But when it comes to more serious news and information, I believe algorithmic gatekeeping can be dangerous. A platform that can manipulate these algorithms can almost control what you think and believe. When I am on social media, I usually will not search for information but rather just read what is there when I open the app. If I believe and trust the first few things I read as facts, these algorithms are manipulating me into a certain way of thinking.
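
To make this concrete, here is a minimal toy sketch of the kind of filtering being described. Everything in it is invented for illustration (the post data, the scoring weights, the feed size); it is not how Facebook actually ranks content, but it shows how a feed that simply ranks by predicted engagement can quietly drop a story without anyone deciding to censor it:

```python
# Hypothetical feed-ranking sketch: posts are scored by predicted
# engagement, and only the top-scoring posts are shown. Nothing is
# "censored" outright -- low-scoring items simply never surface.

def rank_feed(posts, feed_size=2):
    """Return the top `feed_size` posts by a simple engagement score."""
    def score(post):
        # Invented weights: likes and (more heavily) shares drive visibility.
        return post["likes"] + 3 * post["shares"]
    return sorted(posts, key=score, reverse=True)[:feed_size]

posts = [
    {"title": "Cute dog video", "likes": 900, "shares": 120},
    {"title": "Local protest coverage", "likes": 40, "shares": 15},
    {"title": "Friend's vacation album", "likes": 300, "shares": 10},
]

for post in rank_feed(posts):
    print(post["title"])
```

In this toy run the protest story never appears in the two-item feed, even though it may be the most important item, which is exactly the Ferguson dynamic discussed below.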

The fact that many people are unaware this is happening poses a problem: “sixty-two percent of undergraduates were not aware that Facebook curated users’ News Feeds by algorithm—much less the way in which the algorithm works” (Tufekci, 2015). If people are unaware their news is being filtered for them, they may simply believe the first few things they come across. “In addition, the Facebook study showed that Facebook is able to induce mood changes” (Tufekci, 2015), which can pose a problem if used in the wrong way. If the algorithms were always aimed in the right direction, this could be a great way to benefit people, but due to the complexity and constant changes of algorithms, this is hard to control. “In another example, researchers were able to identify people with a high likelihood of lapsing into depression before the onset of their clinical symptoms” (Tufekci, 2015); being able to identify and help people in these ways could really benefit them. Unfortunately, I do not believe this is how algorithmic gatekeeping will be used in the future.

My beliefs are backed by the examples of Ferguson, the 2010 election, and a hiring algorithm. When the algorithm decided the Ferguson story was not “relevant” enough to bring to more people’s attention sooner, it was almost as if it had its own agenda or its own reasons to hide the story. The entire country was talking about it, yet it still wasn’t “relevant” enough. The 2010 election, in which millions of people were experimented on without their knowledge, showed the possibility of swinging votes one way or another. If a hiring algorithm can discriminate based on race or belief, you must ask yourself: what else can a human make an algorithm do, and what can an algorithm make a human do?

With the use of algorithmic gatekeeping, my campaign about structurally deficient bridges could be shown in the news feeds of people who were never looking for it. People who have liked articles pertaining somewhat to this topic could have my campaign pop up for them to read. Still, I believe gatekeeping would restrict my campaign more than help it. I don’t think there are many people who are interested in this topic, making it less likely for my campaign to be filtered into many people’s timelines.

Surveillance is privacy. Algorithms know best. Facebook is Friend.

Surveillance is privacy. Algorithms know best. Facebook is Friend.

It’s hard to tell which dystopian novel we’ve wandered into recently. We’re somewhere between Orwell’s 1984 (Big Brother could be the NSA, or Facebook, or Google?), Atwood’s The Handmaid’s Tale (so long, reproductive rights!), a dash of Veronica Roth’s Divergent (factions divided sounds like a familiar narrative), and hopefully not quite at Hunger Games levels.

Storytelling is as old as hominins. Constructing our own narratives is a biological drive according to Darwin’s theory of sexual selection (as opposed to his idea of natural selection). Facebook gives us a perfect platform to shout our stories from the proverbial mountaintop. “Here I am! Pay attention to me! What I’m saying is unique and important!” Then we sit back and count the likes, the loves, the smiley faces, the validation that someone cares.

Facebook is designed with this need in mind. Every week a new update rolls out that knows us better, is able to predict what we want, shows us certain friends and not others, and caters ads to us specifically. This is what “algorithmic gatekeeping” means. Facebook is the lens through which we view our social media lives; we are subject to Facebook’s bias, and we are blind to Facebook’s control. We may be the head, but Facebook is the neck.

Tufekci explains the concept of gatekeeping in the following excerpt: “In this analogy, your phone would algorithmically manipulate who you heard from, which sentences you heard, and in what order you heard them—keeping you on the phone longer, and thus successfully serving you more ads.” Facebook’s algorithms show you what they want you to see, for whatever purpose they choose, and you have no way of knowing the degree of this manipulation.

The internet is the last frontier, the modern Wild, Wild West. Laws, ethics, and oversight are playing catch-up to the climate of social media. Events like “Gamergate,” in which “trolls” threatened self-identified female gamers, along with incidents of revenge porn, general bullying, hate speech, and culturally insensitive remarks, have received public backlash, with critics calling for social media companies to better monitor the content that users share. The question remains: to what degree should a social media company censor content or users? Public backlash has led to debates about how much social media, and the consequences of a person’s actions on those sites, should bleed over into a person’s “real life.” Social media lifts up idols to watch them crash. Last election, Ken Bone, a bespectacled man in a red sweater, was hailed as a hero, only to have his internet history dug up and the public tide turn on the discovery that he was not as innocent as his visage suggested.

Facebook seems to be following the “don’t ask for permission, ask for forgiveness later” mantra. According to the article, Facebook revealed after the 2010 U.S. election that it had “promoted” voting by “nudging” people, suggesting to them that their friends had also voted. Encouraging civic engagement is fairly innocuous. Hiding events like the Ferguson protests could be harmful, and it suggests that Facebook’s algorithms suffer from the same racial biases as their creators.

For any group creating campaign pieces for social justice issues, especially if those issues are intersectional with structural racism and classism, it will be important to remember the example of Ferguson. Facebook’s algorithm will likely filter out your content. At this time, it is hard to determine how many people will see your content. Put a cheery spin on a social justice issue and you might just sneak past the algorithm.

Until Facebook is held to some measure of accountability and transparency by an oversight organization (possibly the FCC), we will be ruled by the almighty algorithm.

There is still hope. The algorithm isn’t perfect. An artist’s rendition of a robin on a Christmas card was recently flagged as “indecent”. The controversy has now given her more coverage than her post would have originally received. You can decide for yourself here:

https://www.theguardian.com/technology/2017/nov/12/artists-sexual-robin-redbreast-christmas-cards-banned-by-facebook?CMP=fb_gu

For additional information on our evolutionary history with storytelling, go here:

https://www.theatlantic.com/entertainment/archive/2013/09/the-evolutionary-case-for-great-fiction/279311/

For more information on how social media and smartphones are turning you into a validation zombie:

https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/

For more about racial bias in algorithms:

https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals


Algorithmic Gatekeeping and Advertising on Facebook

Based on Tufekci’s description, I understand “algorithmic gatekeeping” as the way some online media sources determine who sees what, when they see it, and how they see it.  When hearing this term, I immediately thought of my own experiences with this sort of thing.  For example, when looking for flights for an upcoming trip, I explored routes through different airlines in order to find one of reasonable duration and price.  It is important to note that I only searched for this on my computer.  However, when scrolling through Facebook on my phone, I came across multiple advertisements from a certain airline concerning the same trip I had been searching for on my laptop.  This could certainly not have been just a coincidence.  Now, whether this is illegal, intrusive data mining or simply smart marketing strategy is subjective.  It seems somewhat similar to what Target was doing to the daughter in the story told by Tufekci.

Now, although advertising is not the only use of algorithmic gatekeeping, it is considerably noteworthy.  I feel it could certainly be important when promoting an event online.  On Facebook, a user can “sponsor” an item so that it shows up in the news feeds of people with similar interests, proximity to the creator, or other shared factors.  In my group’s campaign, creating a sponsored post would be beneficial for promoting our event, at which a presentation will be held.  However, it would be important to take Tufekci’s notion of algorithmic gatekeeping into account in order to determine who exactly the post would reach.  This would be essential in gaining the right attention and crowd for our event.
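
As a rough sketch of how such audience targeting might work, the toy code below matches a sponsored post against user attributes. All of the criteria, names, and user records here are invented for illustration; real ad platforms use far more signals, but the basic idea of filtering an audience by shared interests or location looks something like this:

```python
# Hypothetical audience-targeting sketch for a sponsored post.
# The user records and matching criteria are entirely made up.

def matches_audience(user, target):
    """True if a user's interests overlap the target's, or the cities match."""
    shared_interests = set(user["interests"]) & set(target["interests"])
    return bool(shared_interests) or user["city"] == target["city"]

# Target profile for a campaign about Pittsburgh's bridges.
target = {"interests": ["infrastructure", "commuting"], "city": "Pittsburgh"}

users = [
    {"name": "A", "interests": ["cooking"], "city": "Pittsburgh"},
    {"name": "B", "interests": ["commuting", "cycling"], "city": "Cleveland"},
    {"name": "C", "interests": ["gardening"], "city": "Cleveland"},
]

reached = [u["name"] for u in users if matches_audience(u, target)]
print(reached)  # A matches by city, B by shared interest; C sees nothing
```

The point of the sketch is that the creator controls only the target profile; whatever additional filtering the platform applies on top of it is invisible, which is the concern raised below.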

Going back to the Target example the author used, it would be necessary to make sure that the people reached were mostly commuters, or simply people who travel across one of Pittsburgh’s three rivers almost every day (as our campaign concerns the poor structure of Pittsburgh’s bridges).   This would mean that the people most likely to be concerned about or affected by the goals of our campaign are being reached.  It would not be efficient to advertise to, for example, people who live on the South Side and also work on the South Side.

While some of the factors that determine who sees the hypothetical sponsored Facebook post are controllable by the creator, others are out of the hands of social media users.  Facebook has its own interests, as noted by Tufekci.  Because Facebook conducts experiments to determine how it can affect the decisions users make, the post may not reach who we want it to.  Algorithmic gatekeeping could certainly prevent the effective use of social media advertisements to advance our campaign.

Of course, it is, for the most part, impossible to determine exactly what Facebook’s (or any other online entity’s) actions are in terms of algorithmic gatekeeping.  This makes it difficult to determine whether or not creating something such as a sponsored event would be worthwhile.  However, it is possible to look into tendencies that indicate how Facebook has used its algorithms in the past.  While not foolproof, researching this could be practical in deciding how, or whether, to post a certain item online.

Algorithmic Gatekeeping

The first thing I thought while reading Tufekci’s piece on algorithms was: “Wow, I need this when I’m writing my newsletter (first campaign piece).” Although she points out that there is error in algorithms (not unlike human error), I believe that a piece like a newsletter or mass email would benefit in a huge way from some type of computer-generated algorithm. Thinking about my own experiences, the number of e-newsletters and articles I receive from organizations and businesses I subscribe to does not necessarily make me want to stay updated with them, but I always read the ones I am interested in, and often do something that leads to the sender profiting because I have done so.

With a school newsletter, I do not really think that any sort of algorithmic system would add anything. Most likely, people who read a school newsletter have children in the school or some other personal stake in the school’s achievements; the people receiving it actually want to read it. However, a newsletter from a company or a non-profit trying to get the word out about its organization could definitely benefit from computed algorithms, since it could focus on the audience that is going to be the most responsive and therefore provide some sort of benefit to its cause.

When Tufekci uses the term “algorithmic gatekeeping,” she is alluding to the idea that these computer-generated algorithms are able to keep certain topics, people, and opinions out, and let others in, thereby subtly shaping simple, everyday decisions in the average social media user’s life. When writing for the public, this could be a game-changer. If you were able to correctly identify what your audience’s views on any given subject would be, you would be able to tailor the views you presented (and the way you presented them) almost perfectly.

Algorithmic gatekeeping, to me, is something that becomes slightly too intelligent when it comes to addressing the public. Although I don’t feel personally endangered by Facebook using my likes and interests to better tailor my newsfeed to me, I don’t really like the idea of it all being run by a computer program that most humans don’t understand. I might be completely wrong, but I believe that the uncertainty that comes with having an essentially unpredictable audience is part of what makes writing for the public persuasive and comprehensible. With the use of algorithms, the gap for changing people’s minds becomes much smaller, preventing potential shifts of opinion among the individuals who make up the much larger audience the writer might have in mind. Without this room for swaying previous attitudes towards a subject, I think the author or creator of public writing loses an important part of their initial attempt to present information persuasively.

When Tufekci explains the example of being able to tell whether an individual will vote Democrat or Republican by looking at their likes, interests, activities, and friends on Facebook, I imagine some sort of futuristic scanning of the average human brain, and I feel betrayed, violated, and a little bit scared. She explains, “However, Marketers, political strategists, and similar actors have always engaged in such ‘guessing.’” While I understand this is true, I stand by my point that writing for the public must include some sort of guessing and anticipation of the audience. Although this may be the future for people who are writing op-eds and school newsletters, I do not believe it needs to be used in everyday life.


The Concern and Alarm of Algorithms Online

A gatekeeper, by common definition, is someone who acts as a locked doorway (or gate) between two different parties, typically between a higher source (like a policymaker) and a lower source (like the public), in order to control how much is and isn’t communicated between the two. With the concept of “algorithmic gatekeeping,” Tufekci is referencing a non-human gatekeeper (an algorithm) operating over the internet. In this form, the gatekeeper is code developed to either display or withhold certain things (like articles or posts) from an individual’s news feed (in the case of Facebook), for whatever reason. That reason, decided by whoever controls the algorithm, reflects that person’s agency. The agency is similar to a goal, but in terms of algorithms online it may be better to think of it as an influence on your actions based on your prior decisions and interactions.

The ideas represented in this article about algorithms that influence our actions grew out of public distress over a Facebook study done to test individuals’ interactions based on algorithmic planning. With this in mind, it is important as a public writer to understand that, depending on the agency of the source and who you are trying to reach, publishing online may or may not reach as many people as intended, or for the purposes intended. A huge piece that stuck out to me from this article was the concern about using algorithms in political elections; that this was written in 2015 about a 2014 issue is incredible considering it then actually happened in 2016. In the 2016 presidential election, it was revealed that Donald Trump used a private company in England to help him with Facebook algorithms that would display more positive information about himself, or less positive information about his opponent, in order to sway more voters.

Algorithms affecting my own campaign pieces is, I think, a little unlikely. The only piece I think would be affected online is the press release that I wrote, because nowadays a press release would be distributed online to reach as many people as possible. Though I would have less concern with this piece, since it operates only on a local level, there is still the possibility that it could be filtered away from certain individuals’ attention by an algorithm’s agency if it went against what some company may be trying to accomplish. Typically, those with the most money hold the most power to create an algorithm that blocks or redirects people from certain information. Unless my piece were directly opposing the higher agenda of a well-off organization, it seems unlikely that a piece about an upcoming charity event would be affected.

However, this matters more, I think, to individuals with less power who are trying to make a difference in the world. If you don’t have the money, you can’t properly reach everyone you may want to. Because of this, it is the individuals with the most noble concerns (about the environment, health, social justice, etc.) who seem to be filtered out by the bigger organizations that would be adversely affected by whatever those individuals are trying to warn people about.

Good Concepts Need Effective Execution

Delivery and the recomposition of material are very important concepts in the digital age. Writers must consider not only the people their original work will reach and potential secondary audiences, but also the audiences of any reproduction of the piece. Since we live in an age where “users take culture, remix culture, rewrite culture, and thus make culture,” a writer needs to take a “strategic approach to composing for rhetorical delivery,” also known as rhetorical velocity. It is important to think about the best ways to communicate the original message, why and by whom it might be remixed, and how it might be remixed based on the original work.

Good messages are less likely to be heard and spread if the delivery is not executed effectively. Delivery must be viewed on a case-by-case basis, depending on the mode of communication being used and the message itself. For example, on social media or any digital medium it is often necessary to keep messages short and attention-grabbing so viewers pay attention and do not lose interest too quickly.

One of the main aspects of this article that stood out to me regarding the recomposition of information was the discussion of how information is “remixed” through sites like Wikipedia. Throughout my entire academic career I was told not to use sites like Wikipedia for research projects, since articles can be edited by the public, making their content potentially untrustworthy. This always used to annoy me, because there seems to be a Wikipedia article for everything. Although I like to believe that most people who would sit down and take the time to add information to a very specific Wikipedia article intend solely to assist anyone looking for information, it is important to consider that any recomposition of information, especially on a page open for anyone to edit, may not be reliable as a main source of information. I do believe that Wikipedia and similar sites provide a good starting point for anyone new to a topic to begin their research.

In a campaign where our main intent is to spread awareness of our topic, delivery and the ways the information we provide will be recomposed must be taken into consideration. A major focus of this paper was how recomposition has taken on a new role in the digital age. People are constantly sharing information that is “much more readily available to mix, mash, and merge.” Our main issue in our campaign thus far has been how people perceive our intent. Throughout our peer review sessions on our campaign pieces and proposal, there has been confusion over whether we are directly trying to raise money for the water crisis or are concerned solely with educating people about the issue. It is essential that we are clear and consistent about our mission throughout our campaign pieces so that our original message is not lost in translation through any theoretical recompositions by third parties.

One passage that stood out to me most in relation to our campaign was “The medium, or process, of our time—electric technology—is reshaping and restructuring patterns of social interdependence and every aspect of our personal life… Societies have always been shaped more by the nature of the media by which men communicate than by the content of the communication.” Since my first campaign piece is a Facebook page and my second is a website, I will need to be aware of potential recompositions throughout the creation and revision of these pieces, since I am using mediums that are so central to modern communication.

Rhetorical Velocity: Redo, Reuse, Recycle

“Rhetorical velocity” sounds like a nonsensical term, but it means ‘the theory of how rhetoric/art/objects are remixed, reconstituted, and changed in physical and digital spaces over time and distance’. Reading about this concept made me think about memes. Memes spread quickly, build meaning off of each successive variation, and are themselves created by “cutting and pasting” elements of rhetoric remixed with culture. While a meme itself is usually created to be repurposed and circulated, the original rhetoric may not have been made with those intentions. One meme that has been popular is “Brace Yourself, Winter Is Coming”. According to “Know Your Meme”, a website that tracks the history and usage of memes, “Brace Yourself, Winter Is Coming” originated from Game of Thrones, from the Stark family saying, “Winter is coming”. The image is a screenshot from a scene in the show in which Sean Bean is holding a sword. The caption is usually changed to “Brace Yourself, X Is Coming” and commonly applied to something cyclical in nature that is about to occur again, such as “Brace Yourself, Pumpkin Spice Season Is Coming”, but it can also be used in derivative ways. Due to the show’s popularity, its easily recognizable and somewhat catchy quotes, and the meme’s ease of mutability, this meme has been popular for a number of years. Memes are built from popular or recognizable objects and transformed into “in-jokes”, social commentary, ways of building relationships, etc. Memes are quick to share, easy to digest, and don’t require the attention something like an article would need.

While “Brace Yourself, Winter Is Coming” has had mostly neutral usage, one meme made national headlines for its use by “alt-right” media and has often been pegged as a racist symbol: Pepe the Frog. The creator denounced its use and even drew a funeral cartoon to “kill” the character. I’ll attach an opinion piece on the use of Pepe and the changing nature of memes at the end of this comment.

I think memes are a useful way to think about “rhetorical velocity”, but they are certainly not the only object of this kind. In my poetry classes, we were often assigned to write a poem in response to a painting, sculpture, etc. The poem itself had to stand on its own, without relying on its inspiration for meaning and context. I think this assignment is a good example of “rhetorical velocity” in action: we remixed an emotive response into a different artwork. The audience was small because the class was small, but shared through a different medium, the work could have reached a larger audience.

Audience matters when we discuss “rhetorical velocity”. Medium also matters. In the age of the internet, asking the right question opens the right doors; sometimes a turn of phrase can yield vastly different results on a Google search. I think one of the lessons we can glean from this reading is that information on the internet often takes on a life of its own.

The materials that we produce for this class may be found by other individuals or groups. We can harken back to our perusal of the NASA website for lessons about how information gets re-used, re-mixed, and re-packaged. We theorized that NASA produced its climate change page as a resource for teachers, students, or curious minds. NASA has a widely known, reputable name, so the audience assumes the information on its website is scientific, well researched, fact-checked, and easy to digest and repurpose.

The documents that we produce for this class do not benefit from name recognition and therefore must rely on the audience’s perception of the material. Did we use credible sources to establish ethos? Is our language biased? Is the point of our material coming across to the audience? Does the document have easily accessible “facts” that an audience member can pull out?

I pulled the following passages from our reading because I believe they hold some “bigger picture” value:

“The medium, or process, of our time—electric technology—is reshaping and restructuring patterns of social interdependence and every aspect of our personal life… Societies have always been shaped more by the nature of the media by which men communicate than by the content of the communication”

 

“delivery can no longer be thought of simply as a technical aspect of public discourse. It must be seen also as ethical and political—a democratic aspiration to devise delivery systems that circulate ideas, information, opinions and knowledge and thereby expand the public forums in which people deliberate on the issues of the day”

 

These passages remind me of social media, particularly Facebook, and the importance of ‘sharing’ content in a responsible manner. Facebook has evolved from simply a way to connect with friends into its newest iteration, a more news-focused social sharing site. If we share our class materials, we have a responsibility to check our content for culturally insensitive material or language, to be inclusive of people directly impacted by our social justice issues, and to accurately portray the issue without relying on stereotypes or misinformation. It is also our responsibility to consume media that meet those standards. With the incessant, addictive nature of social media, it is easy to share content, react to the content produced by others, and consume content without digesting or contextualizing it.

Social media content is important, but so is how we use social media. Smartphones bring us immediate validation; every ‘ping’ of a notification makes us more reliant on the next one. We rely on technology in every aspect of our days, and it becomes a funnel for how we consume new information. While we must be cognizant of what we share on social media, we must also be aware of how much we rely on it to find new information.

The content we produce may take on a life of its own on the internet, or it may never see the light of day. We have a responsibility to produce quality documents and to share them wisely. As we’ve seen with memes, we don’t always know where our content will go.

 

https://www.nytimes.com/roomfordebate/2016/10/03/can-a-meme-be-a-hate-symbol-6/internet-memes-are-value-neutral-but-do-reflect-cultural-moments