Filtered News

To me, algorithmic gatekeeping should be illegal. Tufekci (2015) describes algorithmic gatekeeping as algorithms that “play an editorial role—fully or partially—in determining: information flows through online platforms and similar media; human-resources processes.” After reading the article, I would define algorithmic gatekeeping as the filtering of information for specific people in order to push someone else’s agenda. On a small scale this may not be a big deal or impact you in any way. An example would be filtering your social media feed so that you see funny dog clips; I think everyone would be fine with that kind of manipulation. When it comes to more serious news and information, though, I believe algorithmic gatekeeping can be dangerous. By manipulating these algorithms, a platform can almost control what you think and believe. When I am on social media, I usually do not search for information; I just read whatever is there when I open the app. If I trust the first few things I read as fact, these algorithms are steering me toward a certain way of thinking.

The fact that many people are unaware this is happening poses a problem: “sixty-two percent of undergraduates were not aware that Facebook curated users’ News Feeds by algorithm—much less the way in which the algorithm works” (Tufekci, 2015). If people do not know their news is being filtered for them, they may simply believe the first few things they come across. “In addition, the Facebook study showed that Facebook is able to induce mood changes” (Tufekci, 2015), which could be a problem if used in the wrong way. If the algorithms were always aimed in the right direction, this could be a great way to benefit people, but the complexity and constant change of these algorithms make that hard to guarantee. “In another example, researchers were able to identify people with a high likelihood of lapsing into depression before the onset of their clinical symptoms” (Tufekci, 2015); identifying and helping people in this way could be a real benefit. Unfortunately, I do not believe this is how algorithmic gatekeeping will be used in the future.

My beliefs are backed by the examples of Ferguson, the 2010 election, and a hiring algorithm. When the algorithm decided the Ferguson story was not “relevant” enough to bring to more people’s attention sooner, it was almost as if it had its own agenda or reason to hide the story. The entire country was talking about it, and yet it still wasn’t “relevant” enough. The 2010 election experiment, in which millions of people were experimented on without their knowledge, showed the possibility of swinging votes one way or another. If a hiring algorithm can discriminate based on race or belief, you have to ask yourself: what else can a human make an algorithm do, and what can an algorithm make a human do?

With the use of algorithmic gatekeeping, my campaign about structurally deficient bridges could be shown on the news feeds of people who were never looking for it. People who have liked articles pertaining somewhat to this topic could have my campaign pop up in their feeds. Even so, I believe gatekeeping would restrict my campaign more than help it. I don’t think many people are interested in this topic, which makes it harder for my campaign to be filtered into many people’s timelines.

Surveillance is privacy. Algorithms know best. Facebook is Friend.

It’s hard to tell which dystopian novel we’ve wandered into recently. We’re somewhere between Orwell’s 1984 (Big Brother could be the NSA or Facebook or Google?), Atwood’s The Handmaid’s Tale (so long, reproductive rights!), a dash of Veronica Roth’s Divergent (a society divided into factions sounds like a familiar narrative), and hopefully not quite at Hunger Games levels.

Storytelling is as old as hominins. Constructing our own narratives is a biological drive, according to Darwin’s theory of sexual selection (as opposed to his idea of natural selection). Facebook gives us a perfect platform to shout our stories from the proverbial mountaintop. “Here I am! Pay attention to me! What I’m saying is unique and important!” Then we sit back and count the likes, the loves, the smiley faces, the validation that someone cares.

Facebook is designed with this need in mind. Every week a new update rolls out that knows us better, predicts what we want, shows us certain friends and not others, and caters ads to us specifically. This is what “algorithmic gatekeeping” means. Facebook is the lens through which we view our social media lives; we are subject to Facebook’s bias, and we are blind to Facebook’s control. We may be the head, but Facebook is the neck.

The author explains the concept of gatekeeping in the following excerpt: “In this analogy, your phone would algorithmically manipulate who you heard from, which sentences you heard, and in what order you heard them—keeping you on the phone longer, and thus successfully serving you more ads.” Facebook’s algorithms show you what they want you to see, for whatever purpose they choose, and you have no way of knowing the degree of this manipulation.
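To make the idea concrete, here is a minimal sketch (in Python) of the kind of ranking logic the phone analogy describes. Everything in it is hypothetical: the posts, the “predicted engagement” scores, and the cutoff are invented for illustration and are not taken from Facebook’s actual system.

```python
# Hypothetical sketch of an engagement-driven feed filter.
# None of these names or numbers come from Facebook; they only
# illustrate how code can decide what a user sees, and in what order.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    predicted_engagement: float  # assumed score: how likely the user is to keep reading


def rank_feed(posts, followed_topics, max_items=3):
    """Keep posts the model expects to hold attention, then show the
    'stickiest' ones first -- the manipulation the phone analogy describes."""
    visible = [p for p in posts
               if p.predicted_engagement > 0.4 or p.topic in followed_topics]
    return sorted(visible, key=lambda p: p.predicted_engagement, reverse=True)[:max_items]


feed = rank_feed(
    [Post("friend_a", "funny dog clips", 0.9),
     Post("local_news", "bridge safety", 0.2),   # quietly dropped: low predicted engagement
     Post("friend_b", "election news", 0.6)],
    followed_topics={"funny dog clips"},
)
for post in feed:
    print(post.author, "-", post.topic)
```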

The internet is the last frontier, the modern Wild, Wild West. Laws, ethics, and oversight are playing catch-up to the climate of social media. Events like “Gamergate,” in which “trolls” threatened self-identified female gamers, along with incidents of revenge porn, general bullying, hate speech, and culturally insensitive remarks, have drawn public backlash, with critics calling for social media companies to better monitor the content that users share. The question remains: to what degree should a social media company censor content or users? Public backlash has led to debates about how much of social media, and the consequences of a person’s actions on those sites, should bleed over into a person’s “real life.” Social media lifts up idols only to watch them crash. During the last election, Ken Bone, a bespectacled man in a red sweater, was hailed as a hero, only to have his internet history dug up and the public tide turn on the discovery that he was not as innocent as his visage suggested.

Facebook seems to follow the “don’t ask for permission, ask for forgiveness later” mantra. According to the article, Facebook revealed after the 2010 U.S. election that it “promoted” voting by “nudging” people with messages suggesting that their friends had voted. Encouraging civic engagement is fairly innocuous. Hiding events like the Ferguson protests, however, could be harmful and suggests that Facebook’s algorithms suffer from the same racial bias as their creators.

For any group creating campaign pieces for social justice issues, especially if those issues are intersectional with structural racism and classism, it will be important to remember the example of Ferguson. Facebook’s algorithm will likely filter out your content. At this time, it is hard to determine how many people will see your content. Put a cheery spin on a social justice issue and you might just sneak past the algorithm.

Until Facebook is held to some measure of accountability and transparency by an oversight organization (possibly the FCC), we will be ruled by the almighty algorithm.

There is still hope. The algorithm isn’t perfect. An artist’s rendition of a robin on a Christmas card was recently flagged as “indecent”. The controversy has now given her more coverage than her post would have originally received. You can decide for yourself here:

https://www.theguardian.com/technology/2017/nov/12/artists-sexual-robin-redbreast-christmas-cards-banned-by-facebook?CMP=fb_gu

For additional information on our evolutionary history with storytelling, go here:

https://www.theatlantic.com/entertainment/archive/2013/09/the-evolutionary-case-for-great-fiction/279311/

For more information on how social media and smartphones are turning you into a validation zombie:

https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/

For more about racial bias in algorithms:

https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals

Algorithmic Gatekeeping and Advertising on Facebook

Based on Tufekci’s description, I perceive “algorithmic gatekeeping” as the way some online media sources determine who sees what, when they see it, and how they see it. When I heard this term, I immediately thought of my own experiences with these sorts of instances. For example, when looking for flights for an upcoming trip, I explored routes through different airlines in order to find one of reasonable duration and price. It is important to note that I only searched on my computer. However, when scrolling through Facebook on my phone, I came across multiple advertisements from a certain airline concerning the same trip I had been researching on my laptop. This could hardly have been a coincidence. Now, whether this is illegal, intrusive data mining or simply a smart marketing strategy is subjective. It seems somewhat similar to what Target did to the father’s daughter in the story told by Tufekci.

Now, although advertising is not the only use of “algorithmic gatekeeping,” it is considerably noteworthy. I feel it could certainly be important when promoting an event online. On Facebook, a user can “sponsor” an item so that it shows up on the news feeds of people with similar interests, proximity to the creator, or other factors. In my group’s campaign, creating a sponsored post would be beneficial for promoting our event, at which a presentation will be held. However, it would be important to take Tufekci’s idea of algorithmic gatekeeping into account in order to determine who exactly the post would reach. This would be essential in gaining the right attention and crowd for our event.
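As a rough illustration of what that targeting might look like under the hood, here is a small Python sketch. The user records, interest tags, and the 50-mile radius are all made up for the example; this is not Facebook’s actual sponsorship tooling, just the general shape of an audience filter.

```python
# Invented example of audience selection for a sponsored post.
# The profiles, interests, and 50-mile radius are assumptions for
# illustration only, not Facebook's real targeting criteria.

users = [
    {"name": "Commuter A", "interests": {"infrastructure", "commuting"}, "miles_from_pittsburgh": 5},
    {"name": "Resident B", "interests": {"cooking"}, "miles_from_pittsburgh": 8},
    {"name": "Reader C", "interests": {"bridges", "local news"}, "miles_from_pittsburgh": 300},
]

def target_audience(users, campaign_interests, max_miles):
    """Pick users who share at least one interest with the campaign
    and live close enough for the event to matter to them."""
    return [u["name"] for u in users
            if u["interests"] & campaign_interests
            and u["miles_from_pittsburgh"] <= max_miles]

print(target_audience(users, {"infrastructure", "bridges", "commuting"}, max_miles=50))
# ['Commuter A'] -- only a nearby user with a related interest would be shown the post
```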

Going back to the Target example the author used, it would be necessary to make sure that the event-goers were mostly commuters, or simply people who travel across one of Pittsburgh’s three rivers almost every day (as our campaign concerns the poor structure of Pittsburgh’s bridges). This would mean that the people most likely to be concerned about or affected by the goals of our campaign are the ones being reached. It would not be efficient to advertise to, for example, people who both live and work on the South Side.

While some of the factors that determine who sees the hypothetical sponsored Facebook post are controllable by the creator, others are out of the hands of social media users. Facebook has its own interests, as Tufekci notes. Because Facebook conducts experiments to determine how it can affect the decisions users make, the post may not reach whom we want it to. Algorithmic gatekeeping could certainly prevent the effective use of social media advertisements to advance our campaign.

Of course, it is for the most part impossible to determine exactly what Facebook (or any other online entity) is doing in terms of algorithmic gatekeeping. This makes it difficult to decide whether creating something such as a sponsored event would be worthwhile. However, it is possible to look into tendencies that indicate how Facebook has used its algorithms in the past. While not fully reliable, researching this could be practical in deciding how, or whether, to post a certain item online.

Algorithmic Gatekeeping

The first thing I thought while reading Tufekci’s piece on algorithms was: “Wow, I need this when I’m writing my newsletter (first campaign piece).” Although the piece points out that there is error in algorithms (not unlike human error), I believe that something like a newsletter or mass email would benefit in a huge way from some type of computer-generated algorithm. Thinking about my own experiences, the sheer number of e-newsletters and articles I receive from organizations and businesses I subscribe to does not necessarily make me want to stay updated with them, but I always read the ones I am interested in, and I often do something that leads to them profiting because of it.

With a school newsletter, I do not really think any sort of algorithmic system would add anything. Most likely, the people reading a school newsletter have children in the school or some other personal stake in the school’s achievements; the people receiving it actually want to read it. However, a newsletter from a company or a non-profit trying to get the word out about its organization could definitely benefit from computed algorithms, since it would be able to focus on the audience that is going to be the most responsive and therefore provide some benefit to its cause.

When Tufekci uses the term “algorithmic gatekeeping,” she is alluding to the idea that these computer-generated algorithms are able to keep certain topics, people, and opinions out, and let others in, thereby subtly shaping simple, everyday decisions in the average social media user’s life. When writing for the public, this could be a game-changer. If you were able to correctly identify your audience’s views on any given subject, you would be able to tailor the views presented (and the way they were presented) almost perfectly.

Algorithmic gatekeeping, to me, is something that becomes slightly too intelligent when it comes to addressing the public. Although I don’t feel personally endangered by Facebook using my likes and interests to better tailor my newsfeed to me, I don’t really like the idea of it all being run by a computer program that most humans don’t understand. I might be completely wrong, but I believe that the uncertainty that comes with an essentially unpredictable audience is part of what makes writing for the public persuasive and comprehensible. With the use of algorithms, the room for changing people’s minds becomes much smaller, which prevents a potential shift in opinion for any given individual within the much larger audience the writer or author might have in mind. Without this room for swaying previous attitudes toward a subject, I think the author or creator of public writing loses an important part of their attempt to present information persuasively.

When Tufekci explains the example of being able to tell whether an individual will vote Democrat or Republican by looking at their likes, interests, activities, and friends on Facebook, I imagine some sort of futuristic scanning of the average human brain, and I feel betrayed, violated, and a little bit scared. She explains, “However, Marketers, political strategists, and similar actors have always engaged in such ‘guessing.’” While I understand this is true, I stand by my point that writing for the public must include some sort of guessing and anticipation of the audience. Although this may be the future for people who write op-eds and school newsletters, I do not believe it needs to be used in everyday life.

 

The Concern and Alarm of Algorithms Online

A gatekeeper, by common definition, is someone who acts as a locked doorway (or gate) between two different parties, typically between a higher source (like a policymaker) and a lower source (like the public), in order to control how much is and isn’t communicated between the two. With the concept of “algorithmic gatekeeping,” Tufekci is referencing a non-human gatekeeper: an algorithm designed to play that role on the internet. In this form, the gatekeeper is code developed to either display or withhold certain things (like articles or posts) from an individual’s news feed (in the case of Facebook), for whatever reason. The reason, in this sense, is decided by whoever controls the algorithm; it is that person’s agency. The agency is similar to a goal, but in terms of algorithms online it may be better to think of it as an influence on your actions based on your prior decisions and interactions.

The ideas presented in this article about algorithms that influence our actions grew out of public distress over a Facebook study that tested individuals’ interactions based on algorithmic planning. With this in mind, it is important as a public writer to understand that, depending on the agency of the source and who you are trying to reach, something published online may not reach as many people as intended, or for the purposes intended. A huge piece that stuck out to me from this article was the concern about using algorithms in political elections; that this was written in 2015 about a 2014 issue is incredible considering it then actually happened in 2016. In the 2016 presidential election, it was revealed that Donald Trump’s campaign used a private company in England to help with Facebook algorithms that would display more positive information about him, or less positive information about his opponent, in order to sway more voters.

Thinking about things like algorithms for my own campaign pieces seems, I think, a little unlikely to matter. The only piece I think would be affected online is the press release I wrote, because nowadays it would be distributed online to reach as many people as possible. Though I would have less concern with this piece, since it is only at a local level, there is still the possibility that it could be filtered away from certain individuals’ attention by the agency of an algorithm if it ran against what some company was trying to accomplish. Typically, those with the most money hold the most power to create an algorithm that blocks or redirects people away from certain information. Unless my piece were directly opposing some higher agenda of a well-off organization, it seems unlikely that a piece about an upcoming charity event would be affected.

However, I think this matters more to individuals with less power who are trying to make a difference in the world. If you don’t have the money, you can’t properly reach everyone you may want to. Because of this, it is the individuals with the most noble concerns (about the environment, health, social justice, etc.) who seem to be filtered out by bigger organizations that would be adversely affected by whatever those individuals are trying to warn people about.

Good Concepts Need Effective Execution

Delivery and the consideration of recomposition are very important concepts in the digital age. Writers must consider not only the people their original work will reach and potential secondary audiences, but also the audiences of any reproduction of the piece. Since we live in an age where “users take culture, remix culture, rewrite culture, and thus make culture,” a writer needs to take a “strategic approach to composing for rhetorical delivery,” also known as rhetorical velocity. It is important to think about the best ways to communicate the original message, why and by whom it might be remixed, and how it would be remixed based on the original work.

Good messages are less likely to be heard and spread if the delivery is not executed effectively. Delivery must be considered case by case, based on the mode of communication being used and the message itself. For example, on social media or any digital medium it is often necessary to keep messages short and attention-grabbing so that viewers pay attention to them and do not lose interest too quickly.

One of the main aspects of this article that stood out to me regarding recomposition was the discussion of how information is “remixed” through sites like Wikipedia. Throughout my academic career I was told not to use sites like Wikipedia for research projects, since articles can be edited by the public, making their content potentially untrustworthy. This always used to annoy me because there seems to be a Wikipedia article for everything. Although I like to believe that most people who sit down and take the time to add information to a very specific Wikipedia article intend only to assist anyone looking for information, it is important to consider that any recomposition of information, especially on a page open for anyone to edit, may not be great as a main source. I do believe that Wikipedia and similar sites provide a good starting point for anyone new to a topic to begin their research.

In a campaign where our main intent is to spread awareness of our topic, the idea of delivery and how the information we provide will be recomposed must be taken into consideration. A major focus of this paper was how recomposition has taken on a new role in the digital age. People are constantly sharing information, which is “much more readily available to mix, mash, and merge.” Our main issue in our campaign thus far has been how people perceive our intent. Throughout peer review sessions of our campaign pieces and proposal, there has been confusion about whether we are directly trying to raise money for the water crisis or are concerned solely with educating people about the issue. It is essential that we are clear and consistent about our mission throughout our campaign pieces so that our original message is not lost in translation through any theoretical recompositions by third parties.

One passage that stood out to me most in relation to our campaign was “The medium, or process, of our time—electric technology—is reshaping and restructuring patterns of social interdependence and every aspect of our personal life… Societies have always been shaped more by the nature of the media by which men communicate than by the content of the communication.” Since my first campaign piece is a Facebook page and my second one is a website, I will need to be aware of potential recompositions throughout the revising and creation of these pieces since I am using mediums that are so essential to modern communication.

Rhetorical Velocity: Redo, Reuse, Recycle

“Rhetorical velocity” sounds like a nonsensical term, but it means ‘the theory of how rhetoric, art, and objects are remixed, reconstituted, and changed in physical and digital spaces over time and distance.’ Reading about this concept made me think about memes. Memes spread quickly, build meaning off each successive variation, and are themselves created by “cutting and pasting” elements of rhetoric remixed with culture. While a meme itself is usually created to be repurposed and circulated, the original rhetoric may not have been made with those intentions. One meme that has been popular is “Brace Yourself, Winter Is Coming.” According to Know Your Meme, a website that tracks the history and usage of memes, it originated from Game of Thrones, from Ned Stark’s family saying, “Winter is coming.” The image is a screenshot from a scene in the show in which Sean Bean is holding a sword. It is usually changed to “Brace yourself, X is coming” and commonly used for something cyclical that is about to occur again, such as “Brace yourself, pumpkin spice season is coming,” but it can also be used in derivative ways. Due to the show’s popularity, its easily recognizable and somewhat catchy quotes, and the meme’s ease of mutability, this meme has been popular for a number of years. Memes are built from popular or recognizable objects and transformed into in-jokes, social commentary, relationship-building, and so on. Memes are quick to share, easy to digest, and don’t require the attention something like an article would need.

While “Brace Yourself, Winter Is Coming” has had mostly neutral usage, one meme, Pepe the Frog, made national headlines for its use by “alt-right” media and has often been pegged as a racist symbol. The creator denounced its use and even drew a funeral cartoon to “kill” the character. I’ll attach an opinion piece on the use of Pepe and the changing nature of memes at the end of this comment.

I think memes are a useful way to think about “rhetorical velocity,” but they are certainly not the only object like this. In my poetry classes, we were often assigned to write a poem in response to a painting, sculpture, or other artwork. The poem itself had to stand on its own, without relying on its inspiration for meaning and context. I think this assignment is a good example of “rhetorical velocity” in action: we remixed an emotive response into a different artwork. The audience was small because the class was small, but shared in a different medium, it could have reached a larger audience.

Audience matters when we discuss “rhetorical velocity.” Medium also matters. In the age of the internet, asking the right question opens the right doors. Sometimes a turn of phrase can yield vastly different results in a Google search. I think one of the lessons we can glean from this reading is that information on the internet can take on a life of its own.

The materials that we produce for this class may be found by other individuals or groups. We can harken back to our perusal of the NASA website for lessons about how information gets reused, remixed, and repackaged. We theorized that NASA produced its climate change page as a resource for teachers, students, or curious minds. NASA has a widely known, reputable name, so the audience assumes the information on its website is scientific, well researched, fact-checked, and easy to digest and repurpose.

The documents that we produce for this class do not benefit from name recognition and therefore must rely on the audience’s perception of the material. Did we use credible sources to establish ethos? Is our language biased? Is the point of our material coming across to the audience? Does the document have easily accessible “facts” that an audience member can pull out?

I pulled the following passages from our reading, because I believe they hold some “bigger picture” value:

“The medium, or process, of our time—electric technology—is reshaping and restructuring patterns of social interdependence and every aspect of our personal life… Societies have always been shaped more by the nature of the media by which men communicate than by the content of the communication”

 

“delivery can no longer be thought of simply as a technical aspect of public discourse. It must be seen also as ethical and political—a democratic aspiration to devise delivery systems that circulate ideas, information, opinions and knowledge and thereby expand the public forums in which people deliberate on the issues of the day”

 

These remind me of social media, particularly Facebook, and the importance of sharing content in a responsible manner. Facebook has evolved from simply a way to connect with friends into its newest iteration, a more news-focused social sharing site. If we share our class materials, we have the responsibility to check our content for culturally insensitive material or language, to be inclusive of people directly impacted by our social justice issues, and to accurately portray the issue without relying on stereotypes or misinformation. It is our responsibility to consume media that also meets those standards. With the incessant, addictive nature of social media, it is easy to share content, react to content produced by others, and consume content without digesting or contextualizing it.

Social media content is important, but our social media use is important as well. Smartphones bring us immediate validation; every “ping” of a notification makes us more reliant on the next one. We rely on technology in every aspect of our day, and it becomes a funnel for how we consume new information. While we must be cognizant of what we share on social media, we must also be aware of how much we rely on it to find new information.

The content we produce may take on a life of its own on the internet, or it may never see the light of day. We have a responsibility to produce quality documents and share them wisely. As we’ve seen with memes, we don’t always know where our content will go.

 

https://www.nytimes.com/roomfordebate/2016/10/03/can-a-meme-be-a-hate-symbol-6/internet-memes-are-value-neutral-but-do-reflect-cultural-moments

“High Velocity Travel and Safe Delivery”

Reading “Composing for Recomposition: Rhetorical Velocity and Delivery” made me realize that my work could eventually be used by another person to further the work I have started with our campaign. One quote in particular made me feel this way: “Remixing—or the process of taking old pieces of text, images, sounds, and video and stitching them together to form a new product—is how individual writers and communities build common values” (Ridolfo and DeVoss, 2009). I started this campaign about Pittsburgh’s structurally deficient bridges because of information I had read from other writers. My campaign is a mixture of different writers’ work compiled with my own information and organized in a way I believe can intrigue more writers to write about this topic.

Delivery is one of the most important aspects of giving a presentation or a speech to a group of people. I am sure that, just like me, you have heard a presentation where the speaker was either not loud enough or spoke in a monotone voice throughout. These flaws make a presentation very hard to pay attention to, regardless of the information being presented. As Aristotle put it, “These are the three things—volume of sound, modulation of pitch, and rhythm—that a speaker bears in mind” (quoted in Ridolfo and DeVoss, 2009).

An issue our campaign faces is the time and place of our audience. While we are trying to raise awareness among the general public about the Pittsburgh bridge issue, that issue will keep changing over time. “It can no longer be assumed, even in a contemporary instance of oral delivery, that the time, place, and medium of delivery will necessarily be the same for both the speaker and the speaker’s audiences” (Ridolfo and DeVoss, 2009). This quote is very accurate for our situation because new information and new incidents involving these bridges come up throughout the year. For now we are targeting the general public to raise awareness, but in the future our campaign could play a role in presenting to legislators to gain funding.

When creating this campaign, we are constantly thinking about what it could turn into in the future. By writing a news article, we make it easy for a third party to access and pull information quickly, carry the work forward, and push the agenda further. Making our information quick and easy to access gives another person more opportunity to run with it.

Amplification: The Bigger the Better

The overall goal of my group’s campaign is for the city of Pittsburgh to receive adequate funding to be able to improve the infrastructure of the many structurally deficient bridges in the area.  In order to do so, two main audiences need to be addressed: government officials and citizens/commuters of the area.  Because of this, our campaign pieces need to be detailed and fact-based so that readers and/or listeners can become passionate and concerned about the problem, and possibly present the case to city officials.  However, they must also be worded simply enough so that information can be spread easily to other ordinary citizens and commuters.  Without widespread knowledge about the infrastructure problem, our campaign will never reach its goal.

 

Ridolfo and DeVoss’ section on amplification contains valuable information about spreading a message and reaching wider audiences. The use of the internet will be crucial in our campaign plan, and as the two authors point out, amplification through the internet is relatively simple and can be incredibly effective. They describe the use of “attack videos” in their piece:

 

“In the case of short attack videos, only the footage of the actual attack need come from Iraq. Once an affiliated individual has received that footage and basic accompanying information, which can be transferred over the Internet or by mobile phone, he has only to add the insurgent group’s logo, a short title sequence, and perhaps a soundtrack with a motivational song. He then uploads the resulting video product to a free upload-download site and posts an announcement to a forum. The video-editing software required to produce such a video is cheap and readily available. (p. 35)”

 

Reading this passage actually changed my outlook on my second campaign piece. While I originally thought that a brochure describing the bridge problem and what needs to be done would suffice, I now realize that the internet should be the main focus. As Ridolfo and DeVoss show in their work, the audience of a public piece can increase exponentially through the use of the internet. Now, for our campaign, the problem is figuring out what type of internet piece will be most effective and easiest to share.

 

In my experience, people become incredibly passionate when their hometown or home area is being affected. On social media, passionate articles and videos are often posted and then repeatedly shared. It is certainly possible that a dramatic video, with pictures of the deficient bridges and views of the city, would be most effective in reaching my group’s desired audience. Viewers would then likely share the video, often adding a sentence or two explaining their reactions and/or opinions about the problem. In extreme cases, viewers might create their own video showing their reaction to the original. These reaction videos have become more popular over the years, especially on YouTube.

 

As the authors state, “Rhetorical velocity is, simply put, a strategic approach to composing for rhetorical delivery.” When composing my second campaign piece, it is important that I take into consideration how it will be perceived and altered by my intended audience. In doing so, I can determine how the information I present can be spread even further, thus increasing concern about and interest in the campaign.