Monthly Archives: April 2013

HW#3 Your Filter Bubble

In chapter 8 of The Filter Bubble, Eli Pariser outlines a few ideas that may help mitigate the effects of personalization. He explains the actions individual users can take as well as what companies, the government, and individuals as citizens can do to combat the rise of a filter bubble. While many of the ideas presented by Pariser to lessen the propagation of a filter bubble are respectable, some are better and more realistic than others.

“Stop being a mouse” (223) is probably the best, simplest, and hardest of Pariser’s ideas to put into action. It rests on the assumption that we are creatures of habit, that “we all kinda do the same thing over and over again most of the time. And jumping out of that recursion loop is not easy to do” (223). Pariser admits that even he is “pretty mouselike” (223) in his information habits. It’s hard to break habits and routines, since we like the comfort and ease that come with familiarity. By actively diversifying how you spend your time on the internet and what you spend it on, you make it harder for the algorithms to “pigeonhole” your profile. This may be the best method for offsetting the effects of personalization, but it’s not very realistic. Most people use the internet sparingly, to catch up on the things that matter most to them; few will spend time reading about or searching for a topic they are only mildly interested in, even if they consider it important. Breaking the routine is only possible if you make a conscious choice to be critical, inquisitive, and unafraid of feeling uncomfortable about what you read or see. That choice is not only good for deflecting the negative consequences of personalization; it’s also a good way of becoming a more well-rounded person.

One of the weaker ideas presented by Pariser, I felt, was the “fully algorithmic solutions,” which take into account everyone’s opinion of what they believe is important and should be seen. Even though I like the idea of bringing personalization into the public eye and putting it in the user’s hands, I can’t help but think you’d just be creating your own filter bubble, which doesn’t leave much room for you to be exposed to different things. I feel that most people don’t know what they want, only what they should want. I also feel that, given the way our media works and how we consume it, the general populace is not equipped with the right set of skills to discern important news from unimportant news. I can only see problems with an “important” button, as Pariser mentions on page 235. It reminds me of the Kony 2012 campaign, which was essentially a viral video that oversensationalized the severity and importance of a relatively old war criminal.

The most realistic idea presented by Pariser is that “the engineers of the filter bubble…can solve for serendipity, by designing filtering systems to expose people to topics outside their normal experience” (235). I was very fond of this idea because it integrates exposure into your everyday life with very little effort from the user. There is a service called StumbleUpon that in its early stages did exactly this: you would click a button and a random web page would appear. Recently, though, StumbleUpon has adopted the algorithmic method of determining which websites you are exposed to, probably based on clicks, your own predefined interests, how long each user stays on a page, and user ratings of websites; this is to be expected, since it is a business.

If you want things to change, you should look toward yourself first and ask whether or not you’re living that change. As Mahatma Gandhi put it, “Be the change you want to see in the world.” Ultimately, the extent of your filter bubble is decided by you, online or off.

Chapter 8 Ideas. HW

HW #3

Strongest / Weakest Ideas

In this modern information age, relying on technology has become second nature to most of us. Leaving our houses with a sense of security in having Google Maps on our phones, constantly having phone cameras ready to capture fascinating sights, keeping the Facebook and Twitter apps always prepared for a new status update, and many similar actions are now part of our everyday lives. As time goes on and our reliance on the internet grows, we can’t help but neglect its main problem: filtered search results that falsely identify our personalities and create inaccurate snapshots of who we are and what we want. In the final chapter of The Filter Bubble, Eli Pariser summarizes the problems with privacy on the internet and discusses several solutions and actions people can take to avoid becoming “bubbled in.”

The strongest point Pariser makes about fighting the filtering process comes when he addresses corporations, urging engineers to “solve for serendipity by designing filtering systems.” I completely agree with this stance, because users of the internet are simply consumers: they purchase the product and take it as it is, reaping the benefits of what it offers. Users can’t necessarily identify the hidden issues within the product, and consequently they certainly cannot combat them. In the case of search engines specifically, the masterminds behind the algorithms have the power to alter the way we search by blurring the lines of this “bubble” we enter when viewing personalized results. Pariser admits that less personalization in our results might decrease the popularity of search engines, because a “personalization system with an element of randomness will (by definition) get fewer clicks.” While this may be true, times are constantly changing, and what we rely on today may be replaced by the newest trend in research and technology. Currently, the problems with filtered results are little known and don’t yet concern many users, but as Pariser explains with his example of the increased attractiveness of newspapers, the way people search is bound to change. In my opinion, this idea is particularly strong because it is human nature to follow trends, and if corporations and engineers alter the search process, the public is bound to follow and enjoy search results and advertisements with much more diversity.

The weakest point made by Pariser has to do with personally breaking out of our habits. As previously mentioned, I believe it is human nature to follow trends. Top engineers and IT leaders at the various corporations hold the key to less personalization and decreased bias in what we view on the web. Consumers alone have minimal power over broadening their interests when they are constantly pressured with what the web believes they want to see. Pariser states that “just by stretching your interests in new directions, you give the personalizing code more breadth to work with.” I completely agree with this statement; however, it needs to be taken into consideration that habits aren’t easily broken and we can’t control ourselves so effortlessly. The search results we use and the ads we view aren’t there by choice, and searching for multiple topics just to “confuse” the algorithm is almost impossible. For my part, I have experienced personalized ads based on a shoe shopping website I visited a very long time ago. Although I have been to many other websites since then, ads based on that one specific search keep showing up at the sides of my browser, and there is no way for me to control that aspect of my internet experience. Pariser does suggest regularly deleting cookies, but I believe there needs to be a much more concrete solution to this filtering problem than such a simple action, which only takes care of a small part of a huge problem.

HW#3 – Outsmarting The Personalization Algorithms

In chapter 8 of The Filter Bubble, Eli Pariser offers suggestions about ways we can address the problems of the filter bubble that pervades our daily lives. He explains that there are three primary categories of response to the effects of the filter bubble: actions from companies, actions from the government, and actions from individuals.

Action from companies is, in theory, the best solution for addressing the concerns surrounding personalization and privacy. If companies were transparent in their motives and admitted their use of personalization, then users would understand the consequences of using search engine X (Google) versus search engine Y (DuckDuckGo). The latter clearly and explicitly states that it does not track your behavior and does not filter your searches, whereas the former makes no mention of such details anywhere clearly visible to the user; by clearly, I mean not having to move heaven and earth to find the disclaimer notice. “A visitor to a personalized news site could be given the option of seeing how many other visitors were seeing which articles…of course, this requires admitting to the user that personalization is happening in the first place, and there are strong reasons in some cases for businesses not to do so. But they’re mostly commercial reasons, not ethical ones” (Pariser 232). So in a perfect world, if companies were transparent in their behavior, users of their services would at least know what other users are engaging with and what sources others have visited. However, the world is not perfect, and companies act in their own best interests and in the interests of their stakeholders, thereby forgoing the interests of their customers (the users).

Action from government is the weakest avenue for addressing the issues of personalization and privacy. I have been registered on the Do Not Call list for the past four years or so, yet over the past three years I have been bombarded with countless calls from telemarketers and surveyors. One might ask whether the government is really monitoring these lists, or whether it has simply given up and moved on to more important matters. This is why I believe the Do Not Track list is the weakest method of addressing the problems of privacy and personalization. “But Do Not Track would probably offer a binary choice – either you’re in or you’re out – and services that make money on tracking might simply disable themselves for Do Not Track list members…And as a result, the process could backfire – ‘proving’ that people don’t care about tracking, when in fact what most of us want is more nuanced ways of asserting control” (Pariser 238). If you use Facebook daily, fall into the category of being a mouse (visiting the same site an X number of times a day), and have opted in to the Do Not Track list, the worst may occur: you may be blocked from using Facebook. Who are we to say that Facebook cannot ban, lock, or disable your account for not “complying” with its user policy agreement? As Pariser says, “most companies reserve the right to change the rules of the game at any time” (Pariser 239), so Facebook could claim the right to disable your account due to noncompliance with its policies. Let’s also not forget that if users exhibit the mouselike behavior Pariser outlines in the chapter, it is most probable that the users in our Facebook example will continue to use Facebook rather than opt in to the Do Not Track list. If users need their daily digest of their Facebook news feed, they will most likely keep using it even if it means Facebook tracking their every move.
Think of smoking for a moment: people do it even though they know it is bad for their health, so why do they continue? Because it is addicting. This is exactly why, as much as we love using Facebook and Google, we will continue to use their services even though they are tracking us; without them, we would essentially go through “withdrawal.” Just as with smoking, we are in essence sacrificing our health (our privacy) for a moment of satisfaction and pleasure in using Facebook, Google, and the rest.

That leaves actions from individuals as the only remaining defense against the effects of the filter bubble. “In courts around the world, information brokers are pushing this view – everyone’s better off if your online life is owned by us” (Pariser 239). This may be the most frightening statement ever posed to users of the internet. It is as though they are saying the cyber world should be governed by a dictator or an oligarchy. The internet is supposed to be a level playing field for all users, a world where users are granted certain rights (rights to privacy). But again, this is not a perfect world, and users keep making the same mistake of not knowing what information is being collected about them until it is too late. We have to ask ourselves: who knows me better than I do? No one, and this is why, in order to defend ourselves against the filter bubble, we need to take action into our own hands and not leave it to the government and companies. If users truly want control over their activity on the internet, then users should be responsible for that control.

It is my strong belief that being educated and knowledgeable about the filter bubble and the existence of these privacy and personalization concerns is one of the most important factors in addressing the issues raised. This is why I believe the strongest idea for addressing the problems of the filter bubble is to develop algorithmic literacy. “It doesn’t take long to become literate enough to understand what most basic bits of code are doing” (Pariser 229). Even if Facebook told us transparently that it collects our information for reasons X and Y, that still wouldn’t tell us how, or what type of, data is being collected. The most effective way for users to understand how Google filters our search results or how Facebook filters our news feed is to understand the fundamentals of how their algorithms work. The personalization algorithms that companies use are all broadly similar: the server (the host) sends information from users into a database (where your data is stored) based on set patterns of behavior. We can begin to understand how such an algorithm is formulated by first learning the language of code.
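To make the idea of algorithmic literacy concrete, here is a deliberately tiny toy model of a click-driven personalization loop. This is my own illustration, not Google’s or Facebook’s actual code: real systems use far richer signals, but the basic feedback loop is the same. The more you click a topic, the more of it you are shown, which is exactly the mechanism that inflates the bubble.

```python
from collections import Counter

def personalize(candidates, click_history):
    """Rank candidate articles by how often the user clicked each topic.

    A toy stand-in for a real personalization pipeline."""
    affinity = Counter(item["topic"] for item in click_history)
    # Topics the user clicked most float to the top; topics the user
    # never clicks sink out of sight, narrowing what gets seen next.
    return sorted(candidates, key=lambda c: affinity[c["topic"]], reverse=True)

history = [{"topic": "sports"}, {"topic": "sports"}, {"topic": "tech"}]
feed = personalize(
    [{"title": "Election recap", "topic": "politics"},
     {"title": "Playoff preview", "topic": "sports"},
     {"title": "New phone review", "topic": "tech"}],
    history,
)
# The sports story now leads the feed, and politics drops to the bottom.
```

Even this ten-line sketch shows where a reader could intervene: a single click on a “politics” story immediately changes the ranking, which is the “if the algorithm does X, do Y” idea in miniature.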

Understanding how code works, and what a certain line of code does, ultimately gives you a behind-the-scenes, all-access pass into how a company’s personalization algorithm works. “We need to recognize that societal values about justice, freedom, and opportunity are embedded in how code is written and what it solves for. Once we understand that, we can begin to figure out which variables we care about and imagine how we might solve for something different” (Pariser 227). If everyone knew how to code, we could in reality alter our behavior to counteract the code that is already in place. To put it simply, if Google’s personalization algorithm does X, then we the users should do Y in order to avoid falling into the personalization trap. For example, when we enter a query into the Google search bar, instead of clicking on the top results, or even the first page for that matter, we could start clicking on links from the second page; that way we can avoid being trapped by the personalization algorithm, because we can safely assume the top results are heavily tailored to each individual based on previous behavioral preferences. By understanding code we can understand more of our behavior on the internet: how a site works, and how data is gathered, retrieved, stored, and assimilated. If users don’t want to sit in the passenger seat anymore, letting companies personalize their preferences, then they should start sitting in the driver’s seat. Take control of your behavior, and acknowledge that tracking exists and is unavoidable to a certain extent; even if you select the second page of a Google search result every time, Google may start personalizing around this newly altered behavior of yours.
However, by recognizing that such algorithms exist to impinge on your privacy and to create this filter bubble of yours, we can at least avoid it for a period of time. Understanding the code behind how an algorithm executes is crucial, because code is simply lines of text that do either X or Y. Code is deterministic and will not do anything that isn’t defined within those lines. So if we know the basics of how a generic personalization algorithm works, then we (the users) have a chance to take advantage of that knowledge and alter our behavior in order to avoid being filtered: we can outsmart the personalization algorithms!

Discussing the Reading for Tomorrow (April 25)

Just a reminder that for class tomorrow you should have read the encyclopedia entry on “Information.” If you need a copy of it again, you can find it in the Gale Virtual Reference Library.

Schement, Jorge Reina. “Information.” Encyclopedia of Communication and Information. Ed. Jorge Reina Schement. Vol. 2. New York: Macmillan Reference USA, 2002. 421-426. Gale Virtual Reference Library. Web. 24 Apr. 2013.

HW # 3: Combating the Filter Bubble by Camille Hart

Chapter 8 of The Filter Bubble delineates a host of possible actions individuals, the government, and companies can take to combat the filter bubble and the downsides of personalization. One of the first ideas Pariser puts forth is for individuals to stop being a mouse: to drop the routine of checking the same three or four websites a day and start “stretching your interests in new directions,” which gives the “personalizing code more breadth to work with” (Pariser 223). I thought this recommendation was the least likely to stop the filter bubble because it is impractical. Pariser urges people to basically try new things that they otherwise would not be interested in, so that the filtering code won’t over-personalize you and ascribe specific information to you.

I don’t agree with Pariser, because he is asking people to change their interests. He says, “Someone who shows interest in opera and comic books and South African politics and Tom Cruise is harder to pigeonhole than someone who just shows interest in one of those things” (Pariser 223-224). While this may be true, the obvious truth is that people may not have such diverse interests, and therefore they would not search for such diverse things. Pariser’s suggestion to stop being a mouse is impractical and would prove least effective in combating the filter bubble.

Another idea Eli Pariser puts forth is for companies and new filterers to “start making their filtering systems more transparent to the public, so that it’s possible to have a discussion about how they’re exercising their responsibilities in the first place.” (Pariser, 229) I think this is a good proposition but it is not quite the best. “There’s plenty that the companies that power the filter bubble can do to mitigate the negative consequences of personalization, but ultimately, some of these problems are too important to leave in the hands of private actors with profit-seeking motives. That’s where governments come in.” (Pariser 237)

The best idea Pariser presents to combat the filter bubble is for the government to require “companies to give us real control over our personal information.” (Pariser 238) I think this is the best solution because only we truly know our interests, inspirations, dreams, and beliefs, whether we decide to share them with the world or not. We know what’s best for us; therefore we should be the sole determinants of our personalization. No algorithm will ever understand who we really are.

HW, Filter Bubble Chapter 8

For the first seven chapters of The Filter Bubble, we learn about all of the horrible consequences that occur when we are trapped in the bubble.  Over-personalization of the Internet puts us on isolated cyber-islands, where we are denied the information we need to hear.  Rather than becoming a location of serendipitous learning, the web has become a scene that is painted for each one of us, and usually incorrectly.  In chapter 8, Pariser lays out his plan of actions on how to pop this bubble of isolation.  Dividing up the responsibilities between what government, corporations, and individuals should do, he shows us how we can return to a more balanced cyber experience.  Two of these ideas stand out, one being an extremely clever and easy way to solve for the Filter Bubble, and one that is weak and impossible to enforce.

Why do we get trapped in the Filter Bubble in the first place?  “Most of us are pretty mouselike in our information habits” (223).  What this means is that we go to the same three or four websites multiple times a day.  Every time we open the computer, we have the identical routine of visiting the same string of pages, and we rarely add a new page to the routine.  This makes it very easy for web services and data aggregators to zero in on our habits and deliver information to us that they think we’ll consume.  Driven by advertising revenue, they can form an image of who they think we are and subsequently bombard us with information they think we want.  Pariser’s solution to this problem is extremely simple.

As individuals, the main way we can lessen the power of the Filter Bubble is to stop being mice.  By breaking our little routines, we become impossible to pigeonhole, or to freeze into an image of what internet companies think we are.  Visiting a large variety of websites about wildly different subjects expands our Filter Bubbles until they are so large that we are hardly in an isolation bubble at all.  The reason this is such an effective solution is how easy it is to accomplish.  As we’ll see, some other solutions require complicated government regulation or the voluntary cooperation of a multi-billion-dollar corporation.  However, it is extremely easy to quit being a mouse.  All we have to do is read about a basketball game, a mathematical discovery, elections in the Philippines, and who Taylor Swift is dating this week.

Unfortunately, not all of Pariser’s solutions are as strong as “Stop being a mouse” (223).  Although it is relatively simple for individuals to work on breaking out of the Filter Bubble, it is extremely difficult to get big internet companies to change their policies.  Driven by enormous advertising revenues, companies such as Google and Facebook won’t change just because they should.  This is especially true of one of the solutions Pariser promotes: that internet companies should become more transparent, using the history of newspapers as a model.

Pariser writes, “Google and Facebook and other new media giants could draw inspiration from the history of newspaper ombudsmen” (230).  At a newspaper, an ombudsman answers public challenges to the paper’s reporting.  Ombudsmen were put in place in the 1970s to boost the credibility of newspapers in the eyes of the public and to create an atmosphere of transparency.  There are many reasons why this is not a workable model for internet companies.  First, internet companies are not facing any loss of business because of transparency issues: everything from Google News is generally trusted, and Google’s profits are strong and consistent.  Second, there is the manner in which the public interacts with digital media.  In the past, newspapers told everyone the news; there was no interaction, and any questioning could not be addressed until days later.  Today on the web, we interact with our news.  We can comment on a questionable Yahoo News story, or choose not to share a friend’s news story on Facebook.

In general, it is up to us to solve for the Filter Bubble.  Although Pariser calls for corporate and government actions, it is our personal responsibility to not get caught up in a Filter Bubble.  No corporation is going to voluntarily change its policies until the revenues begin to dry up.  Instead, we should force them to not pigeonhole us, all while consuming diverse news at the same time.

Work Cited

Pariser, Eli.  The Filter Bubble: How the New Personalized Web is Changing What we Read and How we Think.  New York: Penguin Group, 2011.  Print.

A Little Something to Brighten Up Our Day

I am not a user of Tumblr nor am I a cat lady, but when I stumbled upon this article I thought to myself, why not share something pleasant with everyone.

Once again our internet-crazed society has found something to obsess about. First it was a catchy song like “Gangnam Style,” then a popular dance like “The Harlem Shake”; now there is a new Tumblr trend in which cats are compared to lookalike male models: Des Hommes et des Chatons, http://deshommesetdeschatons.tumblr.com/.

Check it out.

Did we really become this lazy?

Ever since we watched the videos in class about Google Glass, I have been fascinated with the idea but even more obsessed with the parodies. Who would honestly have thought technology would evolve the way it has over the past decade? I have a little niece who is almost 3 years old, and she already knows how to use a cellphone, an iPad, you name it. Adolescents now know how to use these devices better than most adults do. It frightens me to contemplate what our future generations are being raised on, technology-wise. Google Glass may be one of the most advanced technologies created so far, but right now I am not warming to the idea. I stumbled upon this article, http://www.wired.com/gadgetlab/2013/04/code-in-google-glass-features/, which summarizes some of the things Google is trying to accomplish: “Google is working on a handful of cool new features for its smart frames, including two-finger touch-to-zoom and winking to take a photo.” Really? I don’t know if I am being skeptical, or what the youngsters these days would call “old school,” but I just cannot wrap my mind around how lazy we may become as people when even the simplest things, like taking pictures with our hands, are taken away from us.

Homework #2 on the Final Chapter in the Filter Bubble

Due April 25 (by the start of class)

Consider the suggestions that Pariser makes in the final chapter (8) of his book about ways that we can address the problems of the filter bubble. Then, on the course blog, write a long post (1-2 pages) in which you identify what you think is the strongest idea and the weakest idea that he has for combatting the filter bubble and the excesses of personalization. Fully explain and defend the two ideas you select and use quoted material from the book as part of your argument. You will be graded on the quality of your defense, the creativity in your writing, and the polish you put into your writing (i.e., you don’t want to lose points on this because of typos or grammar and spelling errors).

Can you hear me now? Cellphone turns 40.

Did you know that the first cell phone call was made 40 years ago? The first cell phone sold to consumers came out in 1984; it cost about $4,000, had an LED display, and took 10 hours to charge. If you want a more vintage look, get rid of your iPhone or Android and buy one here. After people became tired of holding a brick, the next big cellphone invention was the flip phone, described by Motorola as “about as thick as a fat wallet at the earpiece while tapering down to half the thickness of a deck of cards at the mouthpiece.” And who can forget the Nokia ringtone, which has progressed quite a bit over the years (listen here); a dubstep version, really? The next big thing after the flip phone was the camera phone, not sold in the United States until 2002, with Sony Ericsson’s T68i and its clip-on camera among the first. Before the iPhone and BlackBerry were top sellers, there was another phone that everyone had, and I remember getting it in pink as my first phone: the Razr. Motorola’s slender, square Razr series, first launched in 2004, was such a runaway hit that it sold 50 million phones in its first two years. After the average cell phone era had its run, the smartphone era took over. BlackBerry’s 5810 was the very first BlackBerry device with a cellular connection, and the Palm Treo W, also a pocket assistant, was the first phone to run a Windows mobile operating system. These phones started to blur the line between computer and phone. Last but not least, in 2007 came something that would reinvent the phone: the iPhone, an iPod, phone, and internet communicator in one device. Since then, flat, skinny smartphones from Nokia, Samsung, and HTC have reconfigured our expectations of a smartphone, and they are far from what the first phone was. So what do you think the next phone innovation is going to be? Assuming most of you have smartphones, how has yours made your life easier or harder? Could you live without it?

Congrats we are Behavioral Study Lab Rats!

http://vator.tv/news/2013-04-13-google-picks-up-behavioral-sensing-company-behavio

Yay!! So in order to upgrade Google Glass and make it practical, Google acquired the company Behavio, which uses people’s phone signals to track human behavior and then makes an “educated determination” to predict what we will do next. I used to think all this technology existed to make our lives easier; that was until I realized what is really going on. I kind of want to escape from this world where I’m feeling more like a game pawn and lab rat than a person with feelings, thoughts, and ideas. Seriously, it seems like Behavio is going to make our filter bubbles even more solid: impossible to break out of, and impossible for anything new to break into.

Thoughts anyone? Am I overreacting?

My digital identity after I die? You gotta be kidding…

http://www.computerworld.com/s/article/9238354/Google_lets_users_plan_their_digital_afterlife

“You can tell us what to do with your Gmail messages and data from several other Google services if your account becomes inactive for any reason.”

“We hope that this new feature will enable you to plan your digital afterlife — in a way that protects your privacy and security — and make life easier for your loved ones after you’re gone,”

Um, OK, really? This really got to me. It’s not enough that they want to control what we see, what we buy, and who sees us while we’re still kicking; now they want to control our “data” after we die? This all seems a little absurd to me. Reading this article doesn’t even make me feel human anymore, just another line of 0’s and 1’s in their computers. What if I don’t want anything to happen to my emails when I die, and I don’t want an automatic response sent to my friends, my family, my coworkers, my employer?

“Just in case you’re inactive and not actually dead, Google is set to send you a warning via an email to a secondary address, and a text message to your cellphone.”

How nice of them. In all honesty, I would love to get a text from Google saying, “If you are alive, please reply by texting ALIVE back to this number.” But what if I am away on a lengthy nine-month sabbatical and change my number? Then, according to Google, I’m dead? Woohoo, good to know.

Is anyone going to use this feature? Does it actually make anyone happy to know Google offers this?

Google, Samsung, Apple? Now, Soon, Later?

http://www.forbes.com/sites/anthonykosner/2013/04/14/phone-plan-google-now-samsung-soon-or-apple-later/?partner=yahootix

This article I came across seemed rather interesting, though as someone obsessed with my Samsung Note 2, I am not even remotely excited about any of the phones listed. When getting the Note 2, I saw the Nexus and was not impressed; the Samsung Galaxy S4 that is coming out isn’t all that impressive either and seems open to a lot of glitches; and finally, the iPhone is extremely played out. As an Apple veteran, I can honestly say the iPhones no longer excite me; on the contrary, since converting to Samsung I get easily irritated by the limited things Apple offers. Anyway, I thought it was interesting that the Google phone isn’t selling as much and that people are waiting for the other two.

Thoughts? Would you wait for the Samsung Galaxy S4, the iPhone "5S"/"6", or go with the Google phone?

Facebook Refines Ad Targeting

Based on what I have read in this article, it is clear that Facebook is simply trying to increase its advertising revenue over last year's. Facebook rolled out a new way to advertise and market products on its platform. By partnering with data companies that track online and offline purchase behavior (Acxiom, BlueKai, Epsilon, and Datalogix), these Facebook "partner categories" can predict what consumers purchase most and what they would buy again, based on previous purchases. By simply swiping or entering the 16 digits of a credit card, we are handing over a whole lot of information to companies without even knowing it. Talk about an invasion of privacy. This relates heavily to Pariser's book, seeing as how social networks are finding more and more concealed and profitable ways to obtain personal information. Coming from a marketing perspective, I would say this is a clever and different way to increase revenue. However, as someone who uses Facebook on a daily basis, I would say it completely invades our privacy, even more than Facebook already does.

Any thoughts?

Source: http://bits.blogs.nytimes.com/2013/04/10/facebook-refines-ad-targeting/

Team 4: Facebook and Racism

Megan Garber’s article, “When Your Facebook Friend Is Racist,” has a lot of connections to Eli Pariser’s The Filter Bubble. The first connection we saw was in chapter six of the book, on page 174, where Pariser explains that “how we behave is dictated in part by the shape of our environment.” With this quote, Pariser explains that the architect Robert Moses was able to regulate people’s behavior through the use of bridges and tunnels: he designed them specifically to keep low-income families out of Jones Beach. This relates to the article because Facebook is designed in such a way that if you use it often, you become “more positive towards racist content.” For example, Facebook is structured so that users can “Like,” “Recommend,” or “Share,” but not dislike or reject content. So if you come across a racist message, you may not like it, but at the same time you are not able to express your dislike. You don’t have that option.

Another connection between Pariser and the article is what Pariser calls the “God Impulse” (page 167). This is the idea that creative people feel empowered after they create or discover things; they feel as though they have built their own realm or universe that they can control. Facebook users may feel the God impulse when they create their Facebook pages. They have the ability to decide what comments and pictures they will post. Facebook users feel like they can say and do whatever they want on their pages. They can also decide who to let onto their pages. Facebook users are able to influence the behavior of others because their friends can view their profiles. If one friend is posting negative or racist statements, another friend may become complacent about it.

Group 1: When Your Facebook Friend Is Racist and The Filter Bubble

We found two connections between the article and The Filter Bubble.

First, on page 235, Pariser discusses how Facebook should add an “Important” button.  Combined with the “Like” button, this collaborative filtering would allow news stories to appear in people’s news feeds even if they are not something those people like.  Even by just seeing the headline, users are made aware of an important news story.  In the article “When Your Facebook Friend Is Racist,” Megan Garber asks, “What will happen if information gets fully social?”  If we rely on personalized information, and only on “Likes,” then we will not be shown news stories that are important or contrary to our beliefs.  There is also a chance of being shown stories that are racist, as described in the article.  By adding an “Important” button, personalized news feeds could surface the most crucial stories.

On page 154, Pariser discusses how it’s easier to agree with and like certain viewpoints of friends because they are the individuals you have usually agreed with in the past. Those past interactions and conversations with your friends affect the present, making you more inclined to agree with their viewpoints. If an individual on your Facebook friend list who is also a close friend posts a message referring to racism in America, it is unlikely you would just skip over it; more likely, you will “like” it for the sake of not disagreeing with them and ending up in the “outer circle.” Pariser explains that an event such as this could be a threat to public life itself, because these Facebook friends would simply be blinded to opposing viewpoints, and your rationale would be limited to the words of your friends, potentially manipulating your own view.

Facebook Team 5

Frequent Facebook users are more likely to be influenced by persuasive messages on Facebook than less frequent users. The design is such that we can only filter things we like; if we were to dislike something on Facebook, we would have to either unsubscribe from or defriend someone to stop seeing their posts. Pariser describes Facebook as a design geared more toward positivity than negativity, one that avoids confrontation and keeps people thinking they’re engaged when their only option is to agree or ignore rather than voice their opinion with a dislike button.  Facebook creates an “atmosphere of agreement,” as the article points out, filtering out disagreement and possibly critical thought. The article seems to imply that we are more likely to want to connect and agree with our friends than to criticize them and their opinions or posts. There’s a tendency to rely on Facebook’s news feed for your news, but the news feed is far too biased by your previous “likes” and the “likes” of your friends, giving you a very limited filter through which to view the world and the news.


Group 5 ~ Google Glass No-No’s

Although Google Glass has a strong positive impact on innovation, there are many major drawbacks to owning this new invention:

  1. Lack of regard for what is going on in your surroundings.
  2. Obscures the vision of the user.
  3. May cause major accidents while driving.
  4. People stealing them off your face.
  5. Too much radiation to the brain.
  6. People can overhear private conversations, such as directions, private places, etc.
  7. People may intrude on your personal space bubble, since personal information like your address and workplace can be shared.
  8. They’re ugly, unfashionable, and horrendous.
  9. TOOO expensive.
  10. Impractical, because you cannot get Wi-Fi everywhere, and even if you were to pay for 4G service, there are a lot of places that do not get a signal, such as trains, and the service itself would be too expensive.