Author Archives: Mike B

Summary of Activity on this Site


Number of Posts: 15
Number of Comments: 11

About Mike B


Homework #5

In this class, we learned many things that can be applied to our lives.  We learned how to have a better personal experience on the web by popping our “Filter Bubble”.  We learned about the amazing benefits and bizarre drawbacks that our society experiences as we become more reliant on digital technology for information.  However, the most important things we learned can be applied to courses here at Baruch.  Even though I am graduating, the lessons from this class could have been applied to the courses I took in the past, and they will carry over into whatever I do in the future.

As some others have written about, learning about the various databases, as well as how to search within them, was a huge part of the course.  Had I known about Thomson ONE, my research papers throughout my time here could have contained much more in-depth information.  More importantly, I think learning how to search within a database or search engine was one of the best parts of the semester.  Many students jump right into random searches when performing research.  However, it is crucial to first consider whether you’re asking the right questions.  It is also important to consider every word and phrase that pertains to the topic at hand.  These techniques allow students to perform actual research, rather than rely on slightly relevant Google search results.  Google is also part of the second thing we learned this semester.

Before this class, I had a vague idea that companies such as Google and Facebook were collecting information about us, but I had no idea of the extent.  I used to be shocked when I went to a website and saw an advertisement for something I had previously been shopping for on Amazon.  Now, this seems like a simple and comical example of how the websites we use daily use our personal information.  Because of this course and The Filter Bubble, we now know that the web companies we rely on most use our information to generate revenue.  This is relevant to being a business student at Baruch, not only because of how it affects us personally, but because it shows how American industry is now based on information.  Our economy is no longer dominated by manufacturing or agriculture; it is centered on who can collect the best information and who can use it best.

One way we can collect and utilize information online is through a Wiki.  Learning the inner workings of a Wiki, and how to construct and manipulate one, is the third important thing we learned in this class that can be applied to other classes at Baruch.  I was amazed at how a few individual hours from each member of the class added up to an impressive collection of data on a topic that was relatively difficult to research.  This would be a great way to display research in other classes.  We were asked many times to perform research in small teams and present it to the class.  However, having a whole-class Wiki allows much more in-depth research that can be shared with anyone easily.

Had I known about these three things, not only would I have been able to perform better research, but I would also have understood the importance of information in our society today.  Luckily, I know that these things will still be useful in my future career and personal life as well.

Wiki HW

The first change I made was to add a CitiBike station to the “Desired Amenities” page.  Even though there will be two stations a few blocks away from the plaza, there should be a CitiBike station right on campus.  This will provide maximum convenience for students and faculty.  I also added that food trucks should have access to the plaza, to allow for a greater range and better quality of street food to be offered.  Next, I added more information to the “Government Info about the plans” page that I had begun in class.  I emphasized some of the conditions on which Baruch was granted permission to build the plaza.  I also introduced the NYC Plaza Program.  Lastly, on the “Desired Uses” page, I added a photo of the recent Spring Fling to complement the thorough writing that is already on the page.

Yelp for the DMV?

There’s a great interview in last month’s Fortune Magazine with the Lieutenant Governor of California, Gavin Newsom.  He has taken his experience from the restaurant industry and wants to apply it to running a government.  More specifically, he discusses how Yelp changed the restaurant industry.  Restaurants went from serving customers how they wanted to fearing bad reviews online.  This meant that diners were now participants in a restaurant’s success, rather than subjects of the restaurant’s desires.  Newsom thinks this is applicable to government because today’s American citizen is more a subject of government than a participant in it.  He says that things are done to us, not for us.

I think this is a brilliant idea, but one that is difficult to implement.  It is smart because many Americans think that government can solve issues by throwing tax dollars at them.  This is not necessarily true, as some issues require better and smarter solutions, not gobs of money.  Also, there should be more accountability in government services, and it can be achieved in a “Yelp”-like way.  For example, the DMVs in the NYC area should all be reviewed online by users, and then rewards and punishments can be distributed accordingly.

The issue is that it is extremely hard to implement on a large scale.  Sure, it’s easy to review the service at the DMV, or how clean your local county park is.  However, what happens if the president or Congress gets bad online reviews?  Do we just kick them out?  So in general, I think that Newsom’s idea is great for small government services, but far from revolutionary.  Thoughts?

HW, Filter Bubble Chapter 8

For the first seven chapters of The Filter Bubble, we learn about all of the horrible consequences that occur when we are trapped in the bubble.  Over-personalization of the Internet puts us on isolated cyber-islands, where we are denied the information we need to hear.  Rather than being a place of serendipitous learning, the web has become a scene painted for each one of us, and usually painted incorrectly.  In chapter 8, Pariser lays out his plan of action for popping this bubble of isolation.  Dividing the responsibilities among government, corporations, and individuals, he shows us how we can return to a more balanced cyber experience.  Two of these ideas stand out: one is an extremely clever and easy way to address the Filter Bubble, and one is weak and impossible to enforce.

Why do we get trapped in the Filter Bubble in the first place?  “Most of us are pretty mouselike in our information habits” (223).  This means that we go to the same three or four websites multiple times a day.  Every time we open the computer, we have the identical routine of visiting the same string of pages, and we rarely add a new page to the routine.  This makes it very easy for web services and data aggregators to zero in on our habits and deliver information that they think we’ll consume.  Driven by advertising revenue, they form an image of who they think we are and subsequently bombard us with information they think we want.  Pariser’s solution to this problem is extremely simple.

As individuals, the main way we can lessen the power of the Filter Bubble is to stop being mice.  By breaking our little routines, we become impossible to pigeonhole, or to trap in an image of what Internet companies think we are.  Visiting a large variety of websites about wildly different topics expands our Filter Bubbles until they are so large that we are hardly in an isolation bubble at all.  The reason this is such an effective solution is how easy it is to accomplish.  As we’ll see, some other solutions require complicated government regulation or the voluntary cooperation of a multi-billion-dollar corporation.  However, it is extremely easy to quit being a mouse.  All we have to do is read about a basketball game, a mathematical discovery, elections in the Philippines, and who Taylor Swift is dating this week.

Unfortunately, not all of Pariser’s solutions are as strong as “Stop being a mouse” (223).  Although it is relatively simple to have individuals work on breaking out of the Filter Bubble, it is extremely difficult to get big Internet companies to change their policies.  Driven by enormous advertising revenues, companies such as Google and Facebook won’t change just because they should.  This is especially true of one of the solutions Pariser promotes: that Internet companies should become more transparent and use the history of newspapers as a model.

Pariser writes, “Google and Facebook and other new media giants could draw inspiration from the history of newspaper ombudsmen” (230).  At a newspaper, an ombudsman responds to the public’s challenges to the paper’s reporting.  Ombudsmen were put in place in the 1970s to boost the credibility of newspapers in the eyes of the public and create an atmosphere of transparency.  There are many reasons why this is not a workable model for Internet companies to follow.  First, Internet companies are not facing any loss of business because of transparency issues.  In general, everything from Google News is relatively trusted, and Google’s profits are strong and consistent.  Second is the manner in which the public interacts with digital media.  In the past, newspapers told everyone the news.  There was little interaction, and if a story was questioned, it could not be corrected until days later.  Today on the web, we interact with our news.  We can comment on a questionable Yahoo News story, or choose not to share a friend’s news story on Facebook.

In general, it is up to us to pop the Filter Bubble.  Although Pariser calls for corporate and government action, it is our personal responsibility not to get caught up in a Filter Bubble.  No corporation is going to voluntarily change its policies until the revenues begin to dry up.  Instead, we should force them not to pigeonhole us by consuming diverse news ourselves.

Work Cited

 

Pariser, Eli. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Group, 2011. Print.

Group 1 - When Your Facebook Friend Is Racist and The Filter Bubble

We found two connections between the article and The Filter Bubble.

First, on page 235, Pariser discusses how Facebook should add an “Important” button.  Combined with the “Like” button, this collaborative filtering would allow news stories to appear in people’s news feeds even when they are not something people would “Like.”  Even by just seeing the headline, people are made aware of an important news story.  In the article “When Your Facebook Friend Is Racist,” Megan Garber asks, “What will happen if information gets fully social?”  If we rely on personalized information, and only on “Likes,” then we will not be shown news stories that are important or contrary to our beliefs.  There is also the chance of being shown stories that are racist, as described in the article.  By adding an “Important” button, personalized news feeds can still show the most crucial stories.

On page 154, Pariser discusses how it’s easier to agree with and like certain viewpoints of friends because they are the individuals you have usually agreed with in the past.  Past interactions and conversations with your friends affect the present and make you more inclined to agree with their viewpoints.  If an individual on your Facebook friend list who is also a close friend posts a message referring to racism in America, it is unlikely you would just skip over it; it is more likely you will “like” it for the sake of not disagreeing with them and ending up in the “outer circle.”  Pariser explains that an event such as this could be a threat to public life itself, because these Facebook friends would simply be blinded to opposing viewpoints, and your reasoning would be limited to the words of your friends, potentially manipulating your own view.

Group 1- Consequences of Google Glasses

We found that a lot of the problems with Google Glasses come from their ability to secretly photograph.  There is a huge potential for copyright infringement at places like movie theaters.  It also makes it easier for someone to commit identity theft, such as by recording someone entering their PIN at an ATM.

Two more consequences come from problems that already exist with cell phones, only to an even greater extreme.  The first is social disconnect.  When in a social situation with friends or family, it will be far too tempting to resort to being entertained by the glasses.  There will also be problems with overreliance.  Once users are accustomed to using the GPS while driving, or the video chat to connect with friends, they will not be able to function without the technology.

The glasses will also exacerbate a problem that Pariser mentions: the lack of serendipity.  In the Filter Bubble, we are prevented from seeing things we might not like when it comes to web pages and the news.  Now, with Google Glasses, that same effect will happen in real life.  We will not try that new, different restaurant because Google will not think we would like it.

Technology Encountered Today

Communication Technologies:

Email

Text Messaging

Cell Phone Calls

Facebook Messaging

 

Household Technologies:

Programmable Coffee Maker

Alarm Clock App

Digital Clock

 

Entertainment Technologies:

Video games on Xbox 360 and iPad

Podcasts

iTunes Music Store

Streaming Radio App

Cable Box

HD Television

Remote Control

 

Education Technologies:

Online newspaper

E-Textbook

Blog website

 

Transportation Technologies:

MetroCard Reader

Subway

 

Smart Technology: Intended Benefits or Unintended Misuses

This article is extremely relevant to what we discussed in class a couple of weeks ago.  When there is technological innovation, there are both intended and unintended consequences.  Do the acceptable benefits of a new technology outweigh the possible unintended misuses?  This Wall Street Journal article uses the example of the BinCam, a new “smart” technology that puts sensors and cameras on everyday objects.  Every time you close your kitchen garbage can, BinCam snaps a photo.  The photo is then analyzed by a web service.  You are given points for being “green” and recycling what you’re supposed to, or points are deducted for disposing of recyclables in the trash.  The photo is then posted to your Facebook account.  This sounds like something that is good for the environment, so how could there be unintended misuses?
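The article does not describe BinCam’s internals, so the following is only a rough Python sketch of how a points step like the one described might work; the item categories, point values, and function names are my own assumptions, not BinCam’s actual system.

RECYCLABLES = {"glass bottle", "newspaper", "plastic container", "soda can"}

def score_trash(items_detected):
    # +1 point for ordinary trash, -2 for each recyclable wrongly thrown away
    # (categories and point values are assumptions for illustration only)
    return sum(-2 if item in RECYCLABLES else 1 for item in items_detected)

# Pretend the image-recognition step found these items in the snapshot:
print(score_trash(["banana peel", "newspaper", "soda can"]))  # prints -3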

The author describes how these new “smart” technologies are going to become more invasive in our lives.  Soon, it will not just be competing for recycling points against your Facebook friends.  There will be smart forks to tell us if we’re eating too fast, smart toothbrushes to tell us to brush more, and smart kitchens to tell us that two ingredients don’t go together.  What is wrong with this?  The answer is our loss of autonomy.  Humans aren’t creative and responsible because technology tells us what we “should” be doing.  We are creative and responsible because we make mistakes, try new things, and generally enjoy doing things we shouldn’t do from time to time.

For now, smart technology like BinCam is mostly “good” smart.  That means that although the technology can deduct points, the user still has the option to disregard it altogether.  But what happens when there are smart technologies that can’t be avoided?  The author describes these as “bad” smart.  Even though these bad smart technologies sometimes have good intentions, they completely remove the free will of the human user.  For now, these choice-removing technologies are things like driving sensors and facial recognition sensors.  However, there are endless possibilities for future technologies that remove human choice, and the consequences won’t be so beneficial.

Would any of you use a “good” smart technology, such as BinCam, a scale that tweeted your weight to your followers, or a pill bottle that “pings” the pharmacy when your medication is low?

What other intended benefits or unintended misuses can you see coming from smart technology?

You are what you like. Or are you?

We’ve been discussing how Google forms an identity of you through your click signals, and how Facebook forms an identity of you through connections and sharing.  What about what you Like on Facebook?  Two British men have made a website and algorithm called YouAreWhatYouLike.  They claim that they can map your personality according to the things you’ve Liked on Facebook, whether it’s musicians, politicians, movies, etc.  To do this, they divide the human personality into five areas: Openness, Conscientiousness, Extraversion, Stability, and Agreeableness.  Then, using your Likes on Facebook, they generate your specific personality score in each of the five areas.
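As a rough illustration of the idea (not the site’s actual model, which is not described here), a Like-to-personality mapping could be as simple as averaging hand-assigned trait weights over each recognized Like; every name and number in this sketch is invented.

# Assumed weights on the five traits for each Like:
# (Openness, Conscientiousness, Extraversion, Stability, Agreeableness)
TRAIT_WEIGHTS = {
    "jazz page": (0.8, 0.1, 0.2, 0.3, 0.4),
    "skydiving": (0.9, -0.2, 0.7, 0.1, 0.2),
    "chess club": (0.5, 0.6, -0.3, 0.4, 0.3),
}
TRAITS = ("Openness", "Conscientiousness", "Extraversion", "Stability", "Agreeableness")

def personality_from_likes(likes):
    # Average the trait weights of every Like the toy model recognizes.
    known = [TRAIT_WEIGHTS[l] for l in likes if l in TRAIT_WEIGHTS]
    if not known:
        return dict.fromkeys(TRAITS, 0.0)
    return {t: round(sum(w[i] for w in known) / len(known), 2) for i, t in enumerate(TRAITS)}

print(personality_from_likes(["jazz page", "chess club"]))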

As we’ve seen in other places, my identity created through my Facebook Likes is inaccurate.  The one area of personality they got right for me is Extraversion.  I thought I would be the perfect candidate for this test because I Like hundreds of pages on Facebook.  I Like every musician that has ever had even one song I’ve liked, every place I’ve visited on vacation, and even my favorite hot sauce.  Maybe this mass of information made it harder for the test to get a solid image of my personality.  Is the test accurate for you?

One of the issues with this website is the manner in which its results are presented.  Pariser writes about how people are fluid; we change our personalities based on our mood and situation.  So, when YouAreWhatYouLike tells me that my Stability is “Calm and Relaxed,” the only honest answer is “sometimes.”  True, I am generally calm, yet there are plenty of times I get stressed.  This method of mapping a personality is like a horoscope: it is full of truisms, or statements that everyone wants to believe are true.

Google’s Superiority Through Privacy

What is a more accurate image of your identity?  Is it the one you can create to convey your ideal self to the world?  Or is it your private self, the one who searches for an array of topics on the web under a cloak of privacy?  Facebook creates its identity of you through what you share and what you Like.  If Facebook sees you like The Terminator and The Predator, its image of you is one that likes action movies.  On the other hand, Google uses click signals to create an identity of its users.  These private interactions aggregate into a huge mass of data from which Google can make inferences about your identity.  This mass of data is much more accurate at capturing who you are, and there are many reasons why.

Eli Pariser writes, “Facebook’s share-based self is more aspirational: Facebook takes you more at your word, presenting you as you’d like to be seen by others.”  This is important for two reasons.  First, by using a share-based method of creating identity, a conscious user can keep a part of their true identity out of their online identity simply by not Liking or Sharing it.  This method also has a drawback for Facebook.  Since the goal of creating an online identity is to deliver personalized advertising, this ideal-self identity could prevent personalized ads from reaching you.  For example, if you’re a guy who doesn’t want his friends to know which television shows are his guilty pleasures, he simply does not have to Like them on Facebook.  However, Facebook’s advertising clients then lose out on a customer.

So how is Google’s click-based method more accurate at creating an identity of its user?  With Google, the user does not have to Like or Share a topic to indicate interest in it.  This means that everything the user searches for gets added to an enormous database of personal information.  Google users search all the time for things that they would never Like on Facebook.  This leads to another important reason Google’s identity creation is more effective: privacy.  Pariser writes, “These clicks often happen in an entirely private context.”  To use our previous example, Google would know that our male user was a fan of reality television because, privately, he could have searched for an episode recap or information on a character.  The world does not see every Google search a user makes, and that is what makes it a more powerful method for creating an identity.

The truth about Facebook and Google is that both are relatively ineffective at creating identities of users.  Pariser writes about the “uncanny valley,” a situation where something appears to be lifelike but is not.  This characterizes our online identities, and they creep us out.  Another error that Facebook and Google make is committing the “fundamental attribution error,” behaving as if we have only one fixed identity.  The truth is that humans are fluid; our identity changes from situation to situation (family dinner vs. out with friends), and over time as well.  That being said, Google is the better tool for creating identities of users, because with Google people can search for anything without having to consider their ideal selves.

Love in the Time of Algorithms

There was a very interesting interview with Dan Slater in last week’s Wall Street Journal.  He has written a book called Love in the Time of Algorithms, which is a complete analysis of the online dating industry (eHarmony, OkCupid, etc.).  I thought this topic was relevant to what we’ve been reading for two reasons.  The first reason is timing.  The interview touches on how the online dating industry is about to be rocked by Facebook’s Graph Search.  Industry experts are anticipating that people looking for love interests will use the powerful Graph Search for free, rather than paying expensive monthly subscriptions to dating services.  The second reason Slater’s book is relevant is a little more complicated, and it has to do with the way information is delivered to customers of online dating services.

I always wondered how the business model of eHarmony was successful.  If they perform their job well one time, they are rewarded by losing two customers.  How can they make money if doing their job loses them business?  Slater explains that they have to deliver both efficient and inefficient information to their customers.  They have to deliver efficient information (legitimate dating prospects) in order to satisfy and retain the customer.  However, if all they delivered was efficient information, the customer would find a match and quit before paying more than a few months of subscription fees.  The answer is inefficient information.  These inefficiencies are calculated by the service’s software and presented to the customer as profiles of members who no longer use the site, or of people who have only created a free profile without in-depth information.  These dead-end dating prospects keep the customer distracted and engaged in the service, all while they keep paying their monthly subscription fees.  By presenting customers with these dead ends, the programming of the online dating services keeps the business profitable.
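A toy sketch of that mixing idea might look like the following, assuming a hypothetical batch-builder that pads a few genuine prospects with dead-end profiles; the ratio, names, and function are inventions for illustration, not any dating site’s actual code.

import random

def build_match_batch(real_prospects, dead_end_profiles, batch_size=10, efficient_share=0.3):
    # Mostly dead ends, with a few genuine prospects mixed in to keep the customer engaged.
    n_real = max(1, int(batch_size * efficient_share))
    batch = random.sample(real_prospects, min(n_real, len(real_prospects)))
    batch += random.sample(dead_end_profiles, min(batch_size - len(batch), len(dead_end_profiles)))
    random.shuffle(batch)
    return batch

print(build_match_batch(["Alex", "Sam", "Riley"], ["inactive_user_%d" % i for i in range(20)]))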

I have never used an online dating service, but has anyone in the class?  If so, were you presented with inefficient information?

Here’s a link to the interview.  If it requires a subscription login, I can pull it up in class for anyone who wants to check it out.

Love in the Time of Algorithms 

Privacy Information on the Web

Author and biotechnology expert Lori Andrews was a guest on one of my favorite radio shows last week.  Her book, I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy, discusses many of the same things Eli Pariser does, yet she looks at them through the lens of privacy rights.  Although her philosophies are a little sensational and extreme, she touched on two things I found very interesting.  First, she discusses how information on the web is moving from the public sphere to the private sphere.  Her example was Blueservo, a web service where ordinary citizens can become virtual deputies and watch webcams of the Texas-Mexico border.  If they see illegal immigrants crossing the border, they can report it to the authorities.  Andrews says that this act of public work (policing) being done by private citizens will expand to many more areas, such as neighborhood watches.

The second thing Andrews discusses is how we will eventually lose so much trust in the Internet that we will stop using the great things the web does.  Everyone from credit card companies to employers, schools, and the government now has the ability to make decisions about our lives based on what we do on the Internet.  If this trend continues to get more invasive, a point will eventually come when the scale tips and we stop using the web for great things such as crowdfunding and medical diagnoses and support.  Where do you think this point is?

Lori Andrews’ Web Page

Blueservo

Team #1 on Sources in The Filter Bubble

We categorized the sources from the introduction to The Filter Bubble by sorting them by their type.  For example, we separated the books, magazines, etc.  We chose this method for its simplicity.

 

Books: 8
Web Article: 6
Newspaper: 2
Interview: 6
Blog: 8
Magazine: 2
Internet News: 3
Book Review: 1
Press Review: 1
Staff Report: 1
Law Journal: 1
Total: 39


Example Citations:

Book

The Facebook Effect: The Inside Story of the Company That is Connecting the World (New York: Simon and Schuster, 2010), 296.

Web Article

ReadWriteWeb, June 26, 2009, accessed Dec. 19 2010, www.readwriteweb.com/archives/they_did_it_one_team_reports_success_in_the_1m_net.php

Newspaper

Julia Angwin, “The Web’s New Gold Mine: Your Secrets,” Wall Street Journal, July 30, 2010, accessed Dec. 19, 2010, http://online.wsj.com/article/SB1000142405278703940904575395073512989404.html

Interview

Danny Sullivan, phone interview with author, Sept 10, 2010

Blog

"A Day in the Internet," Online Education, accessed Dec. 19, 2010, www.onlineeducation.net/internet.

Magazine

Richard Behar, “Never Heard of Acxiom? Chances Are It’s Heard of You,” Fortune, Feb. 23, 2004, accessed Dec. 19, 2010, http://money.cnn.com/magazines/fortune/fortune_archive/2004/02/23/362182/index.htm

Internet News

“Ovulation Hormones Make Women Choose Clingy Clothes,” BBC News, Aug. 5, 2010, accessed Feb. 8, 2011, www.bbc.co.uk/news/health-10878750

Book Review

James Bamford, “Who’s in Big Brother’s Database?,” The New York Review of Books, Nov. 5, 2009, accessed Feb. 8, 2011, www.nybooks.com/articles/archives/2009/nov/o5/whos-in-big-brothers-database

Press Review

Cass Sunstein, Republic.com 2.0. (Princeton: Princeton University Press, 2007)

Staff Report

“Preliminary FTC Staff Privacy Report,” remarks of Chairman Jon Leibowitz, as prepared for delivery, Dec. 1, 2010, accessed Feb. 8, 2011, www.ftc.gov/speeches/leibowitz/101201privacyreportremarks.pdf

Law Journal

Yochai Benkler, “Siren Songs and Amish Children: Autonomy, Information, and Law,” New York University Law Review, Apr. 2001.

NY Times Facebook Search Article

The article was difficult to find because I used incorrect search terms on the New York Times page, as well as in the database through the library.  I had to be shown the correct search terms, and then I found the article on the NY Times page.

Somini Sengupta is a graduate of UC Berkeley who has received a prestigious award for reporting.  She has an extensive history with the New York Times, having worked in West Africa and India.  She has been writing about technology for the Times for the last year and a half.

People in the Article:

Kathryn Hymes - former master's student at Stanford

Amy Campbell - doctorate in linguistics from Berkeley

Loren Cheng - expert in natural language processing

Clifford I. Nass - communication professor at Stanford

Keywords and Ideas:

Human Behavior, communication, search engines, homophily, restructuring code, understanding, algorithms, psychology



Comments:

"This was a very interesting article, but I completely disagree with the author. He has somehow tried to shoehorn Yelp into supporting his argument about how we are all information conformists on the web. Clearly, he is totally unfamiliar with Yelp. First, who actually bases decisions on a total Yelp score? Experienced Yelp users know that you need to read all of the bad reviews of an establishment, because the criteria they have been judged poorly about may be something you don't care about. For example, many places get slammed for having a cash only policy, not accepting reservations, or for taking five minutes to get a drink order filled. These things for me do not detract from a restaurant experience one bit. Another mistake the author made was using one example, Momofuku Ma Peche. In general Yelp is extremely accurate. I have almost never had an experience where my opinion is drastically different from Yelpers with similar criteria as myself. Lastly, Yelp has changed the restaurant industry. Now, restaurants know that one horrible service, or one case of food poisoning will result in reviews that every single prospective customer can see. I'll continue to be a loyal Yelp user, although I do think it is fun to try a new place and look it up afterward from time to time."
posted on May 13, 2013, on the post Yelp! Sometimes no Help!

"I agree with you that this is a poor solution. The California state senate is wasting time and effort on this. First, it is impossible to implement. How is Facebook supposed to magically create a delete button, then prevent other people from reposting, and making it exclusively available to minors? This is way too hard to enforce, and is a waste of time. Also, I believe that there is nothing inherently wrong with a minor making a mistake online. By dealing with the consequences of their controversial post, they will learn to become smarter web users and gain a sense of responsibility. By shielding them from accountability online, they will develop poor cyber habits."
posted on May 13, 2013, on the post Bill protecting kids’ online privacy advances (The Eraser Button)

"I think this is a great attempt by Canon to save themselves from the sinking pocket camera industry. This camera is a really innovative piece of technology. Unfortunately, it may be a piece of technology with no market. I think there are two huge problems. First, who wants to pay $300 for marginally better photographs? Somehow Canon really needs to get the price down so that young people can easily afford them. Another problem is having this pocket camera means having to carry another item around. One of the best benefits of the phone camera is that it is with you all the time without having to carry another device. Considering these two problems, and the fact that smart phone cameras get better by the minute, Canon may want to jump ship and stick with the professional photography equipment."
posted on May 7, 2013, on the post Canon’s comeback against the smartphone

"Your post brought back some old memories! Around 1997 , my grandmother lived out in the middle of the woods. Since my mom was worried about her, she bought her the most state of the art cell phone available at the time, the Motorola StarTAC. I remember it seemed like such cutting edge technology at the time, and now its funny to imagine that. I would say at this point I could not live without my Smartphone/Ipad because I've created a life where I need to rely on always being connected. I know I can put off paying my bills at home because I can use my bank app between classes. Also, just the other day I knew that I could wait to get Nets tickets until right before the game, because I knew I could use stubhub on my phone. If this connectivity was taken away, I would have to greatly alter my habits and life. I hope the next phone innovation is HD Projection. A lot of phones can now play movies from Itunes and Netflix. I have always wanted the Iphone to have a projector that allows you to play those movies on a television sized screen, no matter where you are."
posted on Apr 25, 2013, on the post Can you hear me now? Cellphone turns 40.

"I agree with you 100% that technology like this can lead to us all becoming the characters from Wall-E. However, this article answers some of my biggest criticisms of the Google Glass. When we were studying them in class, one of the biggest problems I found with them is that in order to engage certain functions, you literally have to talk to your glasses. Could you imagine an entire train or sports stadium with everyone screaming "Ok, Glass! Take a picture!" At least adding the wink-to-click photo feature allows them to be used in a more subtle manner. That being said, I'll take my regular camera any day."
posted on Apr 25, 2013, on the post Did we really become this lazy?

"This was a very interesting article. I thought one of the most interesting parts was the point that even though the advertising is on a illegal website, it was still aimed at the target market. Even though content piracy services are illegal, ads for the Military still showed up on a site primarily used by 21-25 year old males. Maybe the companies we "trust" so much know that they can defer blame or play dumb if an online watchdog calls them out for advertising on an illegal site. Blaming the Ad Council seems like too convenient of an out for them, and it makes them seem shady to me. If it was really unintentional, they should accept responsibility and make sure the ads cease."
posted on Apr 10, 2013, on the post “The online ad business is what we would call a ‘dark market'”

"As it currently sits, Facebook is here to stay. It it too effective at connecting people with their friends and their interests. Facebook as a business has two huge competitive advantages. First, is it quantity of users. Facebook did a masterful job of turning software that was used by college students, into a network that is used by everyone. The fastest growing age demographic for Facebook is not in their 20's, it's people 30 and above. This huge user base will keep them going for the distant future. The other competitive advantage that Facebook has is the easy interface. MySpace didn't die because it got old, it died because Facebook's user interface makes everything else obsolete. You both mentioned how Facebook has competition from Twitter and Instagram, but their software can't do half the things that Facebook does. That being said, Facebook does have two risks to its future. The first is it becoming "MySpaced." Just like Facebook made MySpace obsolete, any one of the number of new social networks that come out every year could be the one that does it to Facebook. The second risk for Facebook is not new. They are constantly struggling on how to turn their huge user base into revenue. The advertising is getting more personalized and invasive, so maybe there will be a tuning point where users become fed up and leave."
posted on Mar 21, 2013, on the post Can Facebook ever become irrelevant?

"I think that Pariser would find Rushkoff’s concerns 100% legitimate. The users of Facebook are definitely the content, and Facebook is keen on capitalizing on that. The biggest concern about Facebook when they had their IPO was how they would improve their manner on capitalizing on their huge network of data and user’s. These secret recommendations are just a new way that Facebook is working to bring in revenue. I think the outrage of Facebook doing things like this comes from the serious manner that social networks are being used for today. Many people use Facebook for political and business reasons. For Facebook to be user friendly, the users have to be using it as something less serious. If we use Facebook to post pictures of our pets, reconnect with an old friend from high school, or chat about a great old movie, then how bad are the consequences that can come from Facebook mining our data? No consequences can come from people seeing that you liked "The Avengers." The problem is, if you post a status about a heated government issue, or something otherwise controversial, then there are serious consequences if Facebook associates that with you to other people behind your back."
posted on Mar 5, 2013, on the post When Your Likes on Facebook Spiral Beyond Your Control

"Very interesting, I can't believe I've never heard of this before. I would say that this is definitely relevant to Pariser’s book because of what a Klout influenced web will become. If everyone has to tweet every 30 minutes in order to achieve their maximum Klout score, then there will be a mass of information on the web even bigger than what we deal with today. There will be greater needs for personalization, because our attention windows will need to be even more focused. I hope that Klout scores will stick to being used to give people freebies rather than punishing those who don’t have high scores. People who do have a lot of influence on social media should be able to capitalize on it, and get some perks. However, just because someone isn’t retweeted by celebrities every day should not count against them when qualifying for jobs. Like the author said at the end, there is no correlation between something being popular and something being interesting."
posted on Mar 5, 2013, on the post You are not hired because your Klout score is not high enough.

"Thanks. The show is called Coast to Coast AM. Unfortunately it has a reputation of being all about UFO's and aliens, because that is what the host talks about during the week. However, they have an alternate host who takes weekend duty, and he has great discussions and guests who talk about geopolitics, economics, and things such as this as well."
posted on Feb 13, 2013, on the post Privacy Information on the Web

"comment"
posted on Jan 31, 2013, on the post Test post