In today’s society, the algorithms behind services and platforms are a familiar and ubiquitous topic. These types of algorithms, which are used to find the “best” answer rather than a single, unequivocally correct one, power many of the things we use most in our everyday lives. For example, Instagram made a well-documented (but not always well-received) change from a chronological feed to one of specially curated posts, in an attempt to show users the posts they’d be most interested in first. Home devices, such as the Google Home and the Amazon Echo “Alexa,” use algorithms to gather information and best answer users’ inquiries. And perhaps most notably, Facebook has faced serious questions about whether it allowed users’ information to be used to feed them specifically targeted posts intended to influence the 2016 Presidential Election. Though the details differ slightly, this is very similar to the election influence that Tufekci suggested could happen using Facebook’s algorithm in 2015.
In the context of Tufekci’s article, “algorithmic gatekeeping” essentially means programming algorithms to decide what content a user sees. This is typically a relatively complex way of showing users content they have shown interest in, content they might engage with, or content that the algorithm’s designer wants them to see. “Agency,” meanwhile, is the somewhat philosophical term for the capacity these algorithms are given to make “decisions” on their own. These concepts are relevant to our work as public writers because of how precisely they can reach a target audience.
So much of our goal in writing for the public is to appeal to our audience. This is essentially what algorithms accomplish: instead of creating one piece of content and trying to make it appeal to as much of the audience as possible, the way a “human gatekeeper,” if you will, might have to, they can target specific individuals with content based on their interests.
For my campaign, this could be used to determine what kinds of infrastructure users might be most interested in. For example, in parts of Pittsburgh, as well as a few other notable areas, drinking water issues are particularly relevant to residents’ lives. These users could be shown a drinking-water-related infrastructure post to draw them in to the overall cause. For users aged 18 or under, perhaps public school repairs would be the most interesting infrastructure topic. The list of possibilities goes on and on, which is perfectly fine for an algorithm: working through copious amounts of information, clauses, and conditions is its specialty, handling in seconds tasks that would take a human gatekeeper hours if not days.
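As a toy illustration, the kind of conditional matching described above might look like the sketch below. Every field name, rule, and topic string here is hypothetical; real platform targeting systems rely on far more complex ranking models, but the underlying “clauses and conditions” logic is the same in spirit.

```python
def pick_infrastructure_post(user):
    """Return the infrastructure topic most likely to interest a user.

    `user` is a hypothetical profile dict with optional "interests",
    "age", and "location" fields.
    """
    if "drinking water" in user.get("interests", []):
        return "drinking-water repairs"
    if user.get("age", 99) <= 18:
        return "public school repairs"
    if user.get("location") == "Pittsburgh":
        return "drinking-water repairs"
    return "general infrastructure funding"

# Two example users: a Pittsburgh resident concerned about water,
# and a 17-year-old elsewhere in the state.
users = [
    {"location": "Pittsburgh", "age": 34, "interests": ["drinking water"]},
    {"location": "Erie", "age": 17, "interests": ["sports"]},
]
for u in users:
    print(pick_infrastructure_post(u))
```

A human gatekeeper could apply these rules too, of course; the difference is that an algorithm can evaluate them for millions of users at once.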
There is a fine line to be walked here, however. How moral is it to use users’ information to, in effect, sell them something (whether literally selling in a monetary sense, or figuratively selling them on a cause)? Can companies be trusted to use this type of information responsibly? These are some of the dilemmas that corporations like Facebook, Google, Amazon, and others are facing today. Perhaps more importantly, they are the dilemmas that we may someday face in trying to best write for the public.
5 thoughts on “Algorithms and Their Impact”
I think one of the biggest issues with the ethics is the consumer’s ability to choose. You ask about the morality of selling people something based on their information, but isn’t that all advertising is, even without the use of algorithms? Think about magazines, for example. You might see an ad for golf balls or cologne in a men’s health magazine, but finding one in Cosmo or Seventeen would be absurd. You advertise intentionally based on consumers’ “information,” or the general habits of consumers; young women simply aren’t interested in golf balls. However, I think the real ethical issue is the personalization of it and the fact that consumers have no ability to avoid or combat the effects. If I don’t like ads for a certain makeup brand or a particular perfume, then a magazine that regularly advertises these products can be easily avoided. Yet with algorithms, consumers can’t identify, nor make a conscious choice about, encountering particular content. Like the article says, with print or even film media such as the news, everyone picks up the same copy. With social media influencing people on an individual level, the transparency is gone, and manipulation seems the best word to describe the outcome.
I really enjoyed reading your blog post– I thought it was one of the best ones I have read so far actually. The first and second paragraph do an exceptional job of summarizing the article, and giving an introduction to the discussion. It was very clearly stated, and the ideas were fluidly developed. This is what I would consider very effective public writing.
I find the idea of algorithms very interesting, particularly in today’s technological and political climate. Algorithms are a key component of advertising and of delivering the right information to a targeted audience. The fact that algorithms could possibly be used to tamper with elections goes to show the extent to which they have developed in this world.
For my campaign, using an algorithm would be particularly useful. My campaign is focused on lowering tuition rates for students, so information regarding students’ backgrounds and economic status would be very useful in targeting an audience that is especially drawn to the topic. I would say, though, that in general my campaign does not necessarily need to RELY on this, because most students struggle to pay for college, and on a college campus a huge number of students will already be reached. If we were to expand this campaign, an algorithm would be helpful, because not all families have college-aged students, and some people do not have children at all. One point I do have to make when addressing your questions on the morality of using algorithms, especially with online services, is that people who use these services have a choice about whether to use them, and they are fully aware that their information is not necessarily private. It is technically up to the user whether they are willing to have their information shared, and if they’re not, they should not use the site. These are private companies, not government affiliated, so it is up to the people to comply or not with these conditions.
I thoroughly enjoyed reading your blog post. It certainly brings up some interesting points about algorithms. I liked that you brought up that, ultimately, a primary goal when writing for the public is to appeal to your audience. Like you said, algorithms basically master this: they can target each individual and tailor what they see to their exact tastes and preferences. While this sounds great, at times I certainly find it borderline violating.
Specifically, I was somewhat freaked out when reading the section of Tufekci’s article where she discusses that, through the use of algorithms, Facebook has the power to induce mood swings. These mood swings can then be used to make the user more likely to buy an advertised product. I can literally think of multiple instances where this has happened to me, and until now, I never thought much of it. Far too often I will log into Facebook with intentions to just reply to a message quickly, but before I know it I’m on Urban Outfitters’ website, literally about to enter my credit card information to buy a cardigan. This all happens because of advertisements that pop up all over my Facebook page, luring me in to making a purchase. But hey, I guess it happens to the best of us; in the article, Tufekci even says that a recent study showed that “at an elite university, sixty-two percent of undergraduates were not aware that Facebook curated users’ News Feeds by algorithm.”
All I have to say is that targeting people through their vulnerabilities seems a little unethical to me. You bring this point up in your closing paragraph. When you ask if companies can be trusted to use information responsibly, I think you can refer to the discussion of the Ferguson protests. Facebook preventing stories about the Ferguson protests from being shown on users’ feeds does not seem like responsible use to me.
I think you pose some great questions at the end of your post that are far from easy to answer, and you think of great examples of effects on your campaign. But as writers, especially for the public, should we take these algorithmic capabilities into account when composing pieces? My thought is that my writing could change rather significantly knowing that it would have a higher likelihood of reaching someone very interested in the topic. For example, if you knew someone cared about infrastructure quality, you might spend less time convincing the reader of an issue and more time presenting realistic solutions to the problem.
I understand that you can’t remove persuasion entirely from the article, because you still can’t assume that your entire audience agrees that infrastructure reform is needed. However, if you use too much persuasion at the beginning of the article, people who are dedicated to the topic and well informed on the issue could become uninterested pretty quickly. Given the trend of social media and of algorithms being used within such platforms, an interesting question arises: how should we compose articles to ensure that our work is as effective as possible? I will certainly be more skeptical of articles that appear on my feed after reading this article!
Really succinct movement through the article and its relevance to public writing. One interesting aspect of the things we see in social media feeds and in search engine results is that there is just SO MUCH information. You note Instagram’s move from a chronological feed to a curated one, but is a chronological feed still possible? The sheer number of posts for someone with even a fairly small network is hard to keep up with; in other words, there will always be new posts, and it will be difficult to scroll through all of them. For search engine results, organizing by date is probably not all that useful…we want “relevance,” but how do we make decisions about what is relevant?
So, I guess, my question is: if ethics are a real concern here, what are our ethics for organizing information for users to see? What do you think some good practices would be?