In today’s society, the algorithms behind services and platforms are a familiar and ubiquitous topic. These types of algorithms, which are used to find the “best” answer rather than a single unequivocally correct one, power many of the things we use most in our everyday lives. For example, Instagram made a well-documented (but not always well-received) change from a chronological feed to one of specially curated posts, in an attempt to show users the posts they would be most interested in first. Home devices, such as the Google Home and the Amazon Echo “Alexa,” use algorithms to gather information and best answer users’ inquiries. And perhaps most notably, Facebook has faced serious questions about whether it allowed users’ information to be used to feed them specifically targeted posts intended to influence the 2016 Presidential Election. Though the details differ slightly, this is very similar to the election influence that Tufekci suggested could happen through Facebook’s algorithm in 2015.
In the context of Tufekci’s article, “algorithmic gatekeeping” essentially means programming algorithms to decide what content a user sees. This is typically a relatively complex way of showing users content they have shown interest in, content they might engage with, or content the algorithm’s designer wants them to see. “Agency,” meanwhile, is a somewhat philosophical term for the decision-making capacity these algorithms are given. These concepts are relevant to our work as public writers because of how precisely they can reach a target audience.
So much of our goal in writing for the public is to appeal to our audience. Essentially, this is what algorithms accomplish – except that instead of creating a single piece of content and trying to make it appeal to as much of the audience as possible, the way a “human gatekeeper,” if you will, might have to, they can target specific individuals with content based on their interests.
For my campaign, this could be used to determine what kinds of infrastructure users might be most interested in. For example, in parts of Pittsburgh, as well as a few other notable areas, drinking water issues are particularly relevant to residents’ lives. These users could be shown a drinking-water-related infrastructure post to draw them in to the overall cause. For users aged 18 or under, perhaps public school repairs would be the most interesting infrastructure topic. The list of possibilities goes on and on, which is perfectly fine for an algorithm. Going through copious amounts of information, clauses, and conditions is their specialty; the same tasks would take a human gatekeeper hours, if not days.
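To make the idea concrete, the kind of condition-checking described above can be sketched as a simple rule-based function. This is a minimal, hypothetical illustration, not any platform’s real algorithm: the user fields (`location`, `age`) and the topic labels are assumptions chosen to match the examples in this essay.

```python
# Hypothetical sketch of rule-based content targeting for the campaign.
# Rules are checked in order; the first matching condition picks the topic.

def pick_infrastructure_topic(user):
    """Return the infrastructure topic most likely to interest this user."""
    # Residents of areas with known drinking-water issues (Pittsburgh is
    # the example from the essay) are shown water-related content first.
    if user.get("location") == "Pittsburgh":
        return "drinking water"
    # Users aged 18 or under are shown public school repairs.
    if user.get("age", 0) <= 18:
        return "public school repairs"
    # Everyone else receives a general infrastructure post.
    return "general infrastructure"

# Example audience: an algorithm applies these rules to every user at once,
# where a human gatekeeper would have to sort the audience by hand.
users = [
    {"name": "A", "location": "Pittsburgh", "age": 34},
    {"name": "B", "location": "Erie", "age": 16},
    {"name": "C", "location": "Erie", "age": 40},
]

for u in users:
    print(u["name"], "->", pick_infrastructure_topic(u))
```

The point of the sketch is that each user gets a different post from the same campaign, which is exactly the advantage the algorithmic gatekeeper has over a single one-size-fits-all piece of content.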
There is a fine line to walk here, however. How moral is it to use users’ information to, in effect, sell them something (whether literally, in a monetary sense, or figuratively, selling them on a cause)? Can companies be trusted to use this type of information responsibly? These are some of the dilemmas that corporations like Facebook, Google, Amazon, and others are facing today. Perhaps more importantly, they are the dilemmas that we may someday face in trying to best write for the public.