International Security Course – Fall 2020

Ethical Restraints

Brose waved away ethical concerns in his article, devoting only a single sentence to them, and seemed to suggest that what will keep technological advances from being adopted is not ethics but a “failure of imagination.” Failure of imagination is rarely the problem when it comes to technological advances; failure to predict their implications or the harms they cause is far more common. What his article did illuminate, however, are some of the types of morally questionable technology that may emerge from the fourth industrial revolution.

Miller, in contrast, offered a thorough analysis of autonomy and morality in his discussion of new technologies. In the United States and Europe, these concerns are being taken at least somewhat seriously. Miller mentioned that US companies have refused to provide technology to the Department of Defense. More recently, Google, Amazon, Microsoft, and IBM have paused or ended contracts with law enforcement agencies following the massive protests against racial injustice and police brutality. In February, the District Court of The Hague ruled that an algorithm-based program designed to identify and monitor people likely to commit benefits fraud violated the European Convention on Human Rights and privacy legislation. The Pasco County Sheriff’s Office in Florida is under scrutiny after the Tampa Bay Times published an investigative report on its “Intelligence Led Policing” initiative, which criminologists described as “morally repugnant.”

The shift towards greater responsibility and the ethical use of technology is important, but as Scharre points out, treating AI as a new arms race increases the incentive to launch untested technologies without a thorough understanding of their risks and limits the opportunity for oversight. Liberal democracies have some safeguards, such as civilian oversight of the military and freedom of speech and of the press, that allow dissent and scrutiny of new programs; adversaries such as China are not similarly restrained. As liberal democracies adopt stronger privacy laws and protections for their citizens, authoritarian regimes remain free to test AI and other technologies on their own populations before deploying them in the international arena. Scharre’s suggestion that the US try to work with Russia and China to develop safety protocols is a good one, but the lack of trust among these nations still makes an arms race likely. The US and its allies, assuming it still has any, should come together to hold each other accountable to shared principles of privacy and human rights and pool resources to help detect and counter adversaries’ attacks in this new frontier.

One thought on “Ethical Restraints”

  1. I liked where you were going with this post, Stephanie. There is a vast literature on the unintended implications of technological innovation, where developers failed to anticipate how a particular product or process could have seriously negative effects if not used as originally intended. And there is an equally large literature on the “technological imperative,” which basically argues that once the knowledge of how to design or build something exists, it is essentially impossible to prevent it from being built. Perhaps the two most famous (or rather, infamous) examples of this are (a) nuclear weapons and (b) biological weapons.

    And to this list we may soon need to add AI and facial recognition. Neither technology is inherently “evil” or dangerous, but used for the wrong purposes by the wrong people they certainly can be. What is currently going on in China is living proof!
    –Professor Wallerstein