Deepfakes are a relatively new development that has improved drastically since around 2018 (Baker & Capestany, 2018). Deepfakes are fake but realistic videos that can mimic both real and non-existent people, with an uncanny resemblance to real motion and voices. On one hand there are many interesting and funny videos, but other videos show how scary this technology can be. For one thing, deepfakes allow anyone to imitate famous people or politicians. This can have far-reaching consequences, especially for credibility and for our ability to figure out what is truly occurring.
Besides this, the damage to reputations and other harm caused by people believing these images or videos is another issue. However, some work has been done to counter this, such as the U.S. Defense Department's forensic tools for detecting deepfakes (Knight, 2018). This seems to be leading to a security-related arms race: the AI and cybersecurity fields have been developing rapidly, and their applications not just to industry but to defense and security are being explored.
References:
Baker, H., & Capestany, C. (2018, September 27). It’s Getting Harder to Spot a Deep Fake Video. Retrieved September 13, 2020, from https://www.youtube.com/watch?v=gLoI9hAX9dw
Knight, W. (2018, August 7). The Defense Department has produced the first tools for catching deepfakes. Retrieved September 13, 2020, from http://technologyreview.com/2018/08/07/66640/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/
Yes, the question now is whether the detection technology can keep up with the technology that actually produces the fakes. This will be a problem for governments and also for Facebook, Instagram, TikTok, etc. It may be harmless in some cases, but it could cause actual fighting between countries depending on who is being faked saying what!
–Professor Wallerstein