Recently, in a “fireside chat” held as part of the College’s Ethics Week, Baruch law professors Yafit Lev-Aretz and Nizan Packin engaged attendees in a lively and wide-ranging discussion of the ethical challenges and legal implications of emerging AI technologies, particularly ChatGPT and generative AI.
Professors Packin, who joined the law department faculty in 2013, and Lev-Aretz, who arrived in 2018, have published extensively on technology policy and the legal and ethical issues related to big data, machine learning, and AI. Professor Lev-Aretz also directs the Robert Zicklin Center for Corporate Integrity’s Tech Ethics program, which spotlights emerging tech policy issues.
Here, they share some thoughts on the current state of ethics and law in the AI sphere and the importance of responsible AI in today’s society.
What effect is AI having on the corporate legal landscape?
A: Some people refer to it as the next industrial revolution, and we believe that is not an overstatement. Corporations must be mindful of various legal challenges when using AI, including privacy concerns, exposure of trade secrets, copyright infringement, defamation, and misinformation. Everyone is dealing with the same uncertainties, but over time we believe regulators and the legal community will work together to offer operationalizable guidelines.
You’ve co-authored numerous pieces on the pitfalls of AI. What areas of AI do you view as the most concerning for society?
A: We’re most concerned, as with any new technology, with the unintended consequences that often become clear only after a harm has already materialized. Against the backdrop of historical inequality and social injustices, we are troubled by the possibility of automated decision making in which error and bias can systematically harm disadvantaged populations and minorities.
Another concerning type of bias, automation bias in consumer finance, was the subject of a project that Professor Packin worked on. The project, which received coverage in the Wall Street Journal, explored how people are increasingly relying on algorithms for making decisions rather than seeking input from a human expert. This deference to algorithmic results may reduce people’s creativity and critical thinking, or worse.
Any predictions on future developments in the pursuit of “responsible AI”?
A: AI, like all technologies, is mostly a neutral tool; it’s what you choose to do with the tool that matters. We predict that concerns about stifling innovation will surface any time regulators attempt to put safeguards in place. We are advocates of innovation, but also of asking serious questions about which innovations we would like to promote as a society and which should not be allowed.
The question of responsible AI is a hard one, not only because responsibility is a normatively loaded term, but also because of the international technology arms race. Even if the United States were to put regulatory protections in place to limit the development and use of AI, other countries that don’t share U.S. views might not follow suit, and indeed many do not. Another factor in this arms race is the use of AI technology in warfare. The U.S. has to keep up its investment in AI R&D to make sure other countries don’t gain significant advantages that could pose a national security risk.
Can you speak to the challenges and opportunities that AI programs like ChatGPT pose in the classroom?
A: Students who are interested in learning will find that generative AI can help them in many ways, including as a personal tutor and a proofreader. But obviously generative AI is an effective promoter of plagiarism, and if we can’t determine the authenticity of a student’s work, we are unable to tell whether students are making intellectual progress. At the same time, we believe it forces us, as educators, to think creatively about these tools and how we can use them to promote better learning.