Societal Impact of Generative AI
In this video, Merve Hickok, Data Science Ethics Lecturer in the School of Information, points out potential risks that generative AI systems might pose if not responsibly designed and governed.
Transcript
For many years, researchers, businesses, civil society advocates, and policy makers have discussed the variety of societal and environmental impacts of predictive AI systems. These are sociotechnical systems. If not designed, procured, and governed in a responsible way, AI systems can amplify stereotypes and extreme worldviews. They can result in discriminatory outcomes, in the limitation or denial of access to resources or opportunities, or in the infringement of rights and freedoms. Opaque systems and untraceable or unjustifiable results can foreclose contestation.

Generative AI systems can embody these known challenges of AI systems. They also introduce new risks that need to be researched, mitigated, and managed, such as misinformation, disinformation, copyright infringement, the widening of digital divides, and deepfakes. The ease of use of generative AI, and the ability to interact with these systems in natural language, brings many opportunities, but it also lowers the threshold and cost for malicious actors to conduct cyberattacks, deepfakes, or fraud schemes.

We do not need to accept the current state. We can, and we should, contribute toward useful and trustworthy innovation. If we want to build a positive future, we also need to think about the societal impacts of generative AI: a future where the social and economic benefits of generative AI are more fairly distributed, where more cultures, languages, and identities are represented, and where the environmental impact of these technologies is mitigated.

It is natural that an emerging technology and its associated products will have issues to be fixed, and it is natural that these technologies have certain limitations. What we need now is to acknowledge these issues, be transparent about capabilities and limitations, and put safeguards in place to prevent foreseeable harms. To foster responsible and trustworthy generative AI that results in positive societal impact, we need to build public awareness and have inclusive debates about what we want our future to look like.