Responsible and Trustworthy Generative AI
In this video, Merve Hickok, Data Science Ethics Lecturer in the School of Information, discusses the concept of generative AI and the need for responsible and trustworthy generative AI practices, underscoring the importance of collaboration and ethical considerations in the adoption and utilization of this technology.
“Futuristic computer graphic of glowing human face generative ai” by vecstock is available via Freepik.
Excerpt from Transcript
Hi, my name is Merve Hickok. I'm a faculty member at the University of Michigan, teaching data science ethics at the School of Information and advising on responsible AI and data at the Michigan Institute for Data Science. I'm also the president of the Center for AI and Digital Policy, a global independent research and education nonprofit, where we engage in global AI policy work and educate future AI policy leaders in more than 80 countries.

So what is responsible and trustworthy generative AI? In late 2022, millions of people were introduced to generative AI products, which promised to transform industries for the better and provide innovative tools to mainstream users. Previously, AI systems were mainly predictive AI, focused on analyzing data sets for tasks such as classification, recognition, prediction, and recommendation. These systems required technical skills and knowledge to engage with. Generative AI systems, on the other hand, allow users to create new content. The content might be text, images, audio, video, or code, or it might be a conversion from one of these modalities to another. The content you can generate is pretty much limitless, and you only need natural language to make it work. Maybe even your grandparents are now using generative AI. These are exciting times, and we probably do not yet know many of the applications that will emerge from generative AI over time.

If we as individuals, businesses, and communities are to benefit from this emerging technology, we need all hands on deck. We need collaboration among a variety of stakeholders, and we need to promote human dignity, contribute to positive societal transformation, and protect the environment in the process. This is where responsible and trustworthy AI, including generative AI, comes into the picture. We all have a voice, and as AI users, investors, scientists, policymakers, and civil society, we all have a responsibility too.

Responsible AI means ensuring justified trust in AI technologies. Trust increases adoption rates, use cases, investment, and research. We know generative AI will be transformational. Some may choose to focus on short-term economic gains and exploit data, labor, and natural resources, which would eventually doom this technology, with significant impact on our societies and businesses. Or we can proactively prepare for positive societal, labor, and educational transformations. We can invest in enhancing accessibility as well as equitable access. We can increase data and AI literacy and ensure fair distribution of economic benefits. We can incentivize work on common goods and prioritize ethical and social progress. We can work on guardrails to fight against bias, polarization, and malicious uses. Responsible AI is not just one thing; it is what you decide to prioritize and implement.