Why Governance
In this video, Merve Hickok, Data Science Ethics Lecturer in the School of Information, explores the role of governance in responsible AI use.
Transcript
Generative AI is expected to bring a variety of benefits to businesses. In a Deloitte Consulting survey of more than 2,000 director- to C-suite-level respondents across six industries and 16 countries, these expected benefits range from increased efficiency and productivity to reduced overall costs and stronger client relationships. Whether a business can realize these expectations depends on how well it is prepared for the changes.
An organization's preparedness may be assessed by factors such as its data strategy, technology infrastructure, business strategy, governance, and human capital. The same survey of senior management shows that organizations feel more prepared in terms of their technology stack than in these other areas. However, the main enabler of, or obstacle to, any major development will be the human factor. Whether it is the talent needed to develop generative AI, to execute strategy, or to manage the risk and governance of these systems, investment in human capital is a necessity for any business considering developing or deploying generative AI.
Every organization will go through a journey of exploration, analysis, and preparation before deploying its generative AI use case. Some of these preparations involve understanding the regulatory landscape and compliance requirements, establishing a corresponding internal governance framework and roles, and training those who will be involved in the process. Organizations move through higher levels of responsible AI maturity as the enterprise adopts and establishes best practices and as those practices become standard operating procedures. The target maturity level, in other words, the level at which the organization aims to operate, depends on its mission, practices, and objectives.
Every enterprise starts at an exploration level. What is important is to keep an eye on the strategy and the final goals and to routinely assess the gaps between the current situation and the target. In an organization with mature responsible AI practices, responsible-use governance and enforcement practices are routinely employed. They become part of doing business and are improved as new data or trends emerge.
But what is governance? For some, governance may initially mean legal compliance. However, governance actually encompasses an organization's own rulemaking, implementation, and enforcement mechanisms to guide its functions. Governance is a collection of mechanisms, roles, and responsibilities—rules, processes, procedures, policy guidelines, and limitations, AKA guardrails. Ideal corporate governance creates transparency about the values, the rules, the goals, and the controls. It guides the work of the organization and helps build trust with employees, customers, the community, regulators, and investors.
Specific to AI and generative AI, governance means ensuring that potential risks are minimized and benefits are maximized in the design, development, and use of generative AI. It ensures that the organization has control over how these systems impact processes and decision-making at large within the organization, and that the organization's practices are contestable and lead to accountability. With responsible AI governance in place, organizations can invest in talent, data, and infrastructure while laying out the expectations and responsibilities at every stage of the life cycle: design, training, development, testing, security, deployment, and the ongoing monitoring of generative AI systems.
When you make a decision to develop or adopt a generative AI system, you make a commitment to continuously monitor the system's performance and improve its functions. AI and generative AI systems are not static, so from the very beginning, organizations need to think about governance and accountability mechanisms, changes in existing roles and responsibilities, internal monitoring and control methods, documentation, and the communication flow for the tasks and decisions impacted by the use of generative AI.
It is important to embed a responsible and trustworthy approach in your systems from the very start and to engage with internal and external stakeholders. Understand the risks and the impacts, both positive and negative, of your proposed system on these groups. Similar to predictive AI systems, generative AI necessitates strong governance for several reasons: training data can contain toxic, harmful, or illegal content; models are complex and opaque; systems can be attacked and manipulated by malicious actors; users may interact with the system in ways its developers did not foresee, opening up whole new concerns; responses may be inaccurate or biased; and finally, if something unexpected happens, the impact on individuals and organizations may be significant.