

Erosion of Trust

In this video, Merve Hickok, Data Science Ethics Lecturer in the School of Information, discusses how generative AI systems can impact employee and stakeholder trust when deployed without guardrails.

Transcript

At a meta level, the most important question is about trust. Generative AI systems that are not developed or deployed responsibly may result in an erosion of trust. When systems are deployed without guardrails, driven by commercial priorities and incentives, the impacts can be long-lasting. When these systems are deployed without clear descriptions and explanations of their capabilities and limitations, users may grow frustrated with the inaccuracies. Some of these inaccuracies, or less-than-ideal safety and security measures, may lead to incidents that impact people's rights, freedoms, and safety. Malicious use or exploitation of defects may create financial, physical, emotional, and reputational damage.

A lot rests on the shoulders of those who are in senior positions in both business and government. Hype can attract users and drive commercial gains in the short run. However, in the long run, all these issues combined can significantly erode public confidence and trust. Trust needs to be justified. It is earned when systems add value and perform as expected and intended. Trust drives adoption; more adoption drives more investment. More investment in business, government, and academia drives more innovation and access to these technologies. A short-sighted approach undermines the benefits and possibilities of these systems.

At a societal level, the effects of this erosion of trust are a dangerous combination. Imagine a scenario in which people do not trust what they see, what they hear, or what they read. Ask yourself whether you have already started questioning the authenticity of what you encounter daily, and what that means for the fabric of society.