Artificial Intelligence

A Lens to Take With Us

In this video, Elle O'Brien, a Lecturer in the School of Information, provides perspective on how to approach conversations about AI, how to think critically about work involving AI, and how to ensure that AI tools are used responsibly.

Transcript

So now that we've learned a little bit about AI terminology and seen some examples of what learning can look like for these systems, I want to zoom out and offer a lens, a sort of perspective that we can take with us through the next few lessons.

So what's the problem with this depiction of AI, the picture we were critiquing earlier? We identified quite a few problems with it, but I think one of the bigger meta-problems is this: the more we think of AI as a brain, the more we tend to slide into assuming it can do our thinking for us. If it's really a brain, if it's really a person, then maybe I can trust it with a lot. Maybe it's better than me at my job. Maybe this is the thing that should be in charge. That tendency gets us thinking about handing over our work, and our responsibility, to the system.

The reason this can lead us astray is that real AI systems all require work. Building an AI system requires several kinds of work, all of which have to be done by people. Ideally, we have to answer questions like: Why do we want to do this? What is the goal of creating this system? Who is it for? Who is supposed to use it, and what are their motivations? Where are we going to get the data? Remember that AI systems increasingly follow a data-driven paradigm, and that's what we're working with much of the time, so they are very data hungry. It often takes a lot of data and a lot of computing to make them work. And of course, how do we even build the system? We have to orchestrate data, computing, and our understanding of the problem, and human developers have to put it all together. There are a lot of moving parts here.

In my experience, real AI systems are simply never trivial to build. There is always complexity. Of course, using an AI system also requires quite a bit of work. If you are deciding whether to adopt some kind of AI system, you have to answer questions like: Is this better than what we had before? What kinds of mistakes does it make, and are those tolerable to me? What can happen if it fails? Because using AI often means automating something, handing over a part of our work that we may not want to do ourselves, we have to understand what kinds of mistakes it can make and what the consequences of failure might be. It might not fail in the same ways that we're used to.

Real AI systems also require responsibility. Something I heard recently, from meeting with people who research software engineers, really stuck with me: engineering is about responsibility. When we engineer something like a bridge, we're building a structure that lots of people will rely on every day for their commute. The work of the engineer is not just to design the bridge; it is also to take responsibility for ensuring the bridge is constructed to the standards required for safe use, so that people can rely on it every day to get to work or get home. Millions of people every day put their trust in engineering and in engineers as professionals.

When we build AI systems, we still have to take responsibility for them. Take a pacemaker as an example. Many pacemakers have AI systems on board to detect when something is going wrong or when the rhythm needs to change; some implanted devices even administer defibrillator shocks. These are things that really affect people's lives. There are lives on the line. When we build AI systems, we can't defer responsibility to the AI; we still have to own every part of what we make. It is no exaggeration to say that lives depend on this. The AI component can be useful, and it can automate things, but people's lives depend on the system as a whole. We still have to take responsibility for everything we make with AI components in it.

When we talk about AI, we still need to do a lot of thinking. It's not a get-out-of-thinking-free card, and we need to take responsibility for what we build. The cool blue brain can lead us to a place where we think maybe we don't have to do this work, maybe we can just let the AI handle things. But that's not how AI systems work today. We are still in the driver's seat, and it's still up to us, at every step of the way, to take responsibility for the things we make, from start to finish.