
Artificial Intelligence

Ethical Considerations

In this video, Barbara Hiltz, Clinical Associate Professor of Social Work, discusses the ethical landscape of generative AI and its implications within social impact organizations.

Transcript

Throughout this course, we've brought up the idea of ethics and ethical considerations because we think this is so critical. I want to delve a little deeper into the ethical landscape of generative AI and particularly its implications within social impact organizations.

As I've mentioned before, I'm a social worker and I work in the area of social impact management. Within that context, there are a few things that people tend to raise the most with me related to ethics. The first has to do with accuracy: people say that the information coming back isn't accurate or that the model has simply made things up. This phenomenon of AI making things up is often called "hallucination."

Quite honestly, the concerns here are often connected to user error. So let me say a little more about this. At the beginning of this course, we mentioned that understanding how the AI was trained and how it works is really fundamental. It’s trained using lots of publicly available data to predict what's most likely to come next. That's not the same as searching the internet.
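To make that "predict what's most likely to come next" idea concrete, here is a toy Python sketch. It is nothing like a production model; it just counts which word tends to follow which in a tiny invented corpus, which is why it can continue familiar phrases but has nothing to offer about names it never saw. Every name and the corpus itself are made up for illustration.

```python
# Toy sketch of next-word prediction: count which word follows which in a tiny
# invented corpus. Real generative AI uses neural networks trained on vast
# data, but the core idea is the same -- it continues text based on learned
# patterns; it does not search the internet or look facts up.
from collections import Counter, defaultdict

corpus = "social workers support communities . social workers support families".split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower seen in training (no lookup, no search)."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("workers"))  # -> "support": a learned pattern, not a verified fact
print(predict_next("Barb"))     # -> "<unknown>": sparse data means no reliable answer
```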

So let me use myself as an example. If you were to ask generative AI to tell you something about Barb Hiltz, what you get back will either be completely made up or the model will just tell you it can't do it. This is because there's just not that much information out there about me, certainly not enough to train the model in any kind of reliable way. So in cases like this, generative AI is not your solution; Google is probably your solution. I will note that AI models are getting better at this as they evolve, so it's something worth keeping an eye on.

I'll also add here that it is really important to fact-check your results. I say this to students all the time who maybe want to use generative AI in my classroom. I’m actually completely okay with people using generative AI to write an outline or even to do an edit of the work, but you are responsible for your work, so you always have to double-check it.

Perhaps related, concerns are often raised about bias. Certainly, generative AI is not devoid of bias. When we ground ourselves once again in our understanding of how these models are trained, this shouldn't surprise us. If the data going in is biased or one-sided because it reflects the biases that exist in our societies, then the output from the AI will also reflect those biases. Imagine a mirror reflecting back society's prejudices. Generative AI, trained on publicly available data, may amplify existing biases and affect decisions in areas like hiring, criminal justice, and many of the other decisions we face every day.
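Here is a toy illustration of that "bias in, bias out" point, using a small invented corpus. It wildly oversimplifies how real models encode bias, but it shows how a pattern-based system ends up mirroring whatever skew exists in its training text.

```python
# Toy illustration of biased input producing biased output: if the training
# text pairs one profession with one pronoun far more often, a frequency-based
# predictor will reproduce that pattern. The corpus is invented for illustration.
from collections import Counter

training_sentences = [
    "the nurse said she would help",
    "the nurse said she was busy",
    "the engineer said he would help",
    "the engineer said he fixed it",
    "the engineer said he was busy",
]

pronoun_after = {"nurse": Counter(), "engineer": Counter()}
for sentence in training_sentences:
    words = sentence.split()
    for role in pronoun_after:
        if role in words:
            idx = words.index(role)
            # Record the first pronoun that appears after the role word.
            for word in words[idx + 1:]:
                if word in {"she", "he", "they"}:
                    pronoun_after[role][word] += 1
                    break

# The "model" simply mirrors the skew in its training data.
for role, counts in pronoun_after.items():
    print(role, "->", counts.most_common(1)[0][0])  # nurse -> she, engineer -> he
```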

There was a great podcast episode of The Daily by The New York Times where they discussed whether AI should be guided by social values and, if so, whose values those should be. In that episode, they interviewed Kevin Roose, who also co-hosts another great tech podcast called Hard Fork. In it, he talks about a number of ways that companies try to train models to handle bias.

One way is to change the way the model is trained. Remember, models are trained on information that we give them, so we could give the model more diverse data to look at and train from. We could also do what's called reinforcement learning from human feedback. Here, we might take a model that's been trained and hire people to test it and rate its responses. Those ratings are then fed back into the system to further train it.
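Below is a minimal, hypothetical sketch of that human-feedback step: people rate a trained model's responses, and the ratings become data for the next round of training. Real systems train a separate reward model and fine-tune with reinforcement learning; the function names here (generate_candidates, rate_response) are placeholders, not a real library API.

```python
# Hypothetical sketch: collect human ratings of model responses so they can be
# fed back in as a training signal. Everything here is illustrative.
from statistics import mean

def generate_candidates(prompt: str) -> list[str]:
    # Stand-in for the already-trained model producing a few possible answers.
    return [f"Answer A to: {prompt}", f"Answer B to: {prompt}"]

def rate_response(response: str) -> int:
    # Stand-in for a hired human rater scoring 1 (poor or biased) to 5 (good and fair).
    return 4 if "A" in response else 2

feedback_dataset = []
for prompt in ["Describe a typical engineer.", "Describe a typical nurse."]:
    for response in generate_candidates(prompt):
        feedback_dataset.append(
            {"prompt": prompt, "response": response, "rating": rate_response(response)}
        )

# The collected ratings become the training signal for the next round of tuning.
print(f"{len(feedback_dataset)} rated examples, average rating "
      f"{mean(r['rating'] for r in feedback_dataset):.1f}")
```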

These are strategies to train the model before it’s in use. There are also things that we can do after that initial training has happened. We could, for example, ask the model not to be offensive. So we could tell it not to stereotype based on things like gender or race. The model has already been trained, but we’re now going to give it some added rules and try to make it adhere to those rules.
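One common way to attach those added rules after training is a system message that rides along with every request. The sketch below assumes the OpenAI Python SDK and uses a placeholder model name; the rules text itself is only illustrative, not an official guardrail.

```python
# Minimal sketch of post-training rules via a system message, assuming the
# OpenAI Python SDK. The model name and rules text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL_RULES = (
    "Do not stereotype people based on gender, race, age, or disability. "
    "If a request invites a stereotype, answer with neutral, factual language."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your organization has access to
    messages=[
        {"role": "system", "content": GUARDRAIL_RULES},          # the added rules
        {"role": "user", "content": "Write a job ad for a nurse."},
    ],
)
print(response.choices[0].message.content)
```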

Finally, we could work behind the scenes to actually change the requests for information. This is sometimes called prompt transformation: you ask the model something with a prompt, but behind the scenes the system adds other things to your prompt to give it more context and detail that might help reduce bias. So maybe you write a pretty poor prompt, but behind the scenes the system transforms and edits it to add clarity and, in this context, perhaps to reduce the impact of the bias that was baked into the model's training.
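Here is a minimal sketch of that idea: the user's original prompt is expanded behind the scenes before the model ever sees it. The added context and the send_to_model function are hypothetical placeholders for whatever system you are actually using.

```python
# Minimal sketch of prompt transformation: quietly expand a sparse user prompt
# with context and bias-reducing instructions before it reaches the model.
def transform_prompt(user_prompt: str) -> str:
    """Add context and bias-reducing instructions to a sparse prompt."""
    return (
        "You are writing for a nonprofit audience. Use inclusive, specific "
        "language and avoid assumptions about gender, race, or ability.\n\n"
        f"User request: {user_prompt}"
    )

def send_to_model(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[model output for: {prompt!r}]"

# The user types a pretty poor prompt...
raw = "write about a good leader"
# ...but the system sends the transformed version instead.
print(send_to_model(transform_prompt(raw)))
```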

So, I mentioned some great things that companies are doing to try to address the issue of bias. But when we think about this, and when we think about accuracy at an organizational or a user level, one of the most important things we can really try to foreground here is the idea of having people involved. Sometimes folks call this "human in the loop." A framework that I really like is to think about "person, AI, and then person again." So people are involved on the front end, then we utilize the AI, and then a person is involved again.

For example, anytime I use generative AI to generate content, I first tell it what I want, I give it some context, and I tell it about some of the crucial ideas that I want included. Then I ask the AI to do its thing, and then it's back to me. I will edit the content, make sure it makes sense, make sure it’s relevant to the audience, and make sure that it’s accurate. So having this human in the loop at the front end and at the later stages, especially if that human has an eye to bias, can really be critically important.
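As a rough sketch of that "person, AI, person" framework, the steps might look like this in code. The draft_with_ai call stands in for whatever generative AI tool you actually use; the names and review logic are illustrative, and the final review step is where a person checks accuracy, relevance, and bias before anything goes out.

```python
# Sketch of the "person, AI, person" workflow: a person frames the request,
# the AI drafts, and a person reviews before anything is used.
def person_before(topic: str, audience: str, must_include: list[str]) -> str:
    """Front-end human step: state the goal, context, and crucial ideas."""
    return f"Write about {topic} for {audience}. Be sure to cover: {', '.join(must_include)}."

def draft_with_ai(brief: str) -> str:
    # Placeholder for the generative AI step.
    return f"[AI draft based on: {brief}]"

def person_after(draft: str, approved: bool, notes: str) -> str:
    """Back-end human step: check accuracy, relevance, and bias before use."""
    return draft if approved else f"REVISE ({notes}): {draft}"

brief = person_before("volunteer recruitment", "local food bank supporters",
                      ["weekend shifts", "no experience needed"])
draft = draft_with_ai(brief)
final = person_after(draft, approved=False,
                     notes="verify shift times; check for inclusive language")
print(final)
```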