
Artificial Intelligence

AI and Legal Considerations

Charles Garrett

University of Michigan

In this video, Professor Charles Garrett delves into the evolving legal and ethical challenges surrounding generative AI, including copyright, ownership, and regulatory frameworks. You'll learn about current legal disputes, emerging global regulations like the EU Artificial Intelligence Act, and resources to stay informed on the rapidly changing legal landscape of AI in creative fields.

Excerpt From Transcript

As AI gains greater traction, policymakers and legislators around the world are grappling with how to regulate it to minimize possible harms without sacrificing potential benefits. Since different governments are taking different approaches, it's important to follow the news as it affects where you live and work. Many legal issues remain unsettled when it comes to AI in creativity, and big questions surrounding intellectual property, copyright, authorship, licensing rights, and remuneration still need to be ironed out.

Animated debates within US legal circles revolve around AI ownership. Works created solely by generative AI are not currently protected by US copyright law, even if the AI model is trained on human sources and produces output in response to human prompts, because the term "author" does not extend to machines or GenAI programs. Creative work produced through collaboration between a human and an AI program has also inspired legal disputes. As a result, various stakeholders impacted by AI are involved in ongoing lawsuits, grievances, and class action suits.

Legal cases have been brought by large enterprises as well as individuals to determine what data can be used to train GenAI models, and who owns and controls AI-generated content. Getty Images, for instance, has sued the company that developed Stable Diffusion for using Getty's image database without permission. The New York Times has sued OpenAI and Microsoft for allegedly using millions of Times' newspaper articles to train their GenAI software. Class action suits have been launched by musicians, visual artists, writers, and actors to protect their work from being used to train AI models without authorization and/or compensation. Lawsuits have even been brought by individuals who believe GenAI companies have unfairly invaded their privacy or unlawfully damaged their reputations.

Because many cases remain unsettled, the AI industry has started to become more proactive in addressing demands for legally cleared training data sets, stronger licensing policies, legal indemnification mechanisms, and fair royalty schemes that reward those parties whose work is used to train GenAI models. Far more attention to date has been paid to sorting out how existing US laws apply to AI technologies than to enacting new AI regulations. But various competing AI initiatives, laws, and policies are now advancing at the federal, state, and municipal levels, including antitrust investigations launched by the federal government into the roles played by leading AI tech companies. At the moment in the United States, there is no universal legal framework, which means that users of GenAI in Los Angeles may need to abide by different rules than AI users based in New York.

No matter their location, anyone who seeks to use GenAI as part of a creative practice needs to stay informed about the AI legal landscape, especially when it comes to issues of responsible use, copyright, credit, and compensation. Creative workers in Europe have received more guidance than their US counterparts. In March 2024, the European Parliament formally adopted the EU Artificial Intelligence Act, a landmark piece of legislation governing AI. The AI Act, as it is commonly known, establishes a shared legal framework for AI within the European Union, seeks to regulate the providers of AI systems, and assigns individual applications of AI to risk categories (unacceptable, high, limited, or minimal) that are subject to legal regulations. The AI Act has many adherents, so it's possible that some of its contours will eventually shape AI law in the United States.

Every year the Stanford University Institute for Human-Centered Artificial Intelligence produces the AI Index, a comprehensive report that includes detailed coverage of AI policy and governance, including a timeline of significant AI policymaking events as well as updates on global AI legislative efforts. Reading through the latest annual Stanford AI Index is a great way to get up to speed with AI legal developments in the United States and beyond. Keeping up with recent AI legal developments can be challenging because of the sheer volume of cases, but a variety of public and private resources are available to help.

The EthicalTech initiative, an interdisciplinary partnership hosted by George Washington University, maintains a comprehensive AI litigation database that tracks ongoing and completed litigation involving AI. Various public organizations, professional associations, and tech-friendly private law firms also maintain specialized litigation trackers related to copyright protection, data privacy, liability, and other pressing AI issues. Since you've enrolled in this course, you likely know about various online courses, workshops, and certifications that can help you learn all about AI. Websites dedicated to AI, including Ben's Bites and The Rundown AI, and tech-friendly news outlets like TechCrunch, The Wall Street Journal, and WIRED, can also help you keep track of the latest AI legal developments.