ChatGPT Teach-Out

Using ChatGPT in our Lives / Lesson 5 of 5

ChatGPT Live Workshop - Q&A Submissions

10 minutes

The questions below were asked during the ChatGPT Live Workshop on April 11, 2023. We were unable to respond to every question during the session, so the presenters have answered them here for all Teach-Out learners. The questions appear in no particular order.

Question 1: How can we trust that the information it produces is accurate? Couldn’t it be abused by the designer for mental challenge & entertainment?

William Black: My rule of thumb is to *not* trust it. While it does an excellent job of drafting a formal email, it performs catastrophically poorly at drafting a bibliography or producing accurate citations (at the moment). It absolutely can be abused, like any tool, and should be used with caution.
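For example, one lightweight check on a ChatGPT-drafted bibliography is to verify that each DOI actually resolves. Below is a minimal sketch (an illustration added here, not a workshop demo), assuming the `requests` package and the public Crossref API; the DOIs shown are illustrative:

```python
# Minimal sketch: check whether a DOI from a ChatGPT-drafted citation is real
# by looking it up in the public Crossref API. Assumes `requests` is installed.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))     # a real paper, so True
print(doi_exists("10.9999/not-a-real-doi"))  # a hallucinated DOI, so False
```

A citation can pass this check and still misattribute its content, so treat it as a first filter rather than proof of accuracy.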

Question 2: How can ChatGPT be helpful regarding quantitative data analysis? If it is, can it also read and describe tables of data-analysis results?

Kentaro Toyama: ChatGPT is perhaps weakest at tasks that require complex logic, and just as William showed, it often gets math questions wrong. That's a deeper issue with the way that neural networks, or deep learning (the class of technologies underlying ChatGPT), work. So, at present, I'd say ChatGPT would not be great for quantitative data analysis. But, arguably, quantitative data analysis is easier than generating human-like language, so I expect AI will handle those tasks better and better.
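One practical pattern in the meantime (an illustration added here, not a workshop demo) is to do the arithmetic in code, where it is deterministic, and use ChatGPT only to narrate the verified numbers. A minimal sketch, assuming the pandas library and a hypothetical CSV file:

```python
# Minimal sketch: compute summary statistics deterministically with pandas
# instead of asking ChatGPT to do the arithmetic itself.
import pandas as pd

df = pd.read_csv("survey_results.csv")  # hypothetical data file
summary = df.describe()                 # exact counts, means, quartiles
print(summary.to_string())

# The verified table can then be pasted into ChatGPT with a prompt such as
# "Describe the main patterns in this table": language work, not math.
```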

Question 3: Will ChatGPT or other AI tools replace developers in the next 5-10 years? No one knew 5 years ago that there would be ChatGPT.

Kentaro Toyama: “Replace” is a tricky word, because it can mean two different things: (1) AI can do everything a good developer can do, replacing all of their work; or (2) AI can assist some developers in becoming sufficiently more productive that employers don’t hire as many developers. I don’t believe (1) will happen in 5-10 years; but, I think (2) will begin to happen – in fact, it is probably already happening in some industries, such as customer service, where many consumers must first get through a chatbot or automated IVR system before they can speak to a human agent. This is also similar to the way that robots have replaced human labor in manufacturing.

Question 4: It was just listed that ChatGPT has the tendency to hallucinate. Hallucinations and hallucinating, up to now perhaps, have been a distinctly human condition. Is using the term in conjunction with ChatGPT anthropomorphizing the technology? I think it is and that it seems dangerous.

Kentaro Toyama: Human beings appear to have an innate tendency to anthropomorphize (hence, children treating stuffed animals as live entities). Computer scientists are human, too, and even though we know that the systems we build aren’t actually sentient beings, we tend to use words that suggest an analogy or metaphor with human capabilities and behaviors. “Hallucinate” is one of many instances of that. The danger, as you point out, is when the general public interprets these terms literally. In fact, even computer scientists might not fully be able to think themselves out of the illusion of real sentience, as former Google employee Blake Lemoine appears to have been unable to do.

Question 5: Can ChatGPT digest information incorrectly or be fed biased or inaccurate data that could twist its logic and reasoning resulting in it becoming unreliable like Wikipedia?

Kentaro Toyama: Yes, and it is almost certainly already doing this. It’s widely understood that the class of machine learning algorithms that ChatGPT and other contemporary AI are based on is only as good as its data. If the data is biased or wrong (as is the case with much of the data that ChatGPT is trained on, based as it was on large corpora of largely unscreened text from the internet), then what it learns will be similarly biased or wrong. In addition, another aspect of these algorithms is that the specifics of what they’re learning and generating are often “uninterpretable” to people – even the creators of the systems don’t completely understand why ChatGPT responds as it does. (You could argue that unpredictability is a feature of intelligence, not a bug.) So, even beyond inaccuracies and bias in the data, there are other ways in which ChatGPT can get things wrong. (One example is “hallucinations,” discussed elsewhere here.)
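The “only as good as its data” point can be made concrete with a toy model. The sketch below (an illustration added here, with a deliberately skewed two-sentence corpus) trains a next-word predictor by simple counting; whatever pattern the corpus carries is exactly what the model learns:

```python
# Toy illustration of "garbage in, garbage out": a next-word model built by
# counting can only reproduce the patterns, and biases, in its training corpus.
from collections import Counter

# Deliberately skewed corpus: "said" is always followed by "she".
corpus = ("the nurse said she was tired . "
          "the nurse said she would help .").split()

# For each word, count the words that follow it (a bigram model).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

print(following["said"].most_common())  # [('she', 2)]: the skew, learned verbatim
```

Large language models are vastly more sophisticated, but the underlying dependence on the training distribution is the same.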

Question 6: Do you think there would be value in studying the quality of multiple choice and discussion questions created by ChatGPT vs. ones created by faculty familiar with the subject matter?

Kentaro Toyama: As a general rule, I think it’s important for us to understand the specifics of what systems like ChatGPT can do, compared with expert capability. At the same time, these systems will evolve very quickly. I expect that for some classes of questions, ChatGPT is already as good as many experts; for others, I’d guess it will be just as good within a few years. There may be some hold-out cases, e.g., where the human expertise is rare and not well-documented, or where the questions require logical deduction of a kind that ChatGPT is known to be weak at.

Question 7: Is ChatGPT capable of creating a parallel information corpus that could become another standard reference?

Kentaro Toyama: ChatGPT can definitely create corpora of text easily. The question is what happens if such corpora are used as part or all of the training data for other AI systems. My guess is that for the most part, such corpora will NOT go too far beyond the original (human) data, but that if this process is repeated over and over, AI systems will begin to develop their own unique subcultures of language, in a way that might be similar to how human language evolves over time. I heard my colleague at the University of Michigan, Rada Mihalcea, say that she felt the internet will soon be overrun by content generated by AI, and that we all need to rush to capture data that is actually human-generated.

Question 8: What steps are being taken to ensure that large language models like ChatGPT are being developed and used ethically, particularly in industries such as marketing and advertising where they could potentially be used to manipulate consumer behavior?

Kentaro Toyama: This is a very important question. AI is like the nuclear power of our age; if we don’t regulate it well, chaos, death, and destruction are a very real possibility. (A quick Google search reveals that I’m far from the only one who believes this.) While many people and organizations are working on this issue both within and adjacent to the technology industry, I haven’t yet seen significant traction among policymakers.  One of the key challenges is that regulation tends to be seen as a rein on innovation, and at least in the United States, there is a great fear that such reins will impede the country’s ability to keep up with rival nations. I suspect that some significant crisis or disaster is required before global society begins to regulate AI seriously.

Question 9: Can you talk about the ethical considerations around authorship and related issues when using ChatGPT to write academic papers, for both students and faculty?

Kentaro Toyama: Especially within academia, but also in the world at large, I think we need to develop both laws and ethical norms for the appropriate use and citation when using technologies like ChatGPT. In fact, I’d argue that we should immediately adopt a norm whereby any content generated wholly by, partly by, or with the assistance of AI needs to be explicitly and prominently indicated as such. (I acknowledge there’s a question about what counts as “assistance of AI.”)  These could be analogous to norms around citation (i.e., scholarly attribution), but probably with a finer-grained system of how things should be attributed. As readers, viewers, and audiences – we all deserve to know exactly what portion of creative work was generated by computer.

Question 10: My take is twofold: first, that ChatGPT answers questions directly (when you ask for a solution, it does not teach you how to find it); and second, that it is a first version, far from perfect. Are those good takeaways?

Kentaro Toyama: What ChatGPT and other AI technologies based on modern neural networks / deep learning / machine learning are good at is returning responses that seem like those produced by people. That ability – to produce human-like responses – has eluded computer scientists for decades, so it is a very real and incredible advance. But, human responses can themselves be flawed, and current systems have still not incorporated logical deduction to the extent they eventually could; so where accuracy is concerned, it’s best to remain skeptical.

Question 11: What are the legal ramifications of a ChatGPT user having AI create something such as a book, entirely through the program, and publishing it for the user’s profit? Who owns the content/product/profit?

Kentaro Toyama: A lot of that is still to be figured out by legislators and the courts. In the United States, there was an interesting recent case where someone had written a comic book with all the graphics generated by Midjourney, an AI system. The question was whether it was copyrightable. The U.S. Copyright Office ruled that the images could not be copyrighted, because copyright is meant for human creations. But the author still held the copyright to the story (which they thought up), as well as to the structure of the comic – i.e., how the images fit together on the page.

Question 12: Can you talk about the issue of plagiarism in regard to ChatGPT?

William Black: I think, first off, if it is used, its place is in the acknowledgments (not as a co-author), as if it were Python or NumPy: a tool that you used. Secondly, so long as it's not directly copying something, it's just doing what we do: assimilating information we've encountered over our lives and using it to create new content. So generally speaking, I don't think it's an issue, so long as it's cited as the helper it is.

Question 13: What are the implications that you foresee for the "white collar" jobs, managing jobs, and even leadership positions?

Kentaro Toyama: Basically, any job that could be done entirely through the creation and processing of information (e.g., writing of various kinds, coding, illustration, music composition) is prone to significant disruption through ChatGPT and related technologies. This doesn’t mean that all aspects of any one job will be done entirely by computer, but that the need for white-collar human labor will decrease. (See my response above to a question about replacing workers – search for “replace” in this document.)

What’s unclear is what aspects of jobs people are willing to have done by computer. For example, will the radiological diagnosis of medical conditions be replaced by computers? That depends on whether most people would be comfortable having diagnoses done without the significant involvement of a trained human physician. We might be OK with it, but we also might not be – until it’s tried at a large scale, it’s hard to know for sure, and so far medicine hasn’t gone in that direction, even though the technology for much radiological image analysis has gotten better and better. On the other hand, in the early days of eCommerce, there were a couple of years when it wasn’t clear if most customers would be willing to part with their credit card numbers online, given the potential for fraud. But, within a couple of years, consumers got over that fear, and now we’re all happily doing business with Jeff Bezos.

Question 14: What will prevent the huge amount of data generated from social media and other data platforms from getting on the radar of ChatGPT and the like?

Kentaro Toyama: First, ChatGPT and related technologies have all already been trained on a lot of public social media data, though the specifics are murky. As to future social media data, I see only two key ways to limit how much of it is used to train ChatGPT and other systems: (1) regulation by governments – for example, Europe’s GDPR already likely inhibits some publicly viewable social media content from being used for some forms of AI training (though violations of those rules are probably difficult to detect); and (2) norms set by the tech sector at large – for example, the group behind Common Crawl, a large corpus of text from the internet (used also by ChatGPT), explicitly avoids webpages that ask not to be crawled. Influential groups like that might cause other technologists to avoid some data.

One question is: why should we care? At some level, I’m more comfortable having AI systems use my social media data as part of a larger aggregated set of training data than I am having companies buy and sell data that is specifically about me. The former is being used primarily to help AI systems learn what human communication is like in general; the latter is being used to extract attention and money from me. Of course, if AI systems are being used to develop models specific to me, that would also be problematic.
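The “ask not to be crawled” mechanism mentioned above is the robots.txt convention, and honoring it takes only Python's standard library. A minimal sketch (an illustration added here; the site and crawler name are hypothetical):

```python
# Minimal sketch: consult a site's robots.txt before crawling it, using
# Python's standard library. The site and user-agent name are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawling rules

if rp.can_fetch("MyResearchBot", "https://example.com/posts/123"):
    print("robots.txt permits crawling this page")
else:
    print("The site asks not to be crawled here; skip it")
```

Norms like Common Crawl's amount to an industry-wide agreement to run exactly this kind of check.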

Question 15: What new jobs will be created by the development of AI?

William Black: Check out this LinkedIn post for 100 examples.

Question 16: Do you foresee AI technology lowering the barriers to entry for jobs in data science and data engineering? Could a lower barrier to entry then mean a lower salary?

Kentaro Toyama: I see these technologies lowering the bar to entry for some jobs, but raising it for others. The bar is lowered for jobs in which human workers provide some form of human judgment, while the AI does expert tasks previously done only by people. For example, some forms of legal contracts could be written primarily by AI but still require a human lawyer to check for overall fit with the desired legal goal. It could be that the human lawyers who do that task can be less experienced than a lawyer who needed to write the contract from scratch. That’s lowering the bar.

But it’s also possible, in the same industry, that some classes of work have the bar raised for entry. Some legal contracts might require highly specialized, experienced human lawyerly input to guide an AI system to draft the contract in the first place, and the post-hoc checking would also require expert eyes. In those cases, the previous job of writing a rough draft, which senior lawyers might have given to junior lawyers, would now be done by AI. So, only senior lawyers would be needed. The bar has been raised.

It’s true that in general, lowering the bar reduces salaries and raising the bar increases them. Either way, I believe it leads to less meaningful work (or no work) for more people, while a small minority gets the best jobs, using AI to accomplish a lot.

Question 17: A question for Kentaro. You outlined a concept of machines talking to machines. I can easily envision an AI-written court case being judged by an AI judge. In that case, the ruling will impact a human. Do you agree with such a vision? What is your comment?

Kentaro Toyama: I fear a world in which computers make more decisions with minimal human oversight. In some parts of our world, this is already happening, as when, for example, chatbot customer service is triaging incoming cases to decide which ones deserve human help. As it is, we hate those systems, don’t we? Your case of legal arguments and rulings made by AI is much higher stakes, and the issues will be profound. Computer-based decision-making exacerbates the problems that Kafka and Orwell warned us about – computer decisions will be even more opaque and rigid than human ones. Meanwhile, these systems might seduce the larger society through various efficiencies; though most people might be happy that things go more quickly, others will end up in an AI-based bureaucratic hell that is inescapably rigid, even when it’s wrong or heartless. At the (no-longer-science-fiction) extreme, autonomous weapons that make decisions on their own about whom and when to kill are one potential outcome.

Once we turn over important decisions to computers, the other problem is that our own ability to make decisions will atrophy, and we might end up like the infantile people in WALL-E who ultimately can’t take care of themselves. (That movie offers a cute version of a scary outcome, and even the cute version is pretty scary.)

Question 18: Would you consider AI technologies as the beginning of a new age in the informatics field?

Kentaro Toyama: AI itself has been worked on in earnest for several decades, but in the last ten years, we have seen advances that have turned a corner. I think people like Bill Gates, who say that this generation of AI is ushering in a new era of technology, are right. It’s not too crazy to wonder when the Singularity will come – the moment when AI becomes equal to human intelligence in all ways and, from that point on, is able to make itself smarter and smarter and smarter.

Question 19: As an educator, how can we identify students who have used ChatGPT to answer assignments? I mean, how can we keep this in check?

William Black: Though you can perhaps never detect it perfectly, I think one-on-one discussions with students would be most effective.

Question 20: Can ChatGPT replace programmers in the future?

Kentaro Toyama: Some, possibly most, but not all. Just as in manufacturing, where there is still a need for some human engineers to look after the robots, and also a demand for handmade goods, there will likely be ongoing demand for some human programming.  However, in the longer term, I do think all work that is basically about thinking and information processing (including programming) will be better done by AI than by even the best people. Just like with chess.

Question 21: Does ChatGPT install anything on your computer when you use it?

William Black: No. ChatGPT runs on remote servers and is used entirely through your web browser; nothing is installed on your computer.

Question 22: I recently saw that ChatGPT was successfully installed on a 1987 computer. What are the chances this could have been achieved at that time, and how would it have changed the present? Can you give us your opinion?

Kentaro Toyama: Well, running ChatGPT on a 1987 computer is different from training ChatGPT on a 1987 computer – the latter might still not be possible in a timeframe that makes sense. It might have taken 40 years – i.e., until now – for a 1987 computer system to train ChatGPT. One of the interesting things about current AI is that the core ideas of neural nets and how to train them have been around since at least the 1950s. Though we have also made many advances in algorithmic understanding that are undoubtedly important, what has really enabled ChatGPT is increases in computing speed and the immense amount of human-generated public data (i.e., the internet). So, I’m not convinced that we could have had ChatGPT in 1987 if we were limited to the hardware and digitized data sources we had then.
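To give a sense of scale, here is a rough back-of-envelope calculation (the figures are published estimates added here, not from the presenters): training GPT-3 reportedly took about 3.14 × 10^23 floating-point operations, while a top 1987 supercomputer, the Cray-2, peaked near 1.9 GFLOPS:

```python
# Rough back-of-envelope: how long would GPT-3-scale training take on the
# fastest 1987 hardware? Both figures are approximate published estimates.
TRAIN_FLOPS = 3.14e23  # estimated FLOPs to train GPT-3 (Brown et al., 2020)
CRAY2_FLOPS = 1.9e9    # approximate peak of a Cray-2 supercomputer, ca. 1987

seconds = TRAIN_FLOPS / CRAY2_FLOPS
years = seconds / (3600 * 24 * 365)
print(f"~{years:,.0f} years")  # on the order of five million years
```

Even allowing for orders of magnitude of error, the gap underscores the point above: 1987 hardware ruled out training anything like ChatGPT in any sensible timeframe.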

Question 23: Will AI platforms such as ChatGPT collect, classify, and archive each user's data to provide more targeted and intelligent help, and if so, how?

Kentaro Toyama: It seems very likely that there will be tailorable versions of ChatGPT, which can drink in all of our own personal data (e.g., old emails, text messages, Facebook posts) and then incorporate that knowledge into their answers.

William Black: It remembers what was said earlier in a conversation; this turns out to be quite useful for prepping it to answer in a certain way (e.g., "answer this question keeping in mind ...").
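That front-loading of context maps directly onto the system message in the OpenAI chat API. A minimal sketch, assuming the OpenAI Python library as it existed in 2023 (the key and prompts are placeholders):

```python
# Minimal sketch (OpenAI Python library, ca. 2023): supply context up front
# via a system message so every subsequent answer is framed accordingly.
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The context the model should "keep in mind" while answering.
        {"role": "system",
         "content": "Answer as a patient tutor for first-year statistics students."},
        {"role": "user", "content": "Explain what a p-value is."},
    ],
)
print(response["choices"][0]["message"]["content"])
```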

Question 24: Do you foresee a future in which professors are busy reading ChatGPT-generated work submitted by students? Can you easily tell the difference? How would you prevent your students from keeping you, a human, busy reading machine-generated articles?

William Black: Have ChatGPT read the ChatGPT-generated work! haha But seriously, I wouldn't want to ask questions that could be wholly answered by ChatGPT. So much of this, I believe, will be in changing the types of questions we ask / the types of assignments we give students.
