Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect

An AI robot guest speaker in front of a green chalkboard. Eliza Doolittle is to the left of the robot.

One popular AI-in-the-classroom idea is using ChatGPT or Google Gemini as an AI guest speaker with students.¹ The idea is that students ask questions, the teacher types them into an AI Large Language Model (LLM) chatbot, and the chatbot answers as a person or concept in a live classroom setting.

Suggested AI guest speakers include a historically famous person, a character from a book, someone with a specific job, someone from a different place or time, an animal, an object, or a concept such as the Water Cycle.

Talking To The Dead

Before addressing this strategy through a pedagogical lens, let’s address using historical figures as AI guest speakers.

Students study George Washington and Thomas Jefferson, both of whom enslaved people. Twelve of the first eighteen presidents were enslavers. Is it appropriate to “interview” enslavers?

Setting enslavers aside, a history teacher might use Harriet Tubman, Anne Frank, Martin Luther King, Shirley Chisholm, and other historical figures as AI guest speakers.

Should AI voice deceased people from marginalized and oppressed communities? OpenAI’s board is exclusively white and male. One OpenAI board member once said men have more aptitude for science than women. The New York Times documented that the industry’s prominent backers are mostly white and exclusively male.

Harriet Tubman photographed in 1885.
Please do not invite an AI Harriet Tubman into your classroom. Image source: Wikimedia Commons.

Beyond these concerns, computers voicing the dead does not sit well (for lack of a better term). The recent backlash to AI-generated versions of George Carlin’s and Robin Williams’s voices demonstrates this.

George Carlin performing stand up comedy in April 2008.
What would George Carlin, who famously disdained the wealthy and powerful, think of his voice being generated by AI? Image source: Wikimedia Commons.

The Eliza Effect

Concerns about giving voice to the dead do not apply to AI guest speakers who are someone with a specific job, an animal, an object, or a concept such as the Water Cycle. But is it sound pedagogy? Let’s consider what teachers can learn about students and AI chatbots from the Eliza Effect.

The Eliza Effect is the tendency to project human characteristics onto computers that generate text. Its name comes from Eliza, a therapist chatbot computer scientist Joseph Weizenbaum created in the 1960s. Weizenbaum named the chatbot after Eliza Doolittle in Pygmalion.
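To see how little machinery sits behind the effect Weizenbaum observed, here is a minimal Eliza-style sketch in Python. The rules below are hypothetical stand-ins of my own, not Weizenbaum’s original DOCTOR script: the program only spots keywords and reflects pronouns back, yet its replies can feel attentive.

```python
import re

# Hypothetical Eliza-style rules (not Weizenbaum's original script).
# The "therapist" only matches keywords and reflects pronouns back;
# there is no understanding behind the replies.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(eliza("I feel anxious about my job"))  # Why do you feel anxious about your job?
```

Every reply is a template fill. Nothing in the program models the speaker, yet exchanges like this were enough to convince Weizenbaum’s users that the machine understood them.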

Photo of Julie Andrews and Rex Harrison from My Fair Lady. Eliza Doolittle, the flower girl, meets Professor Henry Higgins.
Eliza the chatbot was named for Eliza Doolittle in Pygmalion. Image source: Wikimedia Commons.

To Weizenbaum’s horror, people who interacted with Eliza believed it was human. As a profile of Weizenbaum in The Guardian states,

Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of “judgment” that are “deserving of credibility.” “A certain danger lurks there,” he wrote. (Emphasis added by the blog post author.)

Joseph Weizenbaum (Professor emeritus of computer science at MIT). Location: Balcony of his apartment in Berlin, Germany.
Teachers should consider what Joseph Weizenbaum learned from Eliza. Image Source: Wikimedia Commons.

This anthropomorphism, and the belief that chatbots possess judgment and credibility, has huge pedagogical implications.

The Guardian also quotes Colin Fraser, a data scientist at Meta, who says, “The technology is designed to trick you, to make you think you’re talking to someone who’s not actually there.”

That quote should give any teacher pause before using AI chatbots with children.

Inaccuracies and Bias

The Eliza Effect tells us that students may anthropomorphize chatbots and believe what they say. The first part is problematic, but the second is only a problem if the chatbots are inaccurate or biased.

Let’s look at some information about AI chatbot accuracy and bias:

  • “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be,” says Robotics researcher and AI expert Rodney Brooks.
  • “We’re not in a situation where you can just trust the model output,” says Eli Collins, vice president of product management at Google DeepMind.
  • “It is important to understand that Bard [now Gemini] is not intended to be a tool that provides specific or factual answers. For that purpose, Google search is the best tool,” says Adi Mayrav Gilady, a product manager in Google’s research division.
  • Vectara, a start-up founded by former Google employees, estimates that “even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.” I am sure I gave students materials with errors when I taught high school Social Studies. Nothing is perfect. Having said that, if someone gave me a curricular resource and told me that 3 percent of it was inaccurate, I would not have shared it with students.
  • Disinformation researchers used ChatGPT to produce convincing text that repeated conspiracy theories and misleading narratives, according to the New York Times.
  • A Stanford study found ChatGPT and Bard (now Gemini) answer medical questions with racist, debunked theories that harm Black patients. 
  • ChatGPT was found to replicate gender bias in recommendation letters.
  • The Center for Science in the Public Interest says, “ChatGPT is amazing but beware its hallucinations.”
  • Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away, according to Fortune.
  • ChatGPT could be used for good, but like many other AI models, it’s rife with racist and discriminatory bias, according to Business Insider.

Common Sense Media Evaluation of ChatGPT and Gemini

There is evidence that ChatGPT, Gemini, and other LLM AI chatbots are inaccurate and amplify bias. 

But what does Common Sense Media, an organization that looks at edtech apps through a pedagogical lens, say?

Highlights of Common Sense Media’s October 2023 review of ChatGPT² include:

  • “ChatGPT’s false information can shape our worldview. ChatGPT can generate or enable false information in a few ways: from “hallucinations”—an informal term used to describe the false content or claims that are often output by generative AI tools; by reproducing misinformation and disinformation; and by reinforcing unfair biases. Because OpenAI’s attempts to limit these are brittle, false information is being generated at an alarming speed.”
  • “Because ChatGPT isn’t factually accurate by design, it can and does get things wrong.”
  • In OpenAI’s own words (opens PDF), GPT-4 has a “tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly.” 

Highlights of Common Sense Media’s November 2023 review of Bard (Remember that Bard is now Gemini) include:

  • “Bard’s false information can shape our worldview. Bard can generate or enable false information in a few ways: from “hallucinations”—an informal term used to describe the false content or claims that are often output by generative AI tools; by reproducing misinformation and disinformation; and by reinforcing unfair biases. Because Google’s attempts to limit these are brittle, false information is being generated at an alarming speed.”
  • “Because Bard isn’t always factually accurate, it can and does get things wrong.”
  • In Google’s own words, “Bard’s responses might be inaccurate, especially when asked about complex or factual topics” further noting that “LLMs are not fully capable yet of distinguishing between what is accurate and inaccurate information.”

Effect On Pedagogy

The confluence of The Eliza Effect and chatbot inaccuracy and bias impacts pedagogy. Imagine an AI guest speaker in a live classroom setting. The guest speaker generates inaccurate or biased text. Is it hard to imagine a student saying, “But ChatGPT said it,” in response to a teacher correcting a chatbot? What about middle school students who are appropriately developmentally oppositional? Is it hard to imagine them siding with the computer?

Do you know anyone who believes something that is not true that they read on the internet? How are students any different?

A children's book titled "But I Read It On The Internet!"
But I read it on the internet! Image source: Know Your Meme.

What we know about The Eliza Effect tells teachers that students may ascribe judgment and credibility to AI guest speakers. Should teachers risk students conferring those qualities on inaccurate or biased text in real time?

Additionally, there is a parental consent issue. As Common Sense Media says, “Parental permission is required, but this isn’t obvious. Educators who are using ChatGPT in their classrooms need to know that children must be age 13, and anyone under 18 must have a parent’s or legal guardian’s permission to use ChatGPT.”

Google says, “You still can’t access the Gemini web app with a Google Account managed by Family Link or with a Google Workspace for Education account designated as under the age of 18.”

Is it appropriate to use these tools with students under 18 through a teacher in a live classroom without parental consent?

What About Online Reasoning And Critical Thinking?

One bit of pushback I have received from teachers about these AI guest speaker concerns is that students need to interact with chatbots to build their critical thinking, online reasoning, and digital citizenship skills.

I do not have an answer to that pushback. I have questions.

  • What are you currently doing to address these concerns? 
  • Is it working? If so, why would students not transfer those skills to chatbot-generated text?
  • If it does not work, why would it work with chatbot-generated text? Would that make students more susceptible to misunderstandings, considering The Eliza Effect?

Alternatives To The AI Guest Speaker

As a former Social Studies teacher, I think first of primary source documents. For example, the Diary of Anne Frank suffices; there is no need for AI to replicate her. There are many online resources for primary source documents, such as the Digital Inquiry Group’s free Reading Like a Historian resource.

Anne Frank sitting at a desk with a book open in December 1941.
The Diary of Anne Frank is the best example of an alternative to an AI guest speaker. Image source: Wikimedia Commons.

Rather than having chatbots take on roles, why not have students do it themselves to learn perspective? Chapter 4 of Action Strategies for Deepening Comprehension by Dr. Jeffrey D. Wilhelm details a strategy called “hotseating” that helps students deepen their understanding of characters and concepts.

Students can create mini-podcasts where they interview classmates playing roles. Soundtrap is a web-based app for creating and editing audio. So is Adobe Podcast, which is currently in beta. Adobe Podcast uses AI to enhance sound quality.

As for online reasoning and critical thinking, have students evaluate the ethics of AI chatbots rather than using them through a teacher.

Three resources for this are:

Continuing The Conversation

Stay tuned next week for a blog post about a 100 percent ethical AI app. If that sounds like an absolute statement out of step with the tone and tenor of this post, wait until next week.

What do you think of pedagogy and the AI guest speaker? Do you see benefits to using this approach with students? How will The Eliza Effect affect your use of chatbots with students? Comment below or Tweet me at @TomEMullaney.

Does your school or conference need a tech-forward educator who critically evaluates AI? Reach out on Twitter or email mistermullaney@gmail.com.

Blog Post Image: The blog post image is a mashup of three images. The background is Education and reading concept by Sensay on Adobe Stock. Eliza Doolittle is from Wikimedia Commons. The robot is White male cyborg thinking and touching his head 3D rendering by sdecoret on Adobe Stock.

AI Disclosure:

I wrote this blog post without the use of any generative AI. That means:

  • I developed the idea for the post without using generative AI.
  • I wrote an outline for this post without the assistance of generative AI.
  • I wrote the post from the outline without the use of generative AI.
  • I edited this post without the assistance of any generative AI. I used Grammarly to assist in editing, with Grammarly GO (Grammarly’s generative AI feature) turned off.
  • I did not use any WordPress AI features to write this post.
  1. Edtech influencers I deeply respect have shared this strategy on social media. I am not sharing their names because this post is a critique of an instructional strategy, not of individual edtech influencers.
  2. As of January 29, 2024, Common Sense Media is not an unbiased evaluator of ChatGPT because it has entered into a partnership with OpenAI.

15 responses to “Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect”

  1. Talking AI On The Shukes & Giff Podcast – Tom Mullaney

    […] My Eliza Effect and AI Pedagogy blog post. […]

  2. Dennis

    Tom, I appreciate reading a thought-provoking piece like this. And I agree about primary source documents. I believe this also calls for putting a premium on news literacy as well as getting people in face-to-face dialogue without a medium dictating “the truth.”

    1. Tom Mullaney

      Hey Dennis, thank you for reading! Misinformation is an issue. Chatbot-generated text will only exacerbate it.

  3. nholt

    Thanks, Tom. That’s really thoughtful and gave me several ideas to share with the faculty in the college of Ed. at the University of Georgia.

    1. Tom Mullaney

      Thank you! I am happy you found it helpful.

  4. Rufus

    Thanks, Tom. Interesting and thought-provoking critique. I would only take issue with why, in the second paragraph of Talking To The Dead, you question why anyone should talk to enslavers. Aside from the inauthentic nature of the interview (with AI) and obviously the age of your students, examining why people did what they did is a fascinating area. It would expand from individuals to the culture and policies of the time. Depriving students of exposure to these sorts of issues risks missing a genuine and valuable learning opportunity. Thanks again.

    1. Tom Mullaney

      Thank you for the comment. I am not comfortable with students “asking” enslavers why they committed atrocities. Let’s set that aside. I think your comment is an example of The Eliza Effect. Chatbots are inaccurate and have no insight into the motivations of enslavers. They predict the next string of text. That’s it. No more. Assuming they have insight is personification, The Eliza Effect in action.

  5. The 100 Percent Ethical AI App – Tom Mullaney

    […] concerns about AI include the amplification of bias. I do not think there is bias in the AutoDraw data set, but you can judge for yourself by viewing […]

  6. Talking AI With GEG Italia – Tom Mullaney

    […] In the episode, we talked about some things I said in the blog post, Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect. […]

  7. Dean Shareski

    Excellent and well-thought-out argument. My initial takeaway is to begin to do a better job of understanding how GPTs are best used.

    1. Tom Mullaney

      Thank you, Dean! So much of the conversation about AI and education is based on a fundamental misunderstanding of what LLMs are and what they do.

  8. Teachers: Follow These Experts To Learn AI – Tom Mullaney

    […] these experts has helped me understand AI. Their work has influenced my posts about using a chatbot as a guest speaker and the 100% ethical AI app. With one exception, these are experts on AI, not K-12 education. Get […]

  9. AI Vocabulary for Teachers – Tom Mullaney

    […] As I wrote in an earlier blog post, “The Eliza Effect is the tendency to project human characteristics onto computers that generate text.” For more about The Eliza Effect and its implications for pedagogy, please read Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect. […]

  10. […] Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect – Tom Mullan… […]

  11. Talking AI With BustED Pencils – Tom Mullaney

    […] Dr. Johnny Lupinacci on their fantastic YouTube show and podcast, BustED Pencils. We spoke about The Eliza Effect and my blog post, AI Vocabulary For […]
