About this paper symposium
Panel information
Panel 24. Technology, Media & Child Development
Paper #1
How Children Understand AI: Children's Mental Models of Visual and Text-Based Generative AI Models
| Author information | Role |
|---|---|
| Eliza Kosoy, UC Berkeley, United States | Presenting author |
| Anoop Sinha, Google DeepMind, United States | Non-presenting author |
| Soojin Jeong, Google DeepMind, United States | Non-presenting author |
| Tanya Kralkic, Google DeepMind, United States | Non-presenting author |
Abstract
Artificial intelligence (AI) is increasingly present in daily life, affecting areas such as education, healthcare, and social media. As AI systems such as ChatGPT and DALL-E become more prevalent, understanding how children form and update mental models of AI is crucial for shaping future technology. While research has examined how children perceive robots, little work has examined how they view generative AI. This study explores how children aged 5-12 perceive and interact with generative AI, contributing to the design of AI tools that align with children's evolving mental models. Two studies were conducted with a total of 33 children aged 5-12 (Study 1: 18 participants; Study 2: 15 participants). Participants were recruited from local children's museums in the Bay Area. The studies were approved by UC Berkeley's IRB and pre-registered on aspredicted.com. Participants were introduced to AI models, including text-based models like ChatGPT and visual-based models like DALL-E, and their perceptions were surveyed before and after interactions with these systems. In both studies, children were asked a series of binary questions, such as "Is AI friendly or scary?" and "Can AI feel emotions like happy or sad?" Before interacting with AI, 67% of children found AI "friendly," and this increased to 85% after using the models. Pre-interaction, 47% of children believed that AI could have emotions, but this decreased to 33% after interacting with the systems. Similarly, fewer children thought AI could "get upset" after using it, dropping from 60% to 33% (Figure 2). Additionally, children were observed while interacting with the AI systems to determine what kinds of queries they made. When using ChatGPT, 63% of children's queries concerned real-world, known concepts, whereas when using DALL-E, 28% of children's queries were imaginative, focusing on things that do not exist in the real world (Figure 1). This research highlights the generally positive outlook children have toward AI, as well as how their mental models shift after interacting with generative AI systems. Children tend to view AI as friendly and beneficial, though their perception of AI having human-like emotions diminishes after direct engagement. Additionally, children's curiosity appears to be more engaged by visual-based AI models, with which they are more likely to explore imaginative and novel ideas than with text-based models. These findings suggest that children's interactions with generative AI can shape their mental models in dynamic ways. Future AI tools designed for children should foster this curiosity and creativity while addressing gaps in children's understanding of AI. By better aligning AI design with children's evolving mental models, we can create tools that are not only engaging but also help children develop more nuanced understandings of how AI works and its potential role in their lives.
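The pre/post percentages above come from paired binary answers given by the same children before and after using the models. Below is a minimal sketch of how such paired responses can be tallied; the column names and toy data are hypothetical, and the exact McNemar check is an illustrative assumption, not an analysis the abstract reports.

```python
# Sketch: tally paired pre/post binary survey responses per child.
# Column names and data are hypothetical; the McNemar check is an
# illustrative addition, not an analysis reported in the abstract.
import pandas as pd
from statsmodels.stats.contingency_tables import mcnemar

responses = pd.DataFrame({
    "child_id":      [1, 2, 3, 4, 5, 6],
    "friendly_pre":  [1, 0, 1, 0, 1, 0],   # 1 = endorsed "AI is friendly" before use
    "friendly_post": [1, 1, 1, 0, 1, 1],   # same belief after using the models
})

print(f"pre:  {100 * responses['friendly_pre'].mean():.0f}% endorsed")
print(f"post: {100 * responses['friendly_post'].mean():.0f}% endorsed")

# 2x2 table of paired outcomes (pre x post), then an exact McNemar test
# on the discordant pairs as one possible way to assess the pre/post shift.
table = pd.crosstab(responses["friendly_pre"], responses["friendly_post"])
table = table.reindex(index=[1, 0], columns=[1, 0], fill_value=0)
result = mcnemar(table.values, exact=True)
print(f"McNemar exact p = {result.pvalue:.3f}")
```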
Paper #2
Children's persistence and collaboration with mistaken robots.
| Author information | Role |
|---|---|
| Dr. Teresa Flanagan, University of Chicago, United States | Presenting author |
| Collin Pitts, Duke University, United States | Non-presenting author |
| Nicholas Georgiou, Yale University, United States | Non-presenting author |
| Brian Scassellati, Yale University, United States | Non-presenting author |
| Tamar Kushnir, Duke University, United States | Non-presenting author |
Abstract
Young children readily engage with robots as collaborative partners: children view robots as agents with minds and emotions (Flanagan et al., 2023), trust helpful robots (Brink & Wellman, 2020), and socialize and play with robots (Bethel et al., 2011). Furthermore, having a robot as a collaborative partner benefits children's motivation and effort in difficult tasks (Chen et al., 2020). It remains an open question, however, how children respond when their robot partners mess up. With human partners, children use various strategies in response to a mistake, such as teaching an ignorant partner (Ziv et al., 2016), forgiving an accidental error (Amir et al., 2021), or protesting a rule violation (Rakoczy et al., 2008). Will the same be true for robots? In this study, we compare children's behaviors in a collaboration with a robot that repeatedly makes either accidental mistakes or intentional obstructions. Four- to 7-year-old children (current N = 39, Mage = 6.42, SD = 1.12, 19 female; data collection ongoing) played a short, collaborative game with a humanoid robot (see Fig. 1). In the game, the child and the robot needed to get a frog across a pond by each pressing their respective button after a countdown to make the frog hop to the next lily pad. After a few practice trials, the experimenter left the room, telling the child that they could come and get her if they were done, needed help playing the game, or wanted to stop. After three successful attempts, the robot repeatedly missed the button and the frog failed to advance further. In the Apologetic condition (randomly assigned), the robot signaled that each failure was accidental by apologizing ("oops, I'm sorry, I missed the button this time"). In the Uncooperative condition, it signaled that each failure was intentional ("ha ha ha, I did not want to press the button this time"). We coded children's behaviors in response to each miss (e.g., forgiving, soothing, teaching). We also measured persistence by recording whether children stopped playing to get the experimenter's help. Preliminary results suggest that children use various strategies in response to a robot's errors, and that these strategies vary by condition (see Fig. 2). Children were more likely to get help from the experimenter if the robot was uncooperative (52%, N = 11/21) than if the robot apologized (17%, N = 3/18), OR = 5.50, p = .027. Behavioral coding supports the idea that children felt more positively toward the apologetic robot: children engaged in cooperative behaviors with the apologetic robot, such as forgiveness (50%, N = 9), soothing (39%, N = 7), and teaching or helping (56%, N = 10), but not with the uncooperative robot (forgiveness: 10%, N = 2; soothing: 0%, N = 0; teaching/helping: 10%, N = 2). Together, these findings suggest that children disengage and seek help when robots are antisocial, but maintain a collaborative partnership with robots that signal prosocial qualities. This research has important implications for children's motivation and engagement with technologies, particularly in education.
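As a quick arithmetic check on the reported help-seeking effect (11 of 21 children in the Uncooperative condition vs. 3 of 18 in the Apologetic condition), the sketch below reproduces the sample odds ratio of 5.50. The use of Fisher's exact test here is an assumption for illustration; the abstract does not say which test produced p = .027.

```python
# Counts from the abstract: children who fetched the experimenter vs. not,
# by condition. Fisher's exact test is an illustrative assumption, not
# necessarily the authors' analysis.
from scipy.stats import fisher_exact

table = [
    [11, 21 - 11],  # Uncooperative: 11 sought help, 10 did not
    [3, 18 - 3],    # Apologetic:     3 sought help, 15 did not
]

odds_ratio, p_value = fisher_exact(table)
print(f"sample OR = {odds_ratio:.2f}")    # (11*15)/(10*3) = 5.50, matching the abstract
print(f"Fisher exact p = {p_value:.3f}")
```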
Paper #3
Virtuous Ignorance in the Technological Era: Children's Intuitions about Smart Speaker Knowledge.
| Author information | Role |
|---|---|
| Dr. Lauren Girouard-Hallam, University of Michigan, United States | Presenting author |
| Allison J. Williams, Boston University, United States | Non-presenting author |
| Judith H. Danovitch, University of Louisville, United States | Non-presenting author |
| Kathleen Corriveau, Boston University, United States | Non-presenting author |
Abstract
Children frequently rely on the internet and internet-based devices to obtain answers to questions they have about the world around them (e.g., Girouard-Hallam & Danovitch, 2022a; Lovato et al., 2019; Wang et al., 2019). Recent research suggests that by age 9, children recognize that it is better for people to say that they cannot know the answer to questions about exact numbers (e.g., the number of blades of grass in New York State) or unpredictable future events (e.g., the name of the next planet; Kominsky et al., 2016). This "virtuous ignorance," although established in the literature on children's trust in human informants, has not previously been tested with technological agents. Thus, this study examines children's beliefs about the virtuous ignorance of smart speakers when answering questions about exact numbers and future events. Participants included 118 5- to 12-year-olds (Mage = 9.12, range = 5.01-12.91; 59 boys, 59 girls). Parents identified 46% of children as White, 23% as Mixed Race/Ethnicity, 19% as Asian, 4% as Black/African American, and 3% as Hispanic/Latino; 5% did not provide an answer. Participants viewed ten questions from four categories, asked by the researcher, along with responses from two competing voice assistants, one of which was virtuously ignorant (e.g., "I do not know, because that cannot be answered precisely") and one of which provided an exact answer (see Table 1 for examples). Children were asked which of the two smart speakers they believed was the better smart speaker. We used a cross-classified two-level generalized mixed-effects model to predict children's judgments about which smart speaker had given the better answer from children's age and the question category. There were simple main effects of age and category, subsumed by a significant two-way interaction between category and age, such that children's belief that a virtuously ignorant response was better increased with age for the known number and known future questions (see Figure 1). Interestingly, children's responses to unknown number items differed significantly from their responses to unknown future items (p < .001), such that children across ages selected the virtuously ignorant smart speaker more frequently for unknown future questions than for unknown number questions. Children did not select the virtuously ignorant speaker in unknown number trials at rates above 50% until age 10. By age 10, children selected the virtuously ignorant smart speaker for unknown future items in nearly 100% of trials, but still selected the exact-answer smart speaker in 33% of trials when the question was about unknown numbers. By age 7, children begin to distinguish between cases where a smart speaker should be able to provide an exact answer and cases where a virtuously ignorant response is more appropriate. However, children do not have clear intuitions about the superiority of virtuously ignorant responses to questions about exact numbers from smart speakers until age 9, suggesting that children under 9 expect a smart speaker to provide an exact answer to unknowable questions at least some of the time.
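For readers unfamiliar with cross-classified models, the sketch below shows one way a child-by-item logistic mixed model of this kind could be specified in Python's statsmodels. The data file, column names, crossed grouping factors, and variational-Bayes fit are illustrative assumptions; the abstract does not specify the authors' software or exact model structure beyond the description above.

```python
# Illustrative sketch of a cross-classified logistic mixed model:
# children and question items are treated as crossed grouping factors,
# and the outcome is whether the child chose the virtuously ignorant
# smart speaker. The data file and column names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

trials = pd.read_csv("smart_speaker_trials.csv")  # hypothetical long-format data
# expected columns: chose_vi (0/1), age_years, category, child_id, item_id

model = BinomialBayesMixedGLM.from_formula(
    "chose_vi ~ age_years * C(category)",   # fixed effects: age x question category
    vc_formulas={
        "child": "0 + C(child_id)",         # random intercepts for children
        "item": "0 + C(item_id)",           # crossed random intercepts for items
    },
    data=trials,
)
result = model.fit_vb()   # variational Bayes fit; fit_map() is an alternative
print(result.summary())
```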
Session Title
Children and their (imperfect) bots: New evidence for children's judgments of technologies and their limitations.
Submission Type
Paper Symposium