About this paper symposium
Panel information
Panel 32. Solicited Content: Expanded Learning & Out-of-School Time
Paper #1

Investigating Spatial and Math Language and Skill Development in Educational YouTube Videos for 3-to-5-Year-Olds

Author information | Role
---|---
Corinne A. Bower, Ph.D., California State University, Los Angeles, United States | Presenting author |
Marie Lassaigne, California State University, Los Angeles, United States | Non-presenting author |
Elizabeth Plascencia, California State University, Los Angeles, United States | Non-presenting author |
Ani Avakian, California State University, Los Angeles, United States | Non-presenting author |
Wilder VonSchonfeldt, California State University, Los Angeles, United States | Non-presenting author |
Rebecca Dore, The Ohio State University, United States | Non-presenting author |
Alex Bonus, The Ohio State University, United States | Non-presenting author |

Abstract

Over the past decade, children have gained increasing access to online video content (Hourcade et al., 2015), and this access now extends to children as young as 12 months (Nansen & Jayemanne, 2016). Because educational media is often viewed (and even encouraged) as supplementary to a child’s formal education (Gözen et al., 2021; Puspita et al., 2022), this work aimed to evaluate the quality of that content. Specifically, the current study aimed to 1) examine the quality of the educational language used in YouTube videos that are watched by 3- to 5-year-olds and labeled as ‘educational’, and 2) examine the types of skills taught in these videos. Spatial language describes how a scene or object relates to its location in space (e.g., the large triangle goes on top of the smaller square) and is correlated with spatial skill development (Bower et al., 2020; Polinsky et al., 2017; Szechter & Liben, 2004), which in turn is associated with later STEM achievement (Wai et al., 2009). Moreover, math language (e.g., counting and words such as many, most, and few) is an integral part of the acquisition of early math skills (Purpura et al., 2017). However, there are differences in the types and frequency of math language that children hear in formal educational settings (Rudd et al., 2008). Thus, we examined the proportion of spatial and math language in these ‘educational’ YouTube videos. Our second aim was to code the broad types of skills taught in these videos, including conceptual (e.g., reading, counting), social (e.g., social rules, theory of mind), and practical (e.g., bathing, dressing) skills (AAIDD, 2023).

Participants were U.S. children (N = 232, Mage = 4.27, SD = 1.80, 55.33% boys) whose parents completed a questionnaire asking them to submit links to the three most recent YouTube videos their child had watched. For our preliminary analysis and coding of spatial and math language, we transcribed 57 of the submitted videos that were categorized as educational and assessed the proportion of spatial and math language within each video’s first 15 minutes. For the skill content analysis, we preliminarily coded 47 videos.

Results suggest that, on average, 5% of the language was spatial (SD = 3%) and 2% (SD = 4%) was math. In the skill content analysis, 81% of the videos demonstrated conceptual skills, 86% demonstrated social adaptive skills, and 93% demonstrated practical adaptive skills. Thus, educational online video content is not particularly rich in spatial and math language. Content creators could enhance the educational value of their videos by including more spatial and math language, given their importance in early cognitive development and later STEM achievement. While practical skills were the most common focus in the videos, which aligns with the developmental stage of young viewers, it is promising that a substantial proportion also targeted conceptual and social skills. This balance suggests that educational videos are addressing a broad range of developmental needs, but there remains a clear opportunity to enrich them with language that supports more advanced cognitive skills.
Paper #2

When Technology Meets Tinkering: Promoting Museum Engineering Engagement through Digital Storytelling

Author information | Role
---|---
Dr. Lauren C. Pagano, Ph.D., Northwestern University, United States | Presenting author |
Riley E. George, Loyola University Chicago, United States | Non-presenting author |
David H. Uttal, Northwestern University, United States | Non-presenting author |
Catherine A. Haden, Loyola University Chicago, United States | Non-presenting author |

Abstract

Museum tinkering exhibits offer rich opportunities for families’ engineering engagement and learning (Author et al., 2020). In recent years, many parents and children have been observed using smartphones in museum exhibits, which may affect parent-child interactions and learning processes (Kelly et al., 2023; Author et al., 2024). Positive Technological Development Theory (Bers, 2012) proposes that when used in collaborative, creative ways, technology may support, rather than inhibit, learning. We examine whether digital storytelling, in which children record video narratives for an imagined audience, promotes families’ verbal engagement and engineering talk during and after tinkering. We further explore whether children’s attitudes toward technology and parents’ creation of smartphone recordings relate to engagement and engineering talk.

Forty-one children aged 5 to 10 years (M = 7.07; 44% female; 46% White, 11% Black, 11% Latine, 8% Asian, 24% multiracial) wore chest-mounted cameras while creating a cardboard project in a museum tinkering exhibit. Children were randomly assigned to a control condition (N = 21), in which they were asked to behave normally, or a digital storytelling condition (N = 20), in which they were instructed to “use the camera to talk to your audience”. Researchers tracked whether children had neutral (78%) or excited (22%) feelings about the cameras and whether parents used personal smartphones to take photos/videos during tinkering (73% did not, 27% did). After tinkering, families recorded video reflections about their experience in a nearby digital storytelling exhibit. Families’ tinkering and reflection conversations were transcribed and analyzed for total talk (word count) and references to engineering practices (e.g., planning, testing, redesigning).

As shown in Table 1, children in the digital storytelling condition talked more during tinkering than children in the control condition, F(1, 33) = 13.34, p < .001, and this effect tended to be more pronounced for children excited about the cameras, F(1, 33) = 4.01, p = .05. Children in the digital storytelling condition also made more references to engineering practices than children in the control condition, F(1, 33) = 7.07, p = .012. Although parents’ total talk did not differ by condition, parents who took photos/videos while tinkering talked significantly more overall, F(1, 34) = 12.40, p < .001, and made more references to engineering, F(1, 34) = 11.16, p = .002, than parents who did not take photos/videos. There were no effects of condition, child excitement about the camera, or family photo-taking on parents’ or children’s overall talk during their post-tinkering reflections. A binary logistic regression, χ²(4) = 18.09, p = .001, did, however, indicate that children excited about the cameras (67%) were more likely to refer to engineering practices in their post-tinkering reflections than children whose feelings about the cameras were neutral (4%), Exp(B) = 103.56, p = .003. Further analyses will consider digital storytelling in additional tinkering programs.

Digital storytelling can support children’s communication and engineering talk during tinkering but may be more beneficial for children who already enjoy using technology. Parents who spontaneously create digital stories in museum exhibits similarly talk more about engineering during the activity.
Paper #3

Leveraging Multi-Generational Video Chat as a Source of Spatial Language for Young Children

Author information | Role
---|---
Dr. Jennifer M. Zosh, Ph.D., Penn State University, Brandywine, United States | Presenting author |
Alexus G. Ramirez, University of Maryland, College Park, United States | Non-presenting author |
Victoria Coons, University of Delaware, United States | Non-presenting author |
Roberta Michnick Golinkoff, University of Delaware, United States | Non-presenting author |

Abstract

While many headlines highlight the perils of screen time for today’s families, not all screen time is equal. Video chat, in particular, provides the opportunity for high-quality and responsive interactions between children and adults (e.g., Gaudreau et al., 2020). Therefore, a more nuanced approach to studying how screen time can be leveraged to support development is critical. Here, we explore whether high-quality spatial language input occurs during video chat conversations between children and their grandparents. Spatial language in everyday, in-person interactions varies widely, and higher levels of spatial language generally relate to gains in young children’s spatial skills (e.g., Pruden et al., 2011), with potentially long-lasting impacts in STEM domains (Newcombe, 2010). The current study investigates whether spatial language varies when a grandparent-grandchild dyad engages with different forms of media content over video chat (i.e., photos or a video). We hypothesized that video chat provides an important opportunity for sharing spatial language but that different forms of media content may differentially support various types of spatial language.

Forty-three grandparents (23% male) and their grandchildren between 48 and 72 months of age (grandchildren: 42% male, 91% White) participated in a one-time video chat session. Using a within-subjects design with counterbalanced order, each dyad talked about a video of an unfamiliar animal and three static images of childhood scenes (e.g., a playground). A coder unaware of condition and order coded transcripts for spatial language using a coding scheme previously used in our laboratories, which included sub-categories of spatial language (i.e., spatial dimensions [size], shapes, locations and directions [relative position], etc.).

Preliminary analyses reveal no significant impact of content type (photos or video) on utterances per minute (Mvideo = 24.63, Mphoto = 24.40) or on utterances containing spatial language per minute (Mvideo = 3.43, Mphoto = 3.19). The proportion of utterances containing spatial language relative to the total number of utterances also did not differ significantly across content types (Mvideo = 0.16, Mphoto = 0.14). However, content type significantly affected the types of spatial language used during video chat. Specifically, grandparent-grandchild dyads were more likely to talk about spatial dimensions when engaging with the video than with the photos (Mvideo = 0.32, Mphoto = 0.12), t(20) = 3.35, p < .01. Conversely, the opposite pattern emerged for spatial language about location and direction (Mvideo = 0.45, Mphoto = 0.67), t(20) = 3.21, p < .01.

In summary, our results highlight that video chat provides an important opportunity for sharing high-quality spatial language. Grandparents can assist in this role despite their physical distance, providing important support for children and families. Lastly, different types of video chat experiences may differentially support the use of various types of spatial language, suggesting that it is important to provide varied experiences on video chat. This study is a first step in better understanding how to leverage this technology to not only build relationships but also support key areas of development such as spatial thinking.
Paper #4

Teaching Math Vocabulary Through Storytelling: Comparing AI and Human Partners’ Effectiveness

Author information | Role
---|---
Echo Zexuan Pan, University of Michigan, United States | Presenting author |
Trisha Thomas, Harvard University, United States | Non-presenting author |
Ying Xu, Harvard University, United States | Non-presenting author |

Abstract

Introduction
Learning mathematical vocabulary, such as more and equal, enables children to comprehend and articulate quantitative concepts effectively, laying a foundation for their future academic development and everyday problem-solving skills (Vanluydt et al., 2021). Explicit instruction of mathematical vocabulary can be challenging for young children; embedding such vocabulary within narrative stories is an effective way to make the abstract words more tangible and comprehensible (Hassinger-Das et al., 2015; Kirsch, 2016; Liu et al., 2019). Such a story-based learning approach typically involves collaborative storytelling, in which a knowledgeable adult, such as a teacher or caregiver, guides children to engage in dialogue within a storyline involving the target vocabulary (Coyne et al., 2007). However, children’s engagement with this type of math-focused storytelling may vary depending on their teachers’ or caregivers’ availability, awareness, or skills. Generative artificial intelligence (AI) has the potential to simulate instructional dialogue, thereby presenting intriguing opportunities for children to engage with mathematical language through storytelling with a complementary, digital companion. Emerging research indicates that AI can effectively support children’s language learning by posing story-related questions and providing targeted feedback (Zhang et al., 2024). However, despite this theoretical potential, its actual impact on children’s learning outcomes requires further empirical investigation and validation. To this end, we developed and evaluated a GPT-4-based conversational agent, deployed on a smart speaker, that can co-create stories with young children while teaching mathematical vocabulary. We conducted a randomized controlled trial in which children were assigned to co-create mathematical stories with one of three partners: an AI agent, a human face-to-face (i.e., present human), or a human concealed from their view (i.e., hidden human).

Hypotheses
1. Collaborative storytelling can improve children’s mathematical vocabulary learning outcomes.
2. Children’s learning outcomes remain comparable regardless of which storytelling partner they interact with.

Study Population
A total of 119 children from the Mideastern US (55.46% female; mean age = 6.52 years; age range = 4-9 years) participated in our study.

Methods
We developed a 24-item questionnaire to assess children’s knowledge of six target mathematical vocabulary words (add, subtract, more, half, estimate, sum) along four dimensions: definition, recall, transfer, and practice. The same items were administered at pre- and post-test, with modifications to the nouns used in the scenarios.

Results
Paired t-tests revealed that children performed better on the post-test than on the pre-test, t(118) = 4.19, p < .001, and this improvement was consistent across all dimensions except definition (Figure 1). The observed learning gain was also comparable across conditions (Figure 2). There were no significant differences between children who co-created stories with the AI agent and those who interacted with a present human partner (β = 0.97, p = .300) or a hidden human partner (β = 0.76, p = .421).

Conclusion
Our study empirically demonstrates the effectiveness of collaborative storytelling in supporting mathematical vocabulary learning, while highlighting the potential of generative AI as a scalable partner alongside humans. We believe this work contributes to the expanding field of AI in education and offers insights into child-AI interactions.
Session Title: Out-of-School Time Tech: Supporting Children’s STEM Engagement through Digital Tools and Conversations
Submission Type: Paper Symposium