Times are displayed in (UTC-05:00) Central Time (US & Canada)
About this paper symposium
Panel information: Panel 11. Language, Communication
Paper #1

The role of parental scaffolding in infants’ visual experiences when hearing nouns versus verbs

| Author information | Role |
|---|---|
| Lichao Sun, University of Houston, United States | Presenting author |
| Hanako Yoshida, University of Houston, United States | Non-presenting author |

Abstract
Young infants learn nouns earlier and faster than verbs (e.g., Gentner, 1982), and nouns dominate their early vocabularies cross-linguistically (e.g., Fenson et al., 1994; Goldin-Meadow et al., 1976; Gentner, 1982; Jackson-Maldonado et al., 1993; Imai et al., 2008; Waxman et al., 2013). Eye-tracking studies indicate that parents help segregate the target referent from complex visual scenes, providing infants with a clean scene in which to map the heard label onto the attended referent, which suggests a role for social scaffolding in noun-referent mapping (Suarez-Rivera et al., 2019; Yu & Smith, 2016; Sun & Yoshida, 2022). But how do infants learn verbs and distinguish them from nouns? The present study examines how infants’ attention changes as a function of word type and how parental scaffolding varies in supporting word-learning experiences. We observed 5-minute-20-second object-play sessions in a sample of 60 parent-infant dyads (infant mean age = 11.1 months, SD = 4.3; 37 males). During object play, the parent was asked to play freely with the infant with a set of toys and to demonstrate four nouns and four verbs (cued by trial, each lasting 40 seconds). Both the parent and the infant wore head-mounted eye trackers to record their egocentric scenes and corresponding gaze behaviors. Trained coders annotated target behaviors: (1) each individual’s gaze allocation across four regions of interest (objects, their own hands, and the other agent’s hands and face); (2) parental phrases, i.e., verbal instances containing object names versus verb actions; and (3) each individual’s manual actions on objects. Generalized mixed-effects models were applied to accommodate variation within dyads in displaying the objects, with object manipulation nested within dyads.
The major findings highlight different referential needs for learning; that is, infants exhibited different attention patterns when hearing nouns versus verbs: infants spent a greater proportion of frames attending to the objects when hearing nouns (β = 17.75, p < .001), whereas they looked for more social cues and attended significantly more to parents’ faces and hands when hearing verbs (β = 16.32, p < .001). Moreover, object manipulation was coordinated with word learning and differed as a function of word type: (1) the parent played a leading role in presenting the object during naming, and infants spent more time fixating on objects handled by parents accompanied by relevant labels, regardless of word type; (2) while parents predominantly named the objects they themselves handled, they tended to name verb actions when the attended object was handled by infant and parent together (β = 2.37, p < .001; see Figure 1). The present study probes infants’ visual experiences when parents name nouns and verbs, and it documents that infant attention patterns differ as a function of word type. Specifically, verb learning tends to occur when motor coordination is established. The impact of learning contexts will be further discussed in terms of the significance of domain-general mechanisms for both noun and verb learning, as well as potential individual differences.
Paper #2

Children learning verbs need to ignore distractions: Can they?

| Author information | Role |
|---|---|
| Jane Childers, Trinity University, United States | Presenting author |
| Emily Haynes, Trinity University, United States | Non-presenting author |

Abstract
Learning verbs is critical to learning one’s native language. In a recent study in both South America and the US, children hearing verbs in the home saw possible referents that fit those verbs’ meanings approximately 50% of the time in both cultures (blinded). Given this, children may need to ignore events and actions that co-occur with verbs up to 50% of the time! Two experimental studies ask whether and when children can ignore distracting events while learning new verbs. In Study 1, an eye tracker was used to examine whether children’s looking patterns varied when they were shown events linked to a new verb vs. distractor events. 2½- (n = 24), 3½- (n = 31), and 4½-year-olds (n = 21) saw dynamic relevant and irrelevant scenes and heard new verbs while a Tobii x30 eye tracker recorded their eye movements. One-sample t-tests show that children in each age group were able to extend new verbs to new scenes at test (ps = .002). Additionally, across age groups, when viewing relevant events, children increased their looking to the hands (actions) as relevant trials progressed and decreased their looking to the agent (less informative for verbs), shown in a Trial number × AOI interaction, F(1, 69) = 30.14, p < .001. In contrast, when viewing distracting events, children decreased their looking to hands and maintained their attention to the agent, Trial number × AOI, F(1, 69) = 14.10, p < .001 (Fig. 1). Thus, children’s visual attention to agents and actions differed depending on whether the events were linked to a new verb. This is the first study to show this pattern of visual attention during verb learning, and these results reveal attentional strategies children may use when learning verbs. Study 2 simulates everyday situations in which children see different events at the same time while hearing verbs (e.g., a soccer game) and have to deduce which event is relevant for learning a verb’s meaning (e.g., “score”).
3½- (n = 19) and 4½-year-olds (n = 17) saw 3 events in a scene and heard a new verb (Fig. 2). Across 3 such learning trials, children were able to focus on the repeated action and ignore the distractor actions, extending the verb at test (ps < .05). Three follow-up studies examined whether seeing 4 events or 5 events before test affected learning; results show a significant decrease in verb extensions in the 5-event condition vs. the 3-event condition (p < .05). Additional studies with 2 events will ask whether children can learn from fewer than 3 learning trials if the agent stays the same; data collection is in progress. These studies are important for understanding the cognitive processes that may underlie verb learning. Overall, results show that children benefit from seeing events they can compare and that they can ignore distracting events, as it appears they need to do. Links to key theories of verb learning, including structural alignment (e.g., Imai & Childers, 2020) and statistical learning (e.g., Smith & Yu, 2008; Scott & Fisher, 2012), will be offered.
Paper #3

The Role of Analogical Gestures in Five-Year-Olds’ Analogical Reasoning

| Author information | Role |
|---|---|
| Alice Xu, University of California, Los Angeles, United States | Presenting author |
| Madison Bishop, University of California, Los Angeles, United States | Non-presenting author |
| Catherine M. Sandhofer, University of California, Los Angeles, United States | Non-presenting author |

Abstract
Analogical reasoning is a fundamental component of cognitive development, enabling individuals to recognize and map structural similarities across different concepts and domains. This ability underpins both learning and problem-solving. However, young children frequently encounter challenges with analogical reasoning tasks because of the demands these tasks place on their developing cognitive skills. Specifically, inhibitory control and working memory, both key to abstract reasoning, are still maturing during early childhood. As a result, children may struggle to focus on the relational aspects of analogies while suppressing distractions from perceptual features. Previous research has demonstrated that co-speech gestures, the spontaneous hand movements that accompany speech, can serve as powerful cognitive tools. These gestures not only enhance communication but also facilitate cognitive processing by directing attention and alleviating working memory load. A particularly promising yet understudied form of co-speech gesture is the analogical gesture (Cooperrider & Goldin-Meadow, 2017). Analogical gestures, as relational metaphoric expressions, represent abstract relationships between two or more elements, extending beyond simple attribute descriptions. This study is the first to experimentally assess the role of analogical gestures in supporting young children’s analogical reasoning. We hypothesized that analogical gestures uniquely aid children in solving analogy problems that involve abstract semantic relations by focusing their attention on the relevant semantic relationship in the source analogy and reducing the cognitive load typically required to process such tasks. Through this scaffolding, analogical gestures may promote deeper relational thinking during early childhood.
The study employed a 2 × 2 mixed design to examine the effect of analogical gestures (between-subjects factor) on children’s performance in a Relational Match-to-Sample (RMTS) task, focusing on two abstract semantic relations (within-subjects factor): antonymy (i.e., opposites) and meronymy (i.e., part-whole). Participants were five-year-old children recruited from local preschools. Children in the analogical-gesture condition received verbal descriptions accompanied by analogical gestures illustrating the relationships in the source analogy, while children in the control condition received verbal descriptions alone. Each child completed three practice trials for each relation, during which they solved pictorial analogy problems with feedback. This was followed by eight test trials per relation type, in which no feedback was given. We evaluated children’s performance based on their accuracy across the test trials. Data collection is still ongoing (current n = 10). Preliminary results indicate that children in the analogical-gesture group performed better than those in the control group on both antonymy trials (M = 6.00 vs. M = 5.67) and meronymy trials (M = 5.57 vs. M = 4.67). These early findings suggest that analogical gestures may help young children better understand and reason through abstract relational concepts, and they contribute to a larger literature indicating positive effects of multimodal influences on learning.
Paper #4

You've got THIS! Investigating Children’s Acquisition of Demonstratives in Naturalistic Interaction

| Author information | Role |
|---|---|
| Yayun Zhang, Max Planck Institute for Psycholinguistics, Netherlands | Presenting author |
| Tianai Dong, Max Planck Institute for Psycholinguistics, Netherlands | Non-presenting author |
| Carolina Rodríguez Chavarría, University of Costa Rica, Costa Rica | Non-presenting author |
| Caroline Rowland, Max Planck Institute for Psycholinguistics, Netherlands | Non-presenting author |
| Chen Yu, The University of Texas at Austin, United States | Non-presenting author |
| Paula Rubio-Fernandez, Max Planck Institute for Psycholinguistics, Netherlands | Non-presenting author |

Abstract
All languages have demonstratives: grammatical words such as THIS and THAT or HERE and THERE in English. They are among the first 50 words children produce across languages (Diessel & Monakhov, 2023), as they are a universal tool for establishing joint attention on a referent. Understanding the acquisition of demonstratives is important because they are highly frequent in parent-child interactions and form a foundational grammatical class present in all human languages. However, acquiring grammatical words poses special challenges because their meanings are not grounded in the physical world (e.g., THIS and THAT do not have constant meanings that map onto physical entities the way concrete nouns do). Adult demonstrative use relies on spatial and social cognition, since speakers need to monitor both the location of the intended referent and the attentional focus of the listener (Jara-Ettinger & Rubio-Fernandez, 2024). This creates an interesting puzzle for demonstrative acquisition, since young children need to learn the spatial and attentional contingencies of these grammatical words. In the current study, we quantified the kind of demonstrative input children receive in a naturalistic toy-play interaction. Specifically, we aim to better understand (i) how demonstrative use changes as a function of an object’s position relative to the parent’s and child’s locations and (ii) how parents use demonstratives to direct children’s attention toward an object in real-time interaction. We used an existing naturalistic parent-child toy-play dataset (Yu, Zhang, Slone & Smith, 2021), in which English-speaking parents and children (19.3 m.o.) were provided with 10-24 toys to play with for 15 minutes (Fig. 1, left). Parent and child each wore a head-mounted eye tracker, providing rich moment-by-moment gaze data (Fig. 1, right). We first extracted all the moments when parents used THIS/THAT to refer to a toy identifiable in the child’s view.
We coded the referent’s relative location (i.e., close to parent or close to child) and derived real-time attentional measures of whether the child looked at the intended referent before/after the demonstrative. We found that (i) parents frequently use demonstratives during play. Parents use THIS more often for objects close to them and THAT for objects close to the child (THIS: close-to-child: 18%, close-to-parent: 69%, other: 13%; THAT: close-to-child: 51%, close-to-parent: 28%, other: 21%), suggesting that even when parent and child are in relatively close proximity, parents’ demonstrative use changes as a function of referent position as well as speaker and listener locations, providing critical input for children to distinguish near and far space. (ii) Demonstratives are useful tools for guiding and maintaining children’s attention to the intended referent (Fig. 2). In the case of THIS, the child’s proportion of referent looks peaked after THIS onset, whereas in the case of THAT, the child’s proportion of referent looks had already peaked before THAT onset. One possible explanation is that parents use THIS to refer to something close to themselves, which requires the child to switch attention, whereas THAT is most often used when parents refer to something close to the child, likely something the child is already attending to or manipulating.
Developing a comprehensive understanding of word learning processes beyond nouns
Submission Type: Paper Symposium