Sensorimotor Theory, Embodied Cognition, and Gesture-Based Teaching
Core theoretical foundations of embodied cognition and the sensorimotor system
This group of references examines the basic definition of embodied cognition, evolutionary perspectives on it, and the theory's boundaries, providing a core framework for understanding the mind-body relationship. It includes systematic treatments of predictive-brain models, the role of the sensorimotor system in cognition, and the different schools of embodiment theory.
- Embodied Cognition is Not What you Think it is(Andrew D. Wilson, Sabrina Golonka, 2013, Frontiers in Psychology)
- Six views of embodied cognition(Margaret Wilson, 2002, Psychonomic Bulletin & Review)
- Whatever next? Predictive brains, situated agents, and the future of cognitive science(Andy Clark, 2013, Behavioral and Brain Sciences)
- 从具身视角探讨动作的心理效应(戴亚芹, 2021, 社会科学前沿)
- An Evolutionary Upgrade of Cognitive Load Theory: Using the Human Motor System and Collaboration to Support the Learning of Complex Cognitive Tasks(Fred Paas, John Sweller, 2011, Educational Psychology Review)
Embodied representation and simulation mechanisms for language, metaphor, and spatial concepts
This group focuses on how humans construct abstract concepts (such as time, space, and metaphor) out of sensorimotor experience. The studies cover the role of gesture as simulated action, spatial frames of reference across languages, and how metaphoric gestures map pathways of thought.
- The choreography of time: metaphor, gesture and construal(Jean-Rémi Lapaire, 2016, HAL (Le Centre pour la Communication Scientifique Directe))
- 隐喻手势与语言及认知关系的研究(谭梦妮, 陈宏俊, 2022, 现代语言学)
- Semantic Typology and Spatial Conceptualization(Eric Pederson, Eve Danziger, David P. Wilkins, Stephen C. Levinson, Sotaro Kita, Gunter Senft, 1998, Language)
- Mapping the brain's metaphor circuitry: metaphorical thought in everyday reason(George Lakoff, 2014, Frontiers in Human Neuroscience)
- Time in the mind: Using space to think about time(Daniel Casasanto, Lera Boroditsky, 2007, Cognition)
- Time (also) flies from left to right(Julio Santiago, Juan Lupiáñez, Elvira Pérez Vallejos, María Jesús Funes, 2007, Psychonomic Bulletin & Review)
- Grounding language in action(Arthur M. Glenberg, Michael P. Kaschak, 2002, Psychonomic Bulletin & Review)
- Iconicity as a General Property of Language: Evidence from Spoken and Signed Languages(Pamela Perniss, Robin L. Thompson, Gabriella Vigliocco, 2010, Frontiers in Psychology)
- Visible embodiment: Gestures as simulated action(Autumn B. Hostetter, Martha W. Alibali, 2008, Psychonomic Bulletin & Review)
- Do We Really Gesture More When It Is More Difficult?(Uta Sassenberg, Elke van der Meer, 2010, Cognitive Science)
Child language acquisition, interactive behavior, and the development of cognitive abilities
These references concern development in infancy and childhood, examining how sensorimotor experience (such as pointing gestures and motor abilities) supports vocabulary acquisition, the execution of directives, and the formation of theory of mind. They stress the key role that multimodal resources in parent-child interaction play in early cognitive development.
- 多模态视角下婴幼儿指令行为的会话分析研究(王池柳, 张兵欣, 董博宇, 2024, 现代语言学)
- 儿童词汇理解与表达影响因素的研究进展(Unknown Authors, 2025, 现代语言学)
- Experience with pointing gestures facilitates infant vocabulary growth through enhancement of sensorimotor brain activity.(Virginia C. Salo, Ranjan Debnath, Meredith L. Rowe, Nathan A. Fox, 2022, Developmental Psychology)
- 身体经验对儿童概念习得的影响:具身认知的视角(陈银芳, 刘钊, Unknown Journal)
- Gesture in Language(Aliyah Morgenstern, Susan Goldin-Meadow, 2021, No journal)
- Why the Child's Theory of Mind Really Is a Theory(Alison Gopnik, Henry M. Wellman, 1992, Mind & Language)
Embodied instructional design, educational technology, and subject-area teaching practice
This group focuses on the level of teaching practice: how embodiment principles, multimodal representation, and emerging technologies (such as VR and mixed reality) can be used to optimize classroom instruction. It covers, in particular, visualization in the teaching of mathematical analysis, the gesture-tracing effect, and embodied design principles in multimedia learning.
- The Embodiment Principle in Multimedia Learning(Logan Fiorella, 2021, Cambridge University Press eBooks)
- Immersive VR and Education: Embodied Design Principles That Include Gesture and Hand Controls(Mina C. Johnson‐Glenberg, 2018, Frontiers in Robotics and AI)
- 论具身学习理论的课堂实践(邱柏杨, 夏友奎, 2019, 社会科学前沿)
- 基于多模态表征与认知负荷理论的数学分析可视化教学案例设计与实践探索(杜刚, 殷芳, 2025, 教育进展)
- 手势追踪效应及其理论解释(左跟梅, 林立甲, Unknown Journal)
- Teaching training in a mixed-reality integrated learning environment(Fengfeng Ke, Sungwoong Lee, Xinhao Xu, 2016, Computers in Human Behavior)
- Teaching with embodied learning technologies for mathematics: responsive teaching for embodied learning(Virginia J. Flood, Анна Шварц, Dor Abrahamson, 2020, ZDM)
- Embodied Cognition in Learning and Teaching(Martha W. Alibali, Mitchell J. Nathan, 2018, No journal)
- 指向深度学习的具身教学情境构建(弓妤, Unknown Journal)
Multimodal processing and perceptual reinforcement in second language acquisition
This group addresses the specific utility of gesture and multimodal resources in second language (L2) learning, such as translanguaging practice, deepening word memory through gestural action, and naturalistic teaching methods like the Voice Movement Icon (VMI) approach.
- Translanguaging as a Practical Theory of Language(Li Wei, 2017, Applied Linguistics)
- Depth of Encoding Through Observed Gestures in Foreign Language Word Learning(Manuela Macedonia, Claudia Repetto, Anja Ischebeck, Karsten Mueller, 2019, Frontiers in Psychology)
- Learning a Second Language Naturally the Voice Movement Icon Approach(Manuela Macedonia, 2013, Journal of Educational and Developmental Psychology)
Neural mechanisms and multimodal analysis models in social interaction
This group uses advanced analytic techniques (such as inter-brain EEG synchronization and multimodal learning analytics, MMLA) to probe the underlying physiological and computational mechanisms of interaction. Topics include brain-to-brain synchronization during imitation, the emotion process, and measurement models of gesture in interactive learning environments.
- Inter-Brain Synchronization during Social Interaction(Guillaume Dumas, Jacqueline Nadel, Robert Soussignan, Jacques Martinerie, Line Garnero, 2010, PLoS ONE)
- A Measurement Model of Gestures in an Embodied Learning Environment: Accounting for Temporal Dependencies(Alejandro Andrade, Joshua Danish, Adam V. Maltese, 2017, Journal of Learning Analytics)
- The Emotion Process: Event Appraisal and Component Differentiation(Klaus R. Scherer, Agnes Moors, 2018, Annual Review of Psychology)
Across six dimensions (theoretical foundations, cognitive representation, individual development, teaching applications, second language acquisition, and interaction mechanisms), this collection systematically surveys how sensorimotor theory and embodied cognition apply to teaching, and to gesture-based teaching in particular. The research shows that bodily action is not merely an outcome of cognition but an organic part of the cognitive process, and it argues that teaching practice should make full use of gesture, motor experience, and immersive technology to promote learners' deep understanding and knowledge transfer.
A total of 36 related references.
Exposure to communicative gestures, through their parents' use of gestures, is associated with infants' language development. However, the mechanisms supporting this link are not fully understood. In adults, sensorimotor brain activity occurs while processing communicative stimuli, including both spoken language and gestures. Using electroencephalogram (EEG) mu rhythm desynchronization (mu ERD), a marker of sensorimotor activity, we examined whether experimental manipulation of infants' exposure to gestures would affect language development, and specifically whether such an effect would be mediated by changes in sensorimotor brain activity. Mu ERD was measured in 10- to 12-month-old infants (N = 81; 42 male; 15% Hispanic, 62% White) recruited from counties surrounding a large mid-Atlantic university while they observed an experimenter gesturing toward or grasping an object. Half of the infants were randomized to receive increased gesture exposure through a parent-directed training. All 81 infants provided behavioral (infant and parent pointing and infant vocabulary) data prior to intervention and 72 provided behavioral data postintervention. Forty-two infants provided usable (post artifact removal) EEG data prior to intervention and 40 infants provided usable EEG data postintervention. Twenty-nine infants provided usable EEG data at both sessions. Increased parent gesture due to the intervention was associated with increased infant right-lateralized mu ERD at follow-up, but only while observing the experimenter gesturing, not grasping. Increased mu ERD, again only while observing the experimenter gesture, was associated with increased infant receptive vocabulary. This is the first evidence suggesting that increasing exposure to gestures may impact infants' language development through an effect on sensorimotor brain activity.
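The mu ERD measure described above has a standard computational core: power in the mu band is compared between a baseline window and an event window, and desynchronization appears as a relative power decrease. Below is a minimal sketch on synthetic data, assuming a simple FFT band-power estimate and a 6-9 Hz infant mu band; the function names, band edges, and sign convention are illustrative simplifications, not the study's actual pipeline.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    # Crude band power: FFT power summed over the frequency band.
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def mu_erd(baseline, event, fs, lo=6.0, hi=9.0):
    # ERD as percent power change relative to baseline; with this sign
    # convention, positive values mean desynchronization during the event.
    p_base = band_power(baseline, fs, lo, hi)
    p_event = band_power(event, fs, lo, hi)
    return 100.0 * (p_base - p_event) / p_base

# Synthetic demo: an 8 Hz "mu" oscillation whose amplitude halves while
# observing a gesture, so in-band power drops to about a quarter.
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)
event = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)
print(round(mu_erd(baseline, event, fs), 1))
```

Real pipelines epoch many trials, reject artifacts, and average ERD over central electrode sites; this sketch only shows the band-power comparison at the heart of the measure.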
Word learning is basic to foreign language acquisition; however, it is time consuming and not always successful. Empirical studies have shown that traditional (visual) word learning can be enhanced by gestures. The gesture benefit has been attributed to depth of encoding. Gestures can lead to depth of encoding because they trigger semantic processing and sensorimotor enrichment of the novel word. However, the neural underpinning of depth of encoding is still unclear. Here, we combined an fMRI and a behavioral study to investigate word encoding online. In the scanner, participants encoded 30 novel words of an artificial language created for experimental purposes and their translation into the subjects' native language. Participants encoded the words three times: visually, audiovisually, and by additionally observing semantically related gestures performed by an actress. Hemodynamic activity during word encoding revealed the recruitment of cortical areas involved in stimulus processing. In this study, depth of encoding can be spelt out in terms of sensorimotor brain networks that grow larger the more sensory modalities are linked to the novel word. Word retention outside the scanner documented a positive effect of gestures in a free recall test in the short term.
Interactive learning environments with body-centric technologies lie at the intersection of the design of embodied learning activities and multimodal learning analytics. Sensing technologies can generate large amounts of fine-grained data automatically captured from student movements. Researchers can use these fine-grained data to create a high-resolution picture of the activity that takes place during these student–computer interactions and explore whether the sequence of movements has an effect on learning. We present a use-case modelling of temporal data in an interactive learning environment with hand gestures, and discuss some validity threats if temporal dependencies are not accounted for. In particular, we assess how, if ignored, the temporal dependencies in the measurement of hand gestures might affect the goodness of fit of the statistical model and would affect the measurement of the similarity between elicited and enacted movement. Our findings show that accounting for temporality is crucial for finding a meaningful fit to the data. In using temporal analytics, we are able to create a high-resolution picture of how sensorimotor coordination correlates with learning gains in our learning system.
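The paper's core statistical point, that ignoring temporal dependencies in a gesture stream hurts model fit, can be illustrated by comparing the log-likelihood of an independence model against a first-order Markov model on a sequential stream of gesture codes. The following toy sketch invents its own state names, transition structure, and data purely for illustration; it has no connection to the study's dataset or its actual model.

```python
import numpy as np
from collections import Counter

def loglik_independent(seq, states):
    # Each gesture drawn i.i.d. from the marginal distribution.
    counts = Counter(seq)
    p = {s: counts[s] / len(seq) for s in states}
    return sum(np.log(p[s]) for s in seq)

def loglik_markov(seq, states):
    # First-order Markov: each gesture conditioned on the previous one.
    trans = {a: Counter() for a in states}
    for a, b in zip(seq, seq[1:]):
        trans[a][b] += 1
    ll = np.log(1.0 / len(states))  # uniform start-state probability
    for a, b in zip(seq, seq[1:]):
        ll += np.log(trans[a][b] / sum(trans[a].values()))
    return ll

# Demo: a strongly sequential gesture stream ("up" tends to be followed
# by "down", "down" by "hold", "hold" by "up"), with 10% random slips.
rng = np.random.default_rng(2)
states = ["up", "down", "hold"]
nxt = {"up": "down", "down": "hold", "hold": "up"}
seq, cur = [], "up"
for _ in range(300):
    seq.append(cur)
    cur = nxt[cur] if rng.random() < 0.9 else rng.choice(states)

print(round(loglik_independent(seq, states), 1),
      round(loglik_markov(seq, states), 1))
```

On strongly sequential data the Markov model fits far better; on shuffled data the two converge. A goodness-of-fit comparison of exactly this kind is what "accounting for temporal dependencies" formalizes.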
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
A central claim of theories of embodied cognition is that cognitive processes are rooted in the perceptions and actions of the body. In this chapter, we focus on three implications of this view for learning and instruction. First, action matters for cognitive performance and learning. Actions that are well aligned with target ideas can promote performance and learning, whereas actions that are not well aligned can interfere. Second, observing others’ actions can activate action-based knowledge; therefore, learners need not produce actions themselves in order for action to influence performance and learning. Third, imagining or mentally simulating actions can activate action-based knowledge, and simulated actions are sometimes manifested in gestures, which are a form of representational action. These principles highlight the importance of actions—both real and imagined—in cognitive performance, learning and instruction. We consider implications of this perspective for instructional design, assessment, and educational technology.
This article seeks to develop Translanguaging as a theory of language and discuss the theoretical motivations behind and the added values of the concept. I contextualize Translanguaging in the linguistic realities of the 21st century, especially the fluid and dynamic practices that transcend the boundaries between named languages, language varieties, and language and other semiotic systems. I highlight the contributions Translanguaging as a theoretical concept can make to the debates over the Language and Thought and the Modularity of Mind hypotheses. One particular aspect of multilingual language users' social interaction that I want to emphasize is its multimodal and multisensory nature. I elaborate on two related concepts: Translanguaging Space and Translanguaging Instinct, to underscore the necessity to bridge the artificial and ideological divides between the so-called sociocultural and the cognitive approaches to Translanguaging practices. In doing so, I respond to some of the criticisms and confusions about the notion of Translanguaging.
Representational co-speech gestures are generally assumed to be increasingly produced in more difficult compared with easier verbal tasks, as maintained by theories suggesting that gestures arise from processing difficulties during speech production. However, the gestures-as-simulated-action framework proposes that more representational gestures are produced with stronger rather than weaker mental representations that are activated in terms of mental simulation in the embodied cognition framework. We tested these two conflicting assumptions by examining verbal route descriptions that were accompanied by spontaneous directional gestures. Easy descriptions with strong activation were accompanied more often by gestures than difficult descriptions with weak activation. Furthermore, only gesture-speech matches-but not gesture-picture matches-were increasingly produced with difficult lateral directions compared with easy nonlateral directions. We argue that lateral gesture-speech matches underlie stronger activated mental representations in mental imagery. Thus, all results are in line with the gestures-as-simulated-action framework and provide evidence against the view that gestures result from processing difficulties.
Second language (L2) instruction greatly differs from natural input during native language (L1) acquisition. Whereas a child collects sensorimotor experience while learning novel words, L2 employs primarily reading, writing, and listening comprehension. We describe an alternative proposal that integrates the body into the learning process: the Voice Movement Icon (VMI) approach. A VMI consists of a word that is read and spoken in L2 and synchronously paired with an action or a gesture. A VMI is first performed by the language trainer and then imitated by the learners. Behavioral experiments demonstrate that words encoded through VMIs are easier to memorize than audio-visually encoded words and that they are better retained over time. The reasons why gestures promote language learning are manifold. First, we focus on language as an embodied phenomenon of cognition. Then we review evidence that gestures scaffold the acquisition of L1. Because VMIs reconnect language learning with the body, they can be considered a more natural tool for language instruction than audio-visual activities.
According to the embodiment principle, students learn better when they engage in task-relevant sensorimotor experiences during learning, such as gesturing or manipulating objects. Students may benefit from enacting movements themselves and/or observing them performed by others. Embodied instruction supports learning by offloading thinking to the physical world (i.e., reduced cognitive load) and by drawing analogies between abstract concepts and meaningful actions (i.e., increased generative processing). Prior research has identified a wide range of promising embodiment methods – using gestures to represent math concepts or to trace important elements of diagrams; manipulating concrete (or virtual) objects to understand stories, math concepts, molecular structures, or physics principles; and designing visualizations that present lessons from the learner's perspective.
Through constant exposure to adult input in interaction, children's language gradually develops into rich linguistic constructions containing multiple cross-modal elements subtly used together for communicative functions. Sensorimotor schemas provide the "grounding" of language in experience and lead to children's access to the symbolic function. With the emergence of vocal or signed productions, gestures do not disappear but remain functional and diversify in form and function as children become skilled adult multimodal conversationalists. This volume examines the role of gesture over the human lifespan in its complex interaction with speech and sign. Gesture is explored in the different stages before, during, and after language has fully developed, and a special focus is placed on the role of gesture in language learning and cognitive development. Specific chapters are devoted to the use of gesture in atypical populations. The volume's five parts cover pointing as an emblematic gesture, gesture before speech, gesture with speech during language learning, gesture after speech is mastered, and gesture with more than one language.
The most exciting hypothesis in cognitive science right now is the theory that cognition is embodied. Like all good ideas in cognitive science, however, embodiment immediately came to mean six different things. The most common definitions involve the straightforward claim that "states of the body modify states of the mind." However, the implications of embodiment are actually much more radical than this. If cognition can span the brain, body, and the environment, then the "states of mind" of disembodied cognitive science won't exist to be modified. Cognition will instead be an extended system assembled from a broad array of resources. Taking embodiment seriously therefore requires both new methods and theory. Here we outline four key steps that research programs should follow in order to fully engage with the implications of embodiment. The first step is to conduct a task analysis, which characterizes from a first person perspective the specific task that a perceiving-acting cognitive agent is faced with. The second step is to identify the task-relevant resources the agent has access to in order to solve the task. These resources can span brain, body, and environment. The third step is to identify how the agent can assemble these resources into a system capable of solving the problem at hand. The last step is to test the agent's performance to confirm that the agent is actually using the solution identified in step 3. We explore these steps in more detail with reference to two useful examples (the outfielder problem and the A-not-B error), and introduce how to apply this analysis to the thorny question of language use. Embodied cognition is more than we think it is, and we have the tools we need to realize its full potential.
Speakers unconsciously stage bodily displays of their experience and understanding of time. Their performance is based on a genuine "choreography of time" that determines the figures they trace and their occupation of conceptual space. The choreography may be observed, studied, and eventually enhanced to create new embodied approaches to cognitive grammar. But the shift from spontaneous to controlled conceptual action is not so simple, as the present study reveals. (Published in Gabriel, Rosangela & Pelosi, Ana Cristina (eds.), Linguagem e cognição: emergência e produção de sentidos. Florianópolis: Insular, 2016, pp. 217-34.)
During social interaction, both participants are continuously active, each modifying their own actions in response to the continuously changing actions of the partner. This continuous mutual adaptation results in interactional synchrony to which both members contribute. Freely exchanging the role of imitator and model is a well-framed example of interactional synchrony resulting from a mutual behavioral negotiation. How the participants' brain activity underlies this process is currently a question that hyperscanning recordings allow us to explore. In particular, it remains largely unknown to what extent oscillatory synchronization could emerge between two brains during social interaction. To explore this issue, 18 participants paired as 9 dyads were recorded with dual-video and dual-EEG setups while they were engaged in spontaneous imitation of hand movements. We measured interactional synchrony and the turn-taking between model and imitator. We discovered by the use of nonlinear techniques that states of interactional synchrony correlate with the emergence of an interbrain synchronizing network in the alpha-mu band between the right centroparietal regions. These regions have been suggested to play a pivotal role in social interaction. Here, they acted symmetrically as key functional hubs in the interindividual brainweb. Additionally, neural synchronization became asymmetrical in the higher frequency bands possibly reflecting a top-down modulation of the roles of model and imitator in the ongoing interaction.
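A common way to quantify the kind of oscillatory inter-brain synchronization reported above is the phase-locking value (PLV): each signal's instantaneous phase is extracted via the analytic (Hilbert) signal, and the consistency of the phase difference is averaged over time. The sketch below runs the generic technique on synthetic signals; the study itself used nonlinear synchronization measures on hyperscanning EEG, so this is only an illustration of the family of methods.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    # Phase-locking value: 1 means a perfectly constant phase difference,
    # values near 0 mean no consistent phase relationship.
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Demo: two 10 Hz "alpha-mu" oscillations with a fixed phase lag are
# strongly locked to each other; independent noise is not.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(1)
brain_a = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
brain_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.2 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)
print(round(plv(brain_a, brain_b), 2), round(plv(brain_a, noise), 2))
```

In hyperscanning work such measures are computed per frequency band and per electrode pair across the two participants, then tested against surrogate (shuffled) pairings to rule out chance synchrony.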
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity need also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to "hook up" to motor, perceptual, and affective experience.
How do children (and indeed adults) understand the mind? In this paper we contrast two accounts. One is the view that the child's early understanding of mind is an implicit theory analogous to scientific theories, and changes in that understanding may be understood as theory changes. The second is the view that the child need not really understand the mind, in the sense of having some set of beliefs about it. She bypasses conceptual understanding by operating a working model of the mind and reading its output. Fortunately, the child has such a model easily available, as all humans do, namely her own mind. The child's task is to learn how to apply this model to predict and explain others' mental states and actions. This is accomplished by running simulations on her working model, that is, observing the output of her own mind, given certain inputs, and then applying the results to others. The first position has a certain prominence; research on children's understanding of mind has come to be called 'children's theory of mind'. This position is linked to certain philosophers of mind such as Churchland (1984) and Stich (1983) who characterize ordinary understanding of mind, our mentalistic folk psychology, as a theory. It is also part of a recent tendency to describe cognitive development as analogous to theory change in science.
This article explores relevant applications of educational theory for the design of immersive virtual reality (VR). Two unique attributes associated with VR position the technology to positively affect education: (1) the sense of presence, and (2) the embodied affordances of gesture and manipulation in the 3rd dimension. These are referred to as the two profound affordances of VR. The primary focus of this article is on the embodiment afforded by gesture in 3D for learning. The new generation of hand controllers induces embodiment and agency via meaningful and congruent movements with the content to be learned. Several examples of gesture-rich lessons are presented. The final section includes an extensive set of design principles for immersive VR in education, and finishes with the Necessary Nine, which are hypothesized to optimize the pedagogy within a lesson.
This project collected linguistic data for spatial relations across a typologically and genetically varied set of languages. In the linguistic analysis, we focus on the ways in which propositions may be functionally equivalent across the linguistic communities while nonetheless representing semantically quite distinctive frames of reference. Running nonlinguistic experiments on subjects from these language communities, we find that a population's cognitive frame of reference correlates with the linguistic frame of reference within the same referential domain. INTRODUCTION. This study examines the relationship between language and cognition through a crosslinguistic and crosscultural study of spatial reference. Beginning with a crosslinguistic survey of spatial reference in language use, we find systematic variation that contradicts usual assumptions about what must be universal. However, the available number of general spatial systems for describing spatial arrays can be sorted into a few distinctive frames of reference. We focus on two frames of reference: the ABSOLUTE, based on fixed bearings such as north and south, and the RELATIVE, based on projections from the human body such as 'in front (of me)', 'to the left'. In assessing language use, it is not enough to rely on descriptions of languages that are based on conventional elicitation techniques, as these may not fully reflect actual socially anchored conventions. We have developed and used director/matcher language games which facilitate interactive discourse between native speakers about spatial relations in tabletop space. The standardized nature of these games allows more exact comparison across languages than is usually possible with conventionally collected discourse. Having observed the variation of language use across communities, we further ask whether there is corresponding conceptual variation, the question of the linguistic relativity of thought. For this, we developed nonlinguistic experiments to determine the speaker's cognitive representations independently of the linguistic data collection. The findings from these experiments clearly demonstrate that a community's use of linguistic coding reliably correlates with the way the individual conceptualizes and memorizes spatial distinctions for nonlinguistic purposes. Because we find linguistic relativity effects in a domain that seems basic to human experience and is directly linked to universally shared perceptual mechanisms, it seems likely that similar correlations between language and thought will be found in other domains as well. 1. A CROSSLINGUISTIC AND CROSSCULTURAL STUDY OF SPATIAL REFERENCE. The primary goal of our project is to test, refine, and reformulate hypotheses about language and human cognition, drawing on in-depth information from a broad sample of languages. (This article developed from a presentation entitled 'Cultural variation in spatial conceptualization'.)
Much emotion research has focused on the end result of the emotion process, categorical emotions, as reported by the protagonist or diagnosed by the researcher, with the aim of differentiating these discrete states. In contrast, this review concentrates on the emotion process itself by examining how (a) elicitation, or the appraisal of events, leads to (b) differentiation, in particular, action tendencies accompanied by physiological responses and manifested in facial, vocal, and gestural expressions, before (c) conscious representation or experience of these changes (feeling) and (d) categorizing and labeling these changes according to the semantic profiles of emotion words. The review focuses on empirical, particularly experimental, studies from emotion research and neighboring domains that contribute to a better understanding of the unfolding emotion process and the underlying mechanisms, including the interactions among emotion components.
An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.
The gesture-tracing effect refers to the finding that tracing with the hands during learning improves learning outcomes. The effect can be explained from three perspectives: cognitive load theory, embodied cognition, and the ICAP framework. According to cognitive load theory, tracing helps chunk mental representations and serves as an attention-guiding cue, optimizing the use of cognitive resources. From the embodied cognition perspective, tracing not only activates multiple working memory channels and supports the construction of mental representations but also offloads cognitive work onto the environment, promoting schema construction and better learning outcomes. According to the ICAP framework, tracing is in essence an active manipulation of the learning material that activates learners' relevant background knowledge and thereby supports cognitive processing. Research on the effect has so far concentrated on mathematics learning; the boundary conditions under which it occurs still need to be established, and the differing effects of watching someone else trace versus tracing oneself also await closer study. Future research on the gesture-tracing effect should focus on four issues: its boundary conditions, its underlying psychological mechanisms, its interaction with digital learning media, and its application in real classroom settings.
In recent years, research on learning and education has been increasingly shaped by cognitive theory. Embodied learning, grounded in embodied cognition theory, has gradually been implemented in the classroom. In contrast to the traditional model in which the teacher delivers content while students sit obediently in fixed seats and listen, the embodied classroom emphasizes the interaction among students, teachers, and the environment. How to bring students' bodies into play during learning, how to improve teachers' methods, and how to use current instructional equipment to support teaching will all affect how students experience, understand, and master what is taught in class.
Human cognition drives bodily action, and, in turn, the body's sensorimotor functions exert an important influence on cognitive and affective functioning. Drawing on recent embodied cognition research on the face, the body, and language, this paper discusses the formation and development of embodied cognition theory from the standpoint of internal psychological mechanisms and social applications, and further explores the influence of embodiment effects on the development of children and adolescents and their facilitating role in teaching practice.
Sensorimotor experience is an important basis for cognitive processing and directly affects children's acquisition of concepts. After reviewing the differences between traditional cognitive theory and embodied cognition theory, this article analyzes the role of the sensorimotor system in children's language acquisition. It then systematically surveys research linking sensorimotor experience to children's concept acquisition and language processing, concluding that environment, perception, and action are essential components of early concept recognition and that early sensorimotor experience matters greatly for later language development. Finally, it notes that future research needs to improve the reliability and validity of work linking the sensorimotor system to children's language processing, and to examine in greater depth how to promote concept acquisition in children who lack sensorimotor experience.
Current teaching contexts tend toward disembodiment. Deep learning, whose origin lies with the learning subject, whose process involves higher-order thinking, and whose goal is knowledge transfer, calls for teaching contexts that are interactive, critical, creative, and embodied. Supported by conceptions of the body's agency, individuality, and wholeness, driven by intrinsic motivation, centered on personal knowledge, and oriented toward learning by doing, teachers can build embodied teaching contexts on the basis of accumulated knowledge through knowledge transformation and dynamic processing. Returning the body to the teaching context requires authentic, interactive, and holistic situations that arouse active interest in learning, provoke deep thinking and exchange, and reinforce the integration and application of knowledge, providing a comprehensive, multi-level, wide-ranging practical path for realizing deep learning.
Directive behavior is a very common social behavior in parent-child interaction, and infants' directive abilities have an important influence on their language, cognition, and interaction skills. Based on roughly nine hours of home parent-child interaction video of two Mandarin-speaking infants (ages 6 months to 1 year 10 months), this study uses conversation analysis to transcribe and extract 169 conversational episodes of infant directives. On this basis it describes the multimodal interactional resources infants use to carry out directives during the prelinguistic stage. The study finds that these resources include gaze, gesture, and body movement. Although directive behavior at this age is not highly complex, with parental cooperation infants can already accomplish basic directives. When a directive receives no timely response, infants as young as 8 months can already expand the directive sequence to help adults interpret it. The study offers a useful reference for research on Mandarin-speaking infants' directive behavior and on parent-child interaction.
Building on multimodal representation theory, cognitive load theory, and constructivist learning theory, this article constructs a theoretical framework for visualization-based teaching of mathematical analysis and designs two typical teaching cases, one on the ε-δ definition of the limit of a function and one on the concept of the definite integral. By integrating dynamic visualization tools such as GeoGebra, it explores how to increase intuitiveness while preserving mathematical rigor, helping students make the transition from intuitive grasp to formal understanding. Visualization can markedly reduce the cognitive load imposed by abstract concepts and promote deep understanding of mathematical ideas. The study offers an innovative path for reforming the teaching of mathematical analysis and a useful reference for advancing mathematics education.
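For reference, the two concepts the teaching cases target can be stated compactly in standard notation; these are the textbook formulations, not reproductions of the article's own materials.

```latex
% epsilon-delta definition of the limit of a function
\lim_{x \to a} f(x) = L
\iff \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x:\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% definite integral as the limit of Riemann sums,
% with mesh \lambda = \max_i \Delta x_i
\int_a^b f(x)\,dx = \lim_{\lambda \to 0} \sum_{i=1}^{n} f(\xi_i)\,\Delta x_i
```

Dynamic tools such as GeoGebra make the quantifier structure tangible, for instance by letting students vary ε on screen and watch a suitable δ respond.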
By the end of the twentieth century, metaphor was no longer seen as mere linguistic ornament: our conceptual system is itself metaphorical, and researchers have sought to explain and demonstrate how metaphor shapes thought, structures perception, and influences behavior, a line of research that continues to flourish. The study of metaphor no longer stops at language; scholars have gradually found that gesture, too, is metaphorical, and that gesture is closely tied to language and cognition. Yet although gesture is ubiquitous in everyday life, it received serious attention only at the end of the twentieth century, and research in China is still at an early stage. This paper reviews the relation between gesture and language and the interaction between gesture and cognition, in a survey centered on metaphoric gestures.
The development of children's vocabulary comprehension and production is a key part of language acquisition, with far-reaching effects on children's cognition, social life, and learning. In recent years, advances in developmental psychology, neuroscience, and linguistics have gradually revealed the many factors that shape children's vocabulary development. This review systematically surveys those factors, focusing on biological and developmental factors, family language input, and sociocultural factors. Biological and developmental factors include motor development, birth weight, and early word-processing efficiency; family language input mainly involves shared reading, interaction quality, phonetic characteristics, and environmental noise; sociocultural factors such as language type, bilingual environment, maternal education, and ethnocultural identity also significantly shape the course of vocabulary development. The review provides a theoretical basis for research on children's language development and practical guidance for language education, helping to build a fuller picture, across cultures and social contexts, of the pathways that promote children's language ability.