Embodied Learning
Theoretical Frameworks, Neural Mechanisms, and Symbol Grounding in Embodied Cognition
This group of papers examines the foundations of embodied cognition, including 4E cognition (embodied, embedded, enactive, extended), enactivist philosophy, image schemas, and computational modeling under neurobiological constraints. The central focus is the "symbol grounding problem", that is, how abstract symbols acquire meaning through sensorimotor experience, along with the coordinated mechanisms of the brain's sensorimotor networks during learning.
- Modes of Meaning(James Smith-Harvey, Claudio Aguayo, 2024, Pacific Journal of Technology Enhanced Learning)
- Mathematical Modeling of Embodied Cognition and Personalized Learning in Interactive Virtual Reality: A Nonlinear Systems Approach(Huaqun Liu, 2024, Applied Mathematics and Nonlinear Sciences)
- Image Schemas and Conceptual Dependency Primitives: A Comparison(J. Macbeth, Dagmar Gromann, Maria M. Hedblom, 2017)
- A Neurobiologically Constrained Cortex Model of Semantic Grounding With Spiking Neurons and Brain-Like Connectivity(Rosario Tomasello, M. Garagnani, T. Wennekers, F. Pulvermüller, 2018, Frontiers in Computational Neuroscience)
- Developmental Autonomous Behavior: An Ethological Perspective to Understanding Machines(Satoru Isaka, 2023, IEEE Access)
- Making the Environment an Informative Place: A Conceptual Analysis of Epistemic Policies and Sensorimotor Coordination(G. Pezzulo, S. Nolfi, 2019, Entropy)
- Minds in movement: embodied cognition in the age of artificial intelligence(Louise Barrett, Dietrich Stout, 2024, Philosophical Transactions of the Royal Society B: Biological Sciences)
- Enactivism and Digital Learning Platforms(Magda Pischetola, L. Dirckinck-Holmfeld, 2024, Networked Learning Conference)
- Three Perspectives on Embodied Learning in Virtual Reality: Opportunities for Interaction Design(Julia Chatain, Manu Kapur, R. Sumner, 2023, Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems)
- The Realistic Challenges and Road Choices of Primary and Secondary School Teachers in the Era of Artificial Intelligence: From the Perspective of Embodied Cognition Theory(Long Fei, 2025, Applied & Educational Psychology)
- Understanding Attention: In Minds and Machines(S. P. Sawant, Shruti Singh, 2020, ArXiv)
- We can teach more than we can tell: combining Deliberate Practice, Embodied Cognition, and Multimodal Learning(Bibeg Limbu, Gitte van Helden, J. Schneider, M. Specht, 2022, No journal)
- Embodied Space in Natural and Virtual Environments: Implications for Cognitive Neuroscience Research(F. Morganti, 2015, No journal)
- A Network Perspective on Sensorimotor Learning.(H. Sohn, Nicolas Meirhaeghe, R. Rajalingham, Mehrdad Jazayeri, 2020, Trends in neurosciences)
- Symbol Grounding Without Direct Experience: Do Words Inherit Sensorimotor Activation From Purely Linguistic Context?(F. Günther, Carolin Dudschig, Barbara Kaup, 2018, Cognitive science)
- Grounding for Artificial Intelligence(Bing Liu, 2023, ArXiv)
- An integrated account for technological cognition.(Giovanni Federico, François Osiurak, M. Brandimonte, Paola Marangolo, C. Ilardi, 2025, Cognitive neuroscience)
- Embodied Cognition, Body Coherence, and Immersion in Non-Euclidean Virtual Reality(Maya Gibson, 2024, Proceedings of the 16th Conference on Creativity & Cognition)
- Mathematical Cognition as Embodied Simulation(Firat Soylu, 2011, Cognitive Science)
- [radical] signals from life: from muscle sensing to embodied machine listening/learning within a large-scale performance piece(D. Nort, 2015, Proceedings of the 2nd International Workshop on Movement and Computing)
- Metaphors Are Projected Constraints on Action: An Ecological Dynamics View on Learning Across the Disciplines(Dor Abrahamson, R. Sánchez-García, Clifford Smyth, 2016)
- Crowdsourcing Image Schemas(Dagmar Gromann, J. Macbeth, 2018)
Design and Empirical Study of Embodied Learning Environments Driven by Immersive (XR) Technologies
These studies focus on building immersive learning environments with VR, AR, MR, and metaverse technologies. Key questions include: how multisensory interaction and first-person perspectives reduce cognitive load; how immersion and presence affect learning outcomes in subjects such as science, geography, and mathematics; and engineering design principles for addressing technical friction, cybersickness, and other factors that degrade the embodied experience.
- Ease over effort: achieving consistent learning outcomes through a more relaxed approach in immersive virtual reality for hands-on education(Xinyan Wang, Yen Hsu, Rui Xu, Zengqiang Zhang, Dawei Fan, 2025, Interactive Learning Environments)
- The Effect of Virtual Reality Embodied Degree on Students’ Engagement and Cognitive Load(Di Ai, J. Xiao, Li Xing, T. Li, G. Yang, G. Fang, 2025, 2025 7th International Conference on Computer Science and Technologies in Education (CSTE))
- The design of immersive English learning environment using augmented reality(Kuo-Chen Li, C. Tsai, Cheng-Ting Chen, Shein-Yung Cheng, J. Heh, 2015, 2015 8th International Conference on Ubi-Media Computing (UMEDIA))
- Design of Immersive Virtual Reality (VR) Classroom Based on Embodied Cognition Theory and Analysis of Cognitive Transfer Effects(Yagang Mao, 2025, Academic Conferences Series)
- An Educational Virtual Reality Game from an Embodied Cognitive Perspective -Take the Game Poetry as an Example(Zihan Kong, 2023, Lecture Notes in Education Psychology and Public Media)
- Immersive Fire Training with Live Expert Support and Embodied Interaction with Real Extintor(Lucia Tallero, Ester González-Sosa, Baldomero Rodriguez-Arbol, Á. Villegas, 2025, 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW))
- Learning With Media(Sarah Lewis, Robb Lindgren, Shuai Wang, R. Pea, 2019, J. Media Psychol. Theor. Methods Appl.)
- The Construction of Metaverse Precision Teaching Fields from the Perspective of Embodied Cognition(Hongbing He, 2025, Contemporary Education Frontiers)
- Student informed development of virtual reality simulations for teaching and learning in the molecular sciences(F. Reen, Owen Jump, Grace McEvoy, Brian P Mcsharry, J. Morgan, David Murphy, Niall O’Leary, Billy O’Mahony, Martina Scallan, Christine Walsh, Briony Supple, 2024, Journal of Biological Education)
- Design strategies for VR science and education games from an embodied cognition perspective: a literature-based meta-analysis(Xiuyu Lin, Runbo Li, Zhirong Chen, Jiayi Xiong, 2024, Frontiers in Psychology)
- Fostering Penetrative Thinking in Geosciences Through Immersive Experiences: A Case Study in Visualizing Earthquake Locations in 3D(M. Bagher, P. Sajjadi, J. Carr, P. L. Femina, A. Klippel, 2020, 2020 6th International Conference of the Immersive Learning Research Network (iLRN))
- Comparing cognitive load in learning spatial ability: immersive learning environment vs. digital learning media(Yi Jian, Juliana Aida Abu Bakar, 2024, Discover Sustainability)
- Evidence for embodied cognition in immersive virtual environments using a second language learning environment(J. Ratcliffe, L. Tokarchuk, 2020, 2020 IEEE Conference on Games (CoG))
- Augmented reality and embodied learning: effects of embodiment degrees on students’ learning achievement, cognitive load, and technology acceptance(Xiaochen Suo, Baoyuan Yin, Xiaoyan Feng, 2026, Frontiers in Psychology)
- Remediating learning from non-immersive to immersive media: Using EEG to investigate the effects of environmental embeddedness on reading in Virtual Reality(Sarune Baceviciute, Thomas Terkildsen, G. Makransky, 2021, Comput. Educ.)
- Three Virtual Reality Environments for the Assessment of Executive Functioning Using Performance Scores and Kinematics: An Embodied and Ecological Approach to Cognition(Nicolas Ribeiro, Toinon Vigier, Jieun Han, G. Kwon, Hojin Choi, Samuel Bulteau, Yannick Prié, 2024, Cyberpsychology, Behavior, and Social Networking)
- Virtual Reality and Embodied Learning: Unraveling the Relationship via Dynamic Learner Behavior(Antony Prakash, Ramkumar Rajendran, 2023, International Conference on Computers in Education)
- Embodied Learning in Virtual Reality: Comparing Direct and Indirect Interaction Effects on Educational Outcomes(M. Dastmalchi, Amir Goli, 2024, 2024 IEEE Frontiers in Education Conference (FIE))
- Immersive Virtual Reality Environments for Embodied Learning of Engineering Students(R. Pérez, Özgür Keles, 2025, ArXiv)
- Developing Design Principles for an Augmented Paper-Based Mathematics Learning Environment from an Embodied Cognition Perspective: Focusing on the Exploration of the Volume of a Sphere(Hyowon Wang, Yunjoo Yoo, 2025, The Korean Society of Educational Studies in Mathematics - School Mathematics)
- Research and design of virtual simulation experiment learning interaction for embodied cognition(Chen Wang, Fuan Wen, 2020, 2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA))
- Data-Driven Human Factors Integration in Architectural Education: A VR-Based Experimental Course Framework for Mitigating Cybersickness via Embodied Cognition Enhancement(Cuina Zhang, Jiaxin Cao, Ming Wu, Jiayun Wu, 2025, Journal of Natural Science Education)
- Moments of friction in virtual reality: How feeling histories impact experience(Ty Hollett, Siyuan Luo, N. Turcotte, Crystal M. Ramsay, Chris Stubbs, Zachary E. Zidik, 2019, E-Learning and Digital Media)
- Learning in embodied activity framework: a sociocultural framework for embodied cognition(Joshua A. Danish, Noel Enyedy, Asmalina Saleh, Megan Humburg, 2020, International Journal of Computer-Supported Collaborative Learning)
- A Study on the Development and Application of Embodied Cognition-Based Elementary English Learning Activities: Effective Utilization of ChatGPT as an Intelligent Personal Assistant(Soomin Lee, Deokgi Min, 2024, The Korea English Language Testing Association)
- An Augmented Reality Job Performance Aid for Kinaesthetic Learning in Manufacturing Work Places(Fridolin Wild, Peter Scott, J. Karjalainen, K. Helin, Salla Lind-Kohvakka, Ambjörn Naeve, 2014, No journal)
- Research on Strategies for Enhancing Immersive Synesthetic Experience in 4D New Media Interactive Animation from the Perspective of Embodied Cognition(Xuejiao He, 2023, Proceedings of the 2023 International Conference on Electronics, Computers and Communication Technology)
- The Impact of AR Embodied Gaming and 2D Gaming on Children’s Social Cognition and Learning Experience: An fNIRS Controlled Trial(Danyang Hu, Song Hao, Zhenya Wang, Yuhan Zhang, Jianing Wang, 2025, International Journal of Human–Computer Interaction)
- The Differential Effects of Augmented Reality and Product Presentation Strategies on Brand Recall: An Embodied Cognition Perspective(Liying Zhou, Limin Niu, Taiyang Zhao, 2024, J. Theor. Appl. Electron. Commer. Res.)
Embodied AI, Large Language Models, and Human-Machine Collaborative Learning
This group of papers explores the convergence of embodied artificial intelligence (Embodied AI) and human cognition. Topics include: evaluating the semantic-understanding limits of large language models (LLMs) that lack a physical body; enhancing the embodiment of AI agents through multimodal feedback (vision, touch); and physical human-robot interaction (pHRI) and reinforcement learning frameworks for robot-assisted instruction (e.g., dance and laboratory teaching).
- A sensorimotor reinforcement learning framework for physical Human-Robot Interaction(Ali Ghadirzadeh, Judith Bütepage, A. Maki, D. Kragic, Mårten Björkman, 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS))
- Predictive Learning of Error Recovery with a Sensorized Passivity‐Based Soft Anthropomorphic Hand(Kieran Gilday, T. G. Thuruthel, F. Iida, 2023, Advanced Intelligent Systems)
- Social Cognition in the Age of Human-Robot Interaction.(Anna Henschel, R. Hortensius, Emily S. Cross, 2020, Trends in neurosciences)
- Mutual human-robot understanding for a robot-enhanced society: the crucial development of shared embodied cognition(G. Sandini, A. Sciutti, Pietro Morasso, 2025, Frontiers in Artificial Intelligence)
- Dance Teaching by a Robot: Combining Cognitive and Physical Human–Robot Interaction for Supporting the Skill Learning Process(Diego Felipe Paez Granados, Breno A. Yamamoto, H. Kamide, J. Kinugawa, K. Kosuge, 2017, IEEE Robotics and Automation Letters)
- Eye-Tracking in Physical Human–Robot Interaction: Mental Workload and Performance Prediction(Satyajit Upasani, Divya Srinivasan, Qi Zhu, Jing Du, Alexander Leonessa, 2023, Human Factors)
- Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies(T. Ziemke, 2023, Artificial Life)
- Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition(François Olivier, Zied Bouraoui, 2025, ArXiv)
- ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction(Qineng Wang, Wenlong Huang, Yu Zhou, Hang Yin, Tianwei Bao, J. Lyu, Weiyu Liu, Ruohan Zhang, Jiajun Wu, Fei-Fei Li, Manling Li, 2025, ArXiv)
- Language writ large: LLMs, ChatGPT, meaning, and understanding(S. Harnad, 2025, Frontiers in Artificial Intelligence)
- Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts(Qihui Xu, Yingying Peng, Samuel A. Nastase, Martin Chodorow, Minghua Wu, Ping Li, 2025, Nature Human Behaviour)
- TALON: Improving Large Language Model Cognition with Tactility-Vision Fusion(Xinyi Jiang, Guoming Wang, Huanhuan Li, Qinghua Xia, Rongxing Lu, Siliang Tang, 2024, 2024 IEEE 19th Conference on Industrial Electronics and Applications (ICIEA))
- Integrating feedback mechanisms and ChatGPT for VR-based experiential learning: impacts on reflective thinking and AIoT physical hands-on tasks(Wei-Sheng Wang, Chia-Ju Lin, Hsin‐Yu Lee, Yuen-Min Huang, Ting‐Ting Wu, 2024, Interactive Learning Environments)
- Embodied Agents and the Predictive Elaboration Model of Persuasion--The Ability to Tailor Embodied Agents to Users' Need for Cognition(Matthew D. Pickard, Jeffrey L. Jenkins, J. Nunamaker, 2012, 2012 45th Hawaii International Conference on System Sciences)
- Bridging embodied cognition and AI: Agentive Cognitive Construction Grammar as a backing theory for neuro-symbolic AI(S. Torres-Martínez, 2025, AI & SOCIETY)
- Aligning AIED Systems to Embodied Cognition and Learning Theories(Ivon Arroyo, Injila Rasul, D. Crabtree, F. Castro, Allison Poh, S. Gattupalli, William Lee, Hannah Smith, Matthew Micciolo, 2024, No journal)
- ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark(Ronghao Dang, Yuqian Yuan, Wenqiao Zhang, Yifei Xin, Boqiang Zhang, Long Li, Liuyi Wang, Qinyang Zeng, Xin Li, Li Bing, 2025, 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR))
- InteractML: Making machine learning accessible for creative practitioners working with movement interaction in immersive media(Clarice Hilton, N. Plant, Carlos González Díaz, Phoenix Perry, Ruth Gibson, B. Martelli, Michael Zbyszynski, R. Fiebrink, M. Gillies, 2021, Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology)
- Movement interaction design for immersive media using interactive machine learning(N. Plant, Ruth Gibson, Carlos González Díaz, B. Martelli, Michael Zbyszynski, R. Fiebrink, M. Gillies, Clarice Hilton, Phoenix Perry, 2020, Proceedings of the 7th International Conference on Movement and Computing)
- Embodied Cognition: Redefining Human Experience Through AI and Extended Reality(Murali Krishna Pasupuleti, 2025, International Journal of Academic and Industrial Research Innovations(IJAIRI))
- An Art-Science Perspective on Artificial Intelligence Creativity: From Problem Finding to Materiality and Embodied Cognition(R. Root-Bernstein, 2025, Journal of Creativity)
Gesture, Motion Capture, and Cognitive Representations of the Sensorimotor System
This group studies how bodily movement, especially gesture, serves as a physical extension of thought. Topics include: the role of gesture in spatial thinking, language acquisition (e.g., second languages, sign language), and the representation of abstract concepts (e.g., biochemical models, melodic contour); and the use of motion-capture techniques to assess the motor system's contribution to creative thinking and knowledge internalization.
- Spatial Cognition Through Gestural Interfaces: Embodied Play and Learning with Minecraft(J. Issa, Vishesh Kumar, Marcelo Worsley, 2024, No journal)
- Composing Understandings: music, motion, gesture and embodied cognition(Guilherme Bertissolo, 2019, No journal)
- Exploring the Correspondence of Melodic Contour With Gesture in Raga Alap Singing(Shreyas Nadkarni, S. Roychowdhury, P. Rao, M. Clayton, 2023, No journal)
- Exploring Undergraduate Biochemistry Students’ Gesture Production Through an Embodied Framework(Lora Randa, Song Wang, Zoe Poolos, Vanna Figueroa, A. Bridgeman, Thomas J. Bussey, Rou-Jia Sung, 2024, CBE Life Sciences Education)
- Communicative intent modulates production and comprehension of actions and gestures: A Kinect study.(J. Trujillo, I. Simanova, H. Bekkering, A. Özyürek, 2018, Cognition)
- Exploring and enhancing spatial thinking skills: Learning differences of university students within a web-based GIS mapping environment(X. Xiang, Yan Liu, 2018, Br. J. Educ. Technol.)
- Graph hopping: learning through physical interaction quantification(Timothy Charoenying, 2013, Proceedings of the 12th International Conference on Interaction Design and Children)
- Increased Neural Strength and Reliability to Audiovisual Stimuli at the Boundary of Peripersonal Space(Jean-Paul Noel, A. Serino, M. Wallace, 2019, Journal of Cognitive Neuroscience)
- Actions and Interactions at Collaborative Engineering Design Hackathon: Looking Through the Lens of Embodied Cognition(Soumya Narayanan, Navneet Kaur, Rwitajit Majumdar, 2024, International Conference on Computers in Education)
- Perceptual Biases in Multiview Navigation: Insights from Embodied Spatial Cognition and Mental Rotation(Alston Lantian Xu, Álvaro Cassinelli, 2025, Proceedings of the 2025 Conference on Creativity and Cognition)
- The role of the motor system in generating creative thoughts(Heath E. Matheson, Yoed N. Kenett, 2020, NeuroImage)
- Capturing Movement: A Tablet App, Geometry Touch, for Recording Onscreen Finger-Based Gesture Data(Stoo Sepp, Sharon K Tindall-Ford, Shirley Agostinho, F. Paas, 2024, IEEE Transactions on Learning Technologies)
- The Cognitive Affective Model of Motion Capture Training: A Theoretical Framework for Enhancing Embodied Learning and Creative Skill Development in Computer Animation Design(Xinyi Jiang, Zainuddin Ibrahim, Jing Jiang, Jiafeng Wang, Gang Liu, 2026, Computers)
- A Gesture Recognition Method Based on Spiking Neural Networks for Cognition Development(D. Niu, Dengju Li, Rui Yan, Huajin Tang, 2018, No journal)
- Semantically Related Gestures Move Alike: Towards a Distributional Semantics of Gesture Kinematics(W. Pouw, J. D. Wit, S. Bögels, Marlou Rasenberg, B. Milivojevic, A. Özyürek, 2021, No journal)
- A Career Dedicated to Gesture, Language, Learning, and Cognition: Susan Goldin-Meadow, 2021 Recipient of the Rumelhart Prize(M. Alibali, S. Cook, 2025, Topics in cognitive science)
- Transfer learning approaches in deep learning for Indian sign language classification(Tuhina Sheryl Abraham, S. P. Sachin Raj, A. Yaamini, B. Divya, 2022, Journal of Physics: Conference Series)
- Interaction and reflection via 3D path shape qualities in a mediated constructive learning environment(Kai Tu, H. Thornburg, E. Campana, David Birchfield, Matthew Fulmer, A. Spanias, 2007, No journal)
- Speaking without Thinking: Embodiment, Speech Technology and Social Signal Processing(T. Rohrer, 2010, No journal)
- Point-light Talkers: Multisensory Enhancement of Speech Tracking by Co-speech Movement Kinematics(Jacob P Momsen, Seana Coulson, 2025, Journal of cognitive neuroscience)
- M2ConceptBase: A Fine-grained Aligned Multi-modal Conceptual Knowledge Base(Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao, 2023, ArXiv)
- Obtaining Discriminative Colour Names According to the Context: Using a Fuzzy Colour Model and Probabilistic Reference Grounding(Zoe Falomir, Vicent Costa, L. G. Abril, 2019, Int. J. Uncertain. Fuzziness Knowl. Based Syst.)
- A Systematic Investigation of Gesture Kinematics in Evolving Manual Languages in the Lab(W. Pouw, Mark Dingemanse, Yasamin Motamedi, A. Özyürek, 2021, Cognitive Science)
- Kin Cognition and Communication: What Talking, Gesturing, and Drawing About Family Can Tell us About the Way We Think About This Core Social Structure(Simon Devylder, Jennifer Hinnell, Joost van de Weier, Linea Brink Andersen, Lucie Laporte-Devylder, Heron Ken Tomaki Kulukul, 2024, Cognitive science)
- What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures.(R. Raghavan, Limor Raviv, David Peeters, 2023, Cognition)
- Neuro-Ecological Dynamics in Digital Second Language Acquisition: Investigating the Cognitive-Environmental Interplay in Technology-Mediated Learning(Prof. Christopher Andrew Langat, 2025, International Journal of Language, Linguistics, Literature and Culture)
Embodied Applications in Interdisciplinary Practice, Special Education, and Cultural Heritage
This group demonstrates how embodied learning is applied across diverse domains, including: intervention training for children with autism or learning difficulties; embodied instructional design in medicine, law, ideological education, and intangible cultural heritage preservation (e.g., Peking Opera, dragon-boat culture); and health promotion and rehabilitation training combining wearables and gamification (e.g., stroke rehabilitation, exercise for older adults).
- The Autism Open Clinical Model (A.-O.C.M.) as a Phenomenological Framework for Prompt Design in Parent Training for Autism: Integrating Embodied Cognition and Artificial Intelligence(Flavia Morfini, Sebastian G. D. Cesarano, 2025, Brain Sciences)
- An Embodied Cognition Approach to Digital Interaction Design for Shaanxi Intangible Cultural Heritage(Bo Wu, Yi Wang, 2025, Proceedings of the 2025 International Conference on Generative AI and Digital Media Arts)
- Immersion and Resonance: Constructing an Embodied Cognition Teaching Model for Ideological Education Empowered by VR Technology(Zhengyi Zhang, 2025, Education Reform and Development)
- Integration of Digital Media in Dance Education: New Media as Catalysts for Educational Innovation(Linan Liu, Keyu Shi, 2025, Global Review of Humanities, Arts, and Society)
- Research on the Innovation of Law Case Teaching Mode Assisted by Artificial Intelligence-Paradigm Reconstruction Based on Embodied Cognition Theory(Xing Liu, XiaoYan Yang, 2025, Higher Education and Practice)
- Implementation of Visual, Auditory, Kineshthetic, Tactile Model Learning System to Help Mild Retarded Children in Alphabetical and Numeric Learning(R D Agustia, I N Arifin, 2018, IOP Conference Series: Materials Science and Engineering)
- Let's Play a Game! Kin-LDD: A Tool for Assisting in the Diagnosis of Children with Learning Difficulties(E. Chatzidaki, M. Xenos, Charikleia Machaira, 2019, Multimodal Technol. Interact.)
- Empirical Study on the Embodied Cognition Effect of AI Educational Tools: The Relationship between Learners' Physical Engagement and Deep Learning Outcomes(Yuelin Ding, Lyu Huimin, Dingding Guo, Lan Li, 2025, 2025 International Conference on Distance Education and Learning (ICDEL))
- Development and Evaluation of Online Approaches for Improved Kinaesthetic Learning in Science(Anna Scanlan, D. Kennedy, Tommie V. McCarthy, 2021, 7th International Conference on Higher Education Advances (HEAd'21))
- Enactivism and Game Authoring: A study of Teacher Online Experience from Sociocultural Perspectives(Qing Li, Joseph Runciman, 2022, Journal of Educational Thought / Revue de la Pensée Educative)
- Affording embodied cognition through touchscreen and above-the-surface gestures during collaborative tabletop science learning(Nikita Soni, Alice-Ann Darrow, Annie Luc, Schuyler Gleaves, Carrie Schuman, Hannah Neff, Peter Chang, Brittani Kirkland, Jeremy Alexandre, Amanda Morales, K. Stofer, Lisa Anthony, 2021, International Journal of Computer-Supported Collaborative Learning)
- Enhancing E-Learning Experience Through Embodied AI Tutors in Immersive Virtual Environments: A Multifaceted Approach for Personalized Educational Adaptation(Fatemeh Sarshartehrani, Elham Mohammadrezaei, Majid Behravan, Denis Gračanin, 2024, No journal)
- Affordances of Physical Objects in Computing Instruction from an Embodied Cognition Perspective(Colleen M. Lewis, Robb Lindgren, 2025, ACM Transactions on Computing Education)
- A Concept of Visual Programming Tool for Learning VHDL(A. Ivanova, 2021, IOP Conference Series: Materials Science and Engineering)
- Embodied learning in a virtual mathematics classroom: an example lesson(C. Smith, 2023, International Journal of Mathematical Education in Science and Technology)
- Analogies and Active Engagement: Introducing Computer Science(Jennifer Parham-Mocello, Martin Erwig, M. Niess, 2024, Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1)
- CARBGAME (CArd & Board GAmes in Medical Education) — A Gamification Innovation in Physiology Education to Promote Active and Team- Based Learning(K. M. Surapaneni, 2024, Physiology)
- Embodied learning of science concepts through augmented reality technology(Nasser Mansour, Ceren Aras, J. K. Staarman, S. Alotaibi, 2024, Education and Information Technologies)
- Gamified Teaching Strategies from the Perspective of Embodied Cognition Theory(Lin Xiao, 2024, International Journal of Education and Humanities)
- Embodied Learning Through Drama-Based Situatedness Using Immersive Technology in the Classroom(Jen-Hang Wang, Mahesh Liyanawatta, Chia-Ying Lee, Yu-Ling Huang, Su-Hang Yang, Gwo-Dong Chen, 2023, 2023 IEEE International Conference on Advanced Learning Technologies (ICALT))
- Immersive Classic Literature with VR: A Study of Emotional and Thematic Engagement Through Embodied Cognition(O. Ismail, A. Alkhayat, 2025, 2025 Eighth International Women in Data Science Conference at Prince Sultan University (WiDS PSU))
- Embodied Learning in Immersive Smart Spaces(M. Gelsomini, Giulia Leonardi, F. Garzotto, 2020, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems)
- Investigating learning behaviors in desktop-based simulated and vr headset-based immersive 3D learning environments: a cross-media comparative study(Yuting Deng, Yanling Zhang, Ruibin Zhao, 2025, Education and Information Technologies)
- Wearables as Augmentation Means: Conceptual Definition, Pathways, and Research Framework(A. Sesay, J. Steffen, 2020, No journal)
- Designing Age-Friendly Motion-Based Exergames: A Human-Computer Interaction and Embodied Cognition Perspective(Yanchong Huang, Siyang Liu, 2025, Proceedings of the 2025 International Conference on Artificial Intelligence, Virtual Reality and Interaction Design)
- Integrating embodied cognition with the UTAUT model to investigate factors influencing the adoption of home-based health monitoring systems(Zhen Zhao, Kaifeng Liu, She Lyu, S. J. Wang, Yun Hei Chak, Hailiang Wang, 2025, Health Informatics Journal)
- A protocol for the comparison of reaching gesture kinematics in physical versus immersive virtual reality(Sara Arlati, N. Keijsers, G. Ferrigno, M. Sacco, 2020, 2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA))
- Embodied Cognition and Value Internalization: Generative Logic and Practical Paths of Immersive Red Culture Study Empowered by Digital Intelligence(Zekai Chen, Feng Zhong, Yingmei Li, 2025, International Journal of Education and Social Development)
- Research on the Design of Peking Opera Performance Culture Education Based on Embodied Cognition Theory and Convolutional Neural Network(Heng Shao, Mengce Di, 2024, Proceedings of the 2nd International Conference on Educational Knowledge and Informatization)
- Performance Comparison of Convolutional and Multiclass Neural Network for Learning Style Detection from Facial Images(F. L. Gambo, G. Wajiga, Liyana Shuib, E. J. Garba, Aisha A. Abdullahi, Desmond Bala Bisandu, 2018, EAI Endorsed Trans. Scalable Inf. Syst.)
- Learning Style Tendency Analysis for Vocational Students(K. Agustini, I. M. Tegeh, 2019, Journal of Physics: Conference Series)
- Becoming An Animal? Exploring Proteus Effect Based on Human-avatar Hand Gesture Consistency(Tangjun Qu, Junjie Wang, Yongjiu Lin, Juan Liu, Chao Zhou, Baiqiao Zhang, Kaiyuan Jiang, Yulong Bian, 2024, 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR))
- Creating awareness of kinaesthetic learning using the Experience API: current practices, emerging challenges, possible solutions(M. Megliola, Gianluigi Di Vito, Fridolin Wild, P. Lefrere, Roberto Sanguini, 2014, No journal)
- From conception to fruition: Co-designing a digital exhibit incorporating embodied cognition to encourage young children’s computational thinking in a Science Discovery Centre(K. Murcia, Geoff Lowe, M. Mavilidi, Emma Cross, Michelle De Kok, William Peng, 2024, The Australian Educational Researcher)
- Robotic Assessment of Wrist Proprioception During Kinaesthetic Perturbations: A Neuroergonomic Approach(E. D'Antonio, E. Galofaro, J. Zenzeri, F. Patané, J. Konczak, M. Casadio, L. Masia, 2021, Frontiers in Neurorobotics)
- Path Learning by Demonstration for Iterative Human–Robot Interaction With Uncertain Time Durations(Deqing Huang, Jingkang Xia, Chenjian Song, Xueyan Xing, Yanan Li, 2024, IEEE Transactions on Cognitive and Developmental Systems)
- Impact of computerized simulation education, video-based learning, and kinesthetic learning in delivering the wrist and hand assessment among physiotherapy students – A qualitative study(Raveena Kini, Vrushali Panhale, 2025, Adesh University Journal of Medical Sciences & Research)
- Exploring the kinaesthetic sensitivity of skilled performers for implementing movement instructions.(Georgia Giblin, D. Farrow, M. Reid, K. Ball, B. Abernethy, 2015, Human movement science)
- Developing a Novel fMRI-Compatible Motion Tracking System for Haptic Motor Control Experiments(M. Rodríguez-Ugarte, Anastasia Sylaidi, A. Faisal, 2014, No journal)
- Visual learning style-based chemistry mental model representation through transformative online learning(A. Winarti, A. Almubarak, P. Saadi, 2021, Journal of Physics: Conference Series)
- A natural user interface game for the evaluation of children with learning difficulties(E. Chatzidaki, M. Xenos, Charikleia Machaira, 2017, No journal)
- Reshaping Craft Learning: Insights from Designing an AI-Augmented MR System for Wheel-Throwing(Hongfei Ji, Peiyu Hu, Dina El-Zanfaly, 2025, Proceedings of the 2025 ACM Designing Interactive Systems Conference)
- A Study on Multi-modal Interactive Experience Design of Cantonese Dragon Boat Culture from the Perspective of Embodied Cognition(Shangzhong Lei, Liang Tan, 2025, Proceedings of the 2025 International Conference on Artificial Intelligence, Virtual Reality and Interaction Design)
- Application Model of Museum Cultural Heritage Educational Game Based on Embodied Cognition and Immerse Experience(Jingwen Zhang, Tong Zhu, Cheng Hu, 2025, ACM Journal on Computing and Cultural Heritage)
- A Study on Immersive Experience Characteristics Based on Embodied Cognition : Focusing on Archaeological Site Museums in China in the Third Quarter of 2024(Manqi Wang, HyunA Park, 2024, Korea Institute of Design Research Society)
- The Willful Marionette: Modeling Social Cognition Using Gesture-Gesture Interaction Dialogue(M. Mahzoon, M. Maher, Kazjon Grace, Lilla LoCurto, Bill Outcault, 2016, No journal)
- Virtual environments and autism: a developmental psychopathological approach(Gnanathusharan Rajendran, 2013, J. Comput. Assist. Learn.)
- Participatory Digital Media Model for International Chinese Language Education: Mechanisms, Empirical Validation, and Strategic Pathways(Jie Zhang, Kotchaphan Youngmee, Khachakrit Liamthaisong, 2025, Journal of Cultural Analysis and Social Change)
- Learning Novel Words in an Immersive Virtual‐Reality Context: Tracking Lexicalization Through Behavioral and Event‐Related‐Potential Measures(Lu Jiao, Yue Lin, John W. Schwieter, Cong Liu, 2025, Language Learning)
- Accessible American Sign Language Learning in Virtual Reality via Inverse Kinematics(Jeremy Immanuel, Santiago Berrezueta-Guzman, 2025, Virtual Worlds)
- Learning Styles, Engagement and Anxiety in AI‐Mediated Writing: A Multimodal Feedback Study(Yi Ren, Wei Lun Wong, J. Barrot, L. Zhang, 2026, International Journal of Applied Linguistics)
Taken together, the groups form a complete research system for embodied learning, spanning theoretical foundations, technological infrastructure, and end applications. The theory group consolidates the foundations of 4E cognition and neural mechanisms; the XR and embodied-AI groups provide technical pathways along the two dimensions of environment construction and intelligent interaction; the gesture and motion group dissects the micro-level process by which bodily experience becomes cognitive capability; and the interdisciplinary-application group demonstrates the broad social value of embodied learning in educational equity, medical rehabilitation, and cultural heritage. The overall trend shows embodied learning evolving from a single psychological concept into an interdisciplinary educational paradigm that integrates AI, XR, and multimodal interaction.
A total of 149 related references.
This empirical study investigates the embodied cognition effect of AI educational tools, focusing on the relationship between learners' physical engagement and deep learning outcomes. Data from 272 learners across diverse age groups and academic backgrounds were collected via questionnaires. Statistical analyses, including correlation and regression, reveal significant positive associations between physical engagement dimensions (body posture, gesture actions, physiological arousal) and deep learning performance. These findings highlight the importance of integrating embodied cognition principles into AI educational tool design, offering practical implications for enhancing learning effectiveness.
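The abstract's analysis pipeline (correlations plus a regression of a deep-learning score on the three engagement dimensions) can be sketched as follows; the data, loadings, and variable names below are hypothetical illustrations, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 272  # matches the study's reported sample size

# Hypothetical standardized engagement dimensions: posture, gesture, arousal
X = rng.normal(size=(n, 3))
# Synthetic deep-learning score with assumed positive loadings plus noise
y = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

# Pearson correlation of each dimension with the outcome
corrs = [np.corrcoef(X[:, j], y)[0, 1] for j in range(3)]

# Ordinary least squares: prepend an intercept column and solve
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

print("correlations:", np.round(corrs, 2))
print("coefficients (intercept, posture, gesture, arousal):", np.round(beta, 2))
```

With positively loaded synthetic data, both the correlations and the regression coefficients come out positive, mirroring the direction of association the abstract reports.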
This study developed design principles for an augmented paper-based mathematics learning environment (APMLE) from an embodied cognition perspective and applied them to a lesson on exploring the volume of a sphere. The AR mathematics worksheet (ARMW), which overlays digital content onto static, physical paper-based worksheets, supports intuitive and immersive learning by integrating the advantages of paper media with handheld AR-based digital tools. This study investigates how such environments affect students' conceptual understanding in mathematics. Through a literature review, seven conceptual components were identified to form initial design principles. These principles were refined through expert validation and usability testing to ensure internal validity. The revised principles were then implemented in a classroom lesson and evaluated through retrospective analysis to examine their external validity. As a result, seven final design principles were developed: (1) the Principle of Mathematical Concept Orientation, (2) the Principle of Autonomy, (3) the Principle of Connection, (4) the Principle of Action Affordance, (5) the Principle of Perceptual Structure Formation, (6) the Principle of Reflection through Multiple Modalities and Formalization, and (7) the Principle of AR Tool Incorporation. In actual classroom settings, students engaged in embodied exploration by interacting with both the AR tool and the paper worksheet, using visual and verbal representations to construct mathematical meaning. The results demonstrate that APMLE can serve not only as a visual aid but also as a central medium for supporting embodied learning and conceptual development in mathematics. This study presents both theoretical and practical implications for the design and application of AR-based tools in educational contexts.
No abstract available
Abstract This study develops a mathematical model of embodied cognition integrated with personalized learning within an interactive Virtual Reality (VR) environment, employing nonlinear systems theory to optimize educational outcomes. The study mathematically formulates the interactions between learners’ cognitive states, bodily actions, and environmental inputs by applying state-space models and multi-modal data analysis. The framework incorporates real-time affective feedback and dynamic adjustment mechanisms, improving mental performance and reducing cognitive load. A randomized controlled trial validates the model, demonstrating significant enhancements in learning satisfaction and knowledge retention. This research provides a novel quantitative basis for integrating nonlinear mathematical methods into the design and optimization of personalized learning systems in VR.
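The abstract does not give the model equations. As a purely illustrative sketch of the kind of discrete-time nonlinear state-space update it describes — the function name, coefficients, and the tanh nonlinearity are assumptions, not taken from the paper:

```python
import math

def cognitive_state_update(state, action, env_input, alpha=0.3, beta=0.2):
    """One step of a hypothetical nonlinear state-space model: the
    learner's cognitive state is driven by bodily action and environmental
    input through a saturating (tanh) nonlinearity. Illustrative only."""
    drive = alpha * action + beta * env_input
    # The tanh term bounds the per-step change, making the dynamics nonlinear
    return state + math.tanh(drive - state)

# Simulate a short learning episode with constant inputs; the state
# settles at the equilibrium alpha*action + beta*env_input = 0.4
x = 0.0
for _ in range(50):
    x = cognitive_state_update(x, action=1.0, env_input=0.5)
```

In a full model of this kind, the "dynamic adjustment mechanism" the abstract mentions would correspond to updating inputs such as `action` and `env_input` from real-time multimodal measurements rather than holding them constant.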
Cultural heritage museum educational games can create an adaptive, immersive learning atmosphere that triggers and supports the development of multiple intelligences. By integrating various intelligences into tips, questions, and game challenges, they become efficient activity platforms that guide, stimulate, and strengthen multiple intelligences, creating opportunities for users to learn knowledge and skills, gain social support, and improve self-efficacy. At the same time, a complete analysis of museum cultural-heritage educational games must also evaluate their influence on higher-level knowledge and emotion, so as to promote a multi-dimensional, well-rounded evaluation of museum learning effects. It is therefore necessary to provide museums with effective references for realizing educational activities that combine embodied experience with virtual immersion. We propose an application model of museum educational games based on embodied cognition theory and immersion theory. Building on this model, we choose multiple-intelligences theory as the basis for evaluating museum cultural-heritage educational games and offer practical teaching aids and suggestions to promote the design and implementation of such games in museums.
No abstract available
No abstract available
No abstract available
With the fast evolution of Virtual Reality (VR) technology, new prospects have opened for embodied learning. Learners can now manipulate digital representations of abstract concepts and make sense of them through sensorimotor stimulation. However, in research, embodiment is explored from several perspectives which, we argue, should be considered within the same framework. In this paper, we describe three major perspectives relevant to embodied learning in VR: embodied cognition, embodied interaction, and avatar embodiment. We organize these perspectives within one common interdisciplinary framework and discuss the resulting design opportunities for VR embodied-learning interactions. Specifically, we show that embodied interaction does not necessarily support embodied cognition, and that breaking the recommendations of avatar embodiment can actually support meaning-making. We believe our work offers novel avenues for future research and will foster interesting conversations in the HCI community.
Background: The embodied cognition perspective is a collection of theories that all explicate a fundamental relationship between the perceptions, actions, and movement of the body and how humans think and reason. The application of the embodied cognition perspective to instructional interventions—often referred to as embodied learning design—has the potential to aid computing education researchers and practitioners who want to meaningfully integrate more sensory and “hands on” activities in their instruction. Purpose: The objective of this paper is to describe a process for, and to model, how computing designers and instructors can interrogate the embodied affordances of pedagogical activities in computing. Analytic Approach: We describe 12 embodied affordances present across six pedagogical activities. Our goal is to model the analysis of pedagogical activities for their embodied affordances. For coherence, we selected six pedagogical activities related to arrays in Java. Implications: Embodied learning offers theoretical ideas and a process for interrogating the characteristics of visual and physical resources in computing education. As with any design process, iteration and assessment are necessary to ensure that an embodied learning intervention is effective, but previous research suggests that learning is increased when students’ perceptual and physical interactions align with learning goals.
The rapid development of artificial intelligence technology is triggering profound changes in the field of basic education, presenting new challenges to primary and secondary school teachers. Embodied cognition theory offers a fresh perspective on understanding these changes. Based on the tripartite body-environment-cognition view of embodied cognition theory, this study systematically explores the teaching-practice challenges primary and secondary school teachers face in the age of AI: the tension between holistic development and teachers' overemphasis on knowledge transmission, and the tension between increasingly intelligent teaching tools and teachers' weak technical skills. Corresponding strategies are proposed to address these challenges, such as shifting from disembodied educational concepts to a holistic educational philosophy that integrates body and mind, focusing on immersive embodied learning environments, continuously enhancing information literacy, and clarifying the educational value of intelligent technologies. The aim is to provide theoretical foundations and practical guidance for primary and secondary school teachers navigating educational transformation in the intelligent era, promoting an organic unity of technological empowerment and educational authenticity, and driving high-quality development in basic education.
Guided by the theory of embodied cognition, this paper explores the design framework of immersive Virtual Reality (VR) classrooms and their impact on learners' cognitive transfer. Through theoretical analysis and practical research, it is found that VR learning environments based on multi-sensory interaction can significantly improve the efficiency of knowledge internalization and promote the development of cross-situational transfer abilities. The research results provide theoretical and practical references for embodied learning design in the field of educational technology. This paper aims to provide a scientific guiding framework for the development and application of VR educational products through systematic research, and to promote innovation and development in educational technology.
In the new round of educational reform, large-scale personalization and teaching precision have become the current educational demands and practical challenges. Faced with the bottlenecks existing in the current precision teaching model, such as single data source, detachment from the needs of learning subjects, and limited research scope, from the perspective of embodied cognition, it is of typical significance to use metaverse technology to construct diverse teaching fields, conduct more accurate data analysis on learners in an intelligent environment, and build a generative knowledge system. Through the guidance of precise teaching design, strong correlation interaction characteristics are achieved, and the interactive relationship among individuals, activities and environments in the metaverse precision teaching scenarios and models is established. The intelligent environment provides adaptive feedback to individual learners, and conducts “intelligent adaptive” multi-modal spatial push to learners based on feedback data, thereby forming a data feedback mechanism and creating an operation system for classroom fields, so as to realize precise teaching supply.
Traditional architectural pedagogy struggles to convey true spatial depth through flat media and, when VR is introduced, often neglects the cybersickness (CS) that undermines both comfort and learning. We propose a four-phase "Theory-Experiment-Analysis-Design" model integrating embodied cognition. Mirror experiments and multimodal data show that enhanced embodiment reduces CS, enabling data-driven spatial interventions. In this study, a controlled VR experiment with mirror/non-mirror conditions measured physiological (EDA, HRV) and psychological (SSQ, IPQ) indicators across four stages: theoretical framing, multimodal data collection, statistical analysis, and design transformation into "Window-Mirror" interactive prototypes. Cross-disciplinary guidance ensured technical rigor. Post-course assessments (N=39) revealed that (1) 61.54% demonstrated a profound understanding of the sense of embodiment (SoE), (2) 64.1% achieved expert VR operation proficiency, (3) 74.4% reported improved spatial interaction skills, and (4) 87.18% preferred VR environments over traditional media. However, only 41% demonstrated the ability to analyze the experimental data's implications for design. The framework effectively integrated human factors into pedagogy, offering empirical CS mitigation strategies for immersive environments.
Introduction Natural science education, as an important means to improve the scientific literacy of citizens, combines science education games with virtual reality (VR) technology and is a major developmental direction in the field of gamified learning. Methods To investigate the impact of VR science education games on learning efficiency from the perspective of embodied cognition, this study uses the China National Knowledge Infrastructure (CNKI) and Web of Science (WOS) databases as the main source of samples. A meta-analysis of 40 studies was conducted to examine teaching content, game interaction, and immersion mode. Results The study found that (1) VR science and education games have a moderately positive impact on the overall learning effect; (2) regarding teaching content, the learning effect of skill training via VR science and education games is significant; (3) regarding interaction form, the learning effect on active interaction is significantly better than that of passive interaction; (4) regarding immersion mode, somatosensory VR games have a significant impact on the enhancement of students’ learning; (5) regarding application disciplines, VR science education games have a greater impact on science, engineering, language and other disciplines; (6) regarding academic segments, the learning effect on college students is most significant; and (7) regarding experimental intervention time, short-term intervention is most effective. Discussion Accordingly, this article proposes strategies for VR science game design from the perspective of embodied cognition: a five-phase strategy including skill training, human-computer interaction, and environmental immersion, aiming to improve the learning effect and experience of users.
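Meta-analytic pooling of the kind reported here rests on inverse-variance weighting of per-study effect sizes. A minimal sketch of the basic fixed-effect step, with made-up study values for illustration (the actual analysis may use a random-effects model and subgroup moderators):

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size: the basic fixed-effect
    step behind an 'overall learning effect' estimate in a meta-analysis.
    More precise studies (smaller variance) get proportionally more weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Two hypothetical studies with equal precision simply average their effects
g = pooled_effect([0.2, 0.6], [0.1, 0.1])  # -> 0.4
```

With unequal variances, the pooled estimate shifts toward the more precise study, which is why moderator analyses (teaching content, interaction form, immersion mode) report weighted rather than raw means.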
Abstract: This research explores how embodied cognition can be computationally modeled and enhanced through the integration of Artificial Intelligence (AI) and Extended Reality (XR). Moving beyond traditional disembodied approaches to human-computer interaction, the study employs immersive, adaptive XR environments augmented by AI agents to simulate perception-action loops, contextual learning, and emotional responsiveness. Through mixed-method experiments involving biometric tracking, cognitive task performance, and real-time environmental adaptation, the study demonstrates that AI-XR systems significantly improve cognitive engagement, task accuracy, and situational awareness. The findings establish a foundational framework for next-generation human-machine symbiosis, offering scalable applications in education, therapy, and human augmentation. Keywords: Embodied cognition, Artificial Intelligence, Extended Reality, Human-computer interaction, Neuroadaptive systems, Cognitive performance, Biometric analytics, Immersive learning, Human-machine symbiosis, Adaptive environments
Embodied cognition theory underscores the importance of the interactions among the mind, body and environment, and instructors can make use of these connections to support learning through body-based learning activities. However, synchronous online learning presents challenges to this type of embodied pedagogy, as learning through physical actions is conspicuously absent. Over the course of a semester, I experimented with methods for engaging students in body-based learning during remote instruction. In this Classroom Note, I present an example of a lesson taught synchronously online, describe the technology used, and describe the opportunities for embodied reasoning during each activity.
This theme issue brings together researchers from diverse fields to assess the current status and future prospects of embodied cognition in the age of generative artificial intelligence. In this introduction, we first clarify our view of embodiment as a potentially unifying concept in the study of cognition, characterizing this as a perspective that questions mind–body dualism and recognizes a profound continuity between sensorimotor action in the world and more abstract forms of cognition. We then consider how this unifying concept is developed and elaborated by the other contributions to this issue, identifying the following two key themes: (i) the role of language in cognition and its entanglement with the body and (ii) bodily mechanisms of interpersonal perception and alignment across the domains of social affiliation, teaching and learning. On balance, we consider that embodied approaches to the study of cognition, culture and evolution remain promising, but will require greater integration across disciplines to fully realize their potential. We conclude by suggesting that researchers will need to be ready and able to meet the various methodological, theoretical and practical challenges this will entail and remain open to encountering markedly different viewpoints about how and why embodiment matters. This article is the part of this theme issue ‘Minds in movement: embodied cognition in the age of artificial intelligence’.
Embodied Cognition theory asserts that the physical body, its environment, and the interplay between them hold a pivotal role in the process of embodied learning. In comparison to traditional and computer-based learning methods, Virtual Reality (VR) enhances embodied learning, primarily owing to its capacity to offer a sense of situatedness and engage learners through physical interactions. Furthermore, the incorporation of haptic sensations in VR-based learning introduces tactile sensory memory, complementing the existing auditory and visual sensory dimensions. Despite the evident advantages of VR in facilitating embodied learning, the current body of literature has yet to delve into the dynamic behaviors exhibited by learners within Virtual Reality Learning Environments (VRLEs) during embodied interactions, and their inherent relationship with the embodiment phenomenon. In this paper, we take a significant step by integrating the Interaction Behavioral Data (IBD) collection mechanism, a development from our previous work, into a VRLE. This integration is achieved by adopting a structured framework for embodied learning activities within a VR context. Through an empirical study involving 14 participants, we meticulously logged their interaction traces using the IBD logger. Subsequently, we effectively extracted the diverse embodied learning activities carried out by these participants within the VRLE, from the interaction trace data collected in the IBD logger. Our endeavor aims to establish meaningful connections between these embodied interaction activities and the overarching concept of embodiment. These correlations, once identified, will serve as invaluable insights for VR content developers, offering clear guidelines for the design and creation of VRLEs that optimally facilitate enhanced learning experiences. 
Ultimately, this knowledge transfer will not only empower instructors and learners to harness VR as a potent educational tool within their regular teaching and learning practices but also foster the seamless integration of VR into mainstream education.
No abstract available
No abstract available
Introduction Embodiment in augmented reality has attracted increasing attention in educational research. This study investigated how the degree of embodied experience in augmented reality affects high school students’ learning achievement, cognitive load, and technology acceptance. Methods Drawing on embodied cognition and the Cone of Experience, augmented reality learning tasks were designed with different degrees of embodied experience and implemented in biology instruction on cell structure. A total of 122 Chinese high school students participated and were assigned to either low- or high-embodiment augmented reality experiences. Data were analyzed using analyses of covariance, with learners’ prior knowledge scores entered as a covariate to control for pre-existing differences in knowledge level. Results Students who engaged in high-embodiment augmented reality achieved better learning performance in both knowledge retention and transfer, and they also reported significantly lower cognitive load. In terms of technology acceptance, high-embodiment augmented reality enhanced perceived usefulness, while low-embodiment augmented reality was associated with higher perceived ease of use. Discussion These findings demonstrate that the degree of embodiment in augmented reality experiences differentially influences learning achievement, cognitive load, and technology acceptance, offering empirical evidence and practical guidance for optimizing embodied augmented reality design in education.
There has been a surge in interest in and implementation of motion capture (MoCap)-based lessons in animation, creative education, and performance training, leading to an increasing number of studies on this topic. While recent studies have summarized these developments, few have been conducted that synthesize existing findings into a theoretical framework. Building upon the Cognitive Affective Model of Immersive Learning (CAMIL), this study proposes the Cognitive Affective Model of Motion Capture Training (CAMMT) as a theoretical and research-based framework for explaining how MoCap fosters creative cognition in computer animation practice. The model identifies six affective and cognitive constructs: Control and Active Learning, Reflective Thinking, Perceptual Motor Skills, Emotional Expressive, Artistic Innovation, and Collaborative Construction that describe how MoCap’s technological affordances of immersion and interactivity support creativity in animation practice. The findings indicate that instructional and design methods from less immersive media can be effectively adapted to MoCap environments. Although originally developed for animation education, CAMMT contributes to broader theories of creative design processes by linking cognitive, affective, and performative dimensions of embodied interaction. This study offers guidance for researchers and designers exploring creative and embodied interaction across digital performance and design contexts.
No abstract available
This research explores the application of embodied cognition theory in the learning of facial expressions in Peking Opera, particularly how mimicking these expressions through facial recognition can deepen understanding and dissemination of Peking Opera culture. As an integral part of China's intangible cultural heritage, facial expressions in Peking Opera are crucial for understanding characters and plot. This study aims to demonstrate that mimicking these expressions can serve as an effective learning and dissemination tool, facilitating a deeper appreciation of Peking Opera culture. Using a mixed-methods approach, the study divided participants into experimental and control groups, with the experimental group learning through activities that involved mimicking Peking Opera facial expressions, while the control group used traditional observational learning methods. The study assessed changes in participants' understanding of Peking Opera culture before and after the learning activities, as well as their experiences and feedback on the learning process. Results indicated that the experimental group exhibited a deeper understanding of Peking Opera culture and higher learning interest post-study, highlighting the potential of embodied learning activities in cultural education. By mimicking facial expressions, participants not only enhanced their cognition of Peking Opera characters and emotions but also their understanding of the cultural background of Peking Opera. To enhance the analysis and feedback of facial expressions, a Convolutional Neural Network (CNN) based method was utilized. The CNN model effectively analyzed the facial expressions and provided real-time feedback to learners. These findings support the practical application of embodied cognition theory and offer a new perspective and method for Peking Opera education. 
This research underscores the importance of embodied cognition activities in promoting cultural learning and dissemination, providing insights for future educational practices that integrate technology and traditional culture.
From the perspective of embodied cognition theory, cognitive processes are deeply intertwined with bodily interactions and sensory experiences. Integrating this theoretical perspective, the paper proposes a series of gamified teaching strategies: kinesthetic learning activities, gesture-based learning, augmented reality (AR) environments, interactive storytelling, physical manipulatives, and virtual reality (VR) simulations. These approaches aim to engage multiple sensory modalities and motor functions, fostering a holistic learning experience. These gamified teaching methodologies offer a promising pathway for enhancing educational outcomes and make English teaching more engaging and effective for non-native English learners.
Abstract: Engineering design is an important aspect of engineering education. The essence of the engineering design process has been conveyed to students in diverse ways, from formal capstone and cornerstone projects to informal processes such as hackathons. The interdisciplinary nature of engineering design projects often proves challenging to students in multiple ways. As informal opportunities, hackathons have the potential to acquaint students with several skills key to interdisciplinary engineering design. This paper investigates the contribution of one such hackathon, a medical device innovation hackathon, in supporting student understanding and learning of the engineering design process. Specifically, it examines the influence embodied cognition has on supporting student understanding of an engineering design problem that cuts across multiple disciplines. In the case study, we describe an episode where a team of students goes through the gradual process of comprehending the design problem, along with the accompanying design complexities, through descriptive narration and simulation.
No abstract available
Immersive virtual environments (IVEs) are increasingly being explored as potential educational tools. However, it is unclear which aspects of IVEs contribute to learning, including hardware modalities and learner responses (e.g. motivation, usability, cognitive load and presence). One IVE hardware modality particularly backed by theory is embodied controls, with their potential for leveraging embodied cognition for enhanced learning outcomes. This paper explores if embodied controls can be leveraged to enhance learning in an IVE by comparing language learning outcomes from an IVE using embodied controls, and a non-embodied control. It explores two words classes - verbs and nouns - to examine if there is a difference in learning outcome for embodied controls with actions (verbs) and object interactions (nouns). This paper also explores co-variables often linked with IVE learning (motivation, presence, cognitive load) to understand why learning gain occurs. It finds that leveraging embodied controls provides better learning outcomes, with no impact on cognitive load. It also finds that the benefit does not correlate with motivation or presence ratings, suggesting that embodiment-induced motivation or immersion is not the cause of the learning enhancements, and therefore this could be evidence for embodied cognition-based learning in IVEs.
At present, virtual simulation experiments are commonly used in teaching practice, greatly enriching students' learning experience. However, students still face limited interactive experience, poor participation, and a low degree of freedom when interacting through virtual simulation experiments. With the spread of virtual reality, somatosensory interaction, and other new human-computer interaction technologies, the innovative method of learning through physical experience has attracted extensive attention in academic circles. Based on the theory of embodied cognition, this paper constructs an interaction model for virtual simulation experiments. The model emphasizes the embodiment of cognition and the embeddedness of the cognitive environment, enhances students' immersion, effectively stimulates their interest in learning, and realizes dynamic interaction among students, learning resources, and teachers in the virtual environment. The construction and development of such virtual experimental resources can serve as a reference.
Within the field of education, the concept of active learning, building on constructivism, has emerged as a dominant framework over the past three decades. This perspective is critical of the objectivist idea that knowledge is something static, an object to be acquired from the external world. Instead, it holds that the learner is responsible for knowledge construction and therefore should become autonomous towards this goal. From an epistemological point of view, despite the important shift of assumptions this viewpoint has brought to education, constructivism still presents some shortcomings in terms of changing the instructional paradigm. This paper takes a step forward and explores enactivism as an alternative philosophical and educational worldview. It presents a theoretical discussion of the enactivist perspective and its differences from objectivism and constructivism. Enactivism proposes a more radical alternative to the dualistic and objectivist approach, as it focuses on the intertwined, multiple interactions between mind, body, and the environment. The two main perspectives of enactivism, which we group into the categories of "embodied cognition" and "situated cognition", are present in the field of education. The paper relates them to the two core concepts of reflection and intentionality. Drawing on these theoretical considerations, the paper applies the framework of enaction to fieldwork research in a Danish school, discussing how this concept may provide new lenses for understanding the potential of participatory approaches to the implementation of a digital learning platform. The intervention was organised through two workshops. The first used the technique of the future workshop (Jung & Müller, 1984), which includes a critique phase and a fantasy phase. The second workshop (14 days later) was a design workshop.
This intervention is an example of how to understand enactive modelling, considering the relations between the participants and the environment as a dynamic, emerging relation of autonomy-dependency, a symbiosis. The analysis shows that the implementation takes place within an ecological living system made up of humans, non-humans, things, and societal entities. For teachers (and, more generally, humans) to accept, appropriate, act on, and re-enact such a learning infrastructure, it is of great importance to establish spaces for reflection, which a future workshop, for example, provides, and to support and facilitate (alternative) enactments of some of the more hidden affordances of the digital learning platform.
Kinaesthetic learning is expressed when physical actions are used to connect concept development to reality, for example through model building, trial and error practice, or role-play interactions. Learning through a kinaesthetic modality is highly effective and complementary to other learning modalities. Recent advances in gamification for education have increased access to science simulations and learning online. However, the transfer of offline kinaesthetic techniques to online learning remains under-researched and poorly implemented on affordable, scalable platforms. Here we describe an accessible approach for educators on how to incorporate online kinaesthetic aspects into lessons through use of a scalable and affordable framework developed called the ‘Kinaesthetic Learning System’ (KLS). This framework should be of particular use for learning complex molecular life science topics but can be adapted and modified independently by the educator to address different knowledge levels and for expansion to other disciplines.
No abstract available
No abstract available
Bachelor of Physiotherapy in India is a four-and-a-half-year course in which the third and final years include considerable clinical and case-based teaching. The wrist and hand complex has an intricate anatomical makeup and biomechanical function, making it necessary to practice interesting, innovative teaching-learning techniques with undergraduate physiotherapy students. The objective was therefore to check the efficacy of computerized simulation education, video-based learning, and kinaesthetic learning on learning outcomes. Twenty-two third-year physiotherapy undergraduate students were recruited. After an initial Visual, Aural, Read/Write, and Kinesthetic (VARK) analysis, sessions were divided into four parts in which computerized simulation education and video-based learning were combined with kinaesthetic learning. At the end of the four sessions, anonymous written feedback in the form of reflections was obtained, and a timed multiple-choice quiz was taken 14 days later to check retention. Analysis of the reflections showed that 95.45% of participants deemed this method superior for understanding; demonstration of the performance-based outcome measures helped with understanding and retention of the assessment methods, ensured more interaction between students and lecturer, and increased engagement. The median score on the surprise quiz was 7 out of 10, with an interquartile range of 5 to 9 marks. Computer simulation combined with video-based and kinaesthetic learning produced better understanding and retention among third-year physiotherapy students.
This article presents a path learning method through physical human–robot interaction (pHRI) based on a stretch-compression iterative learning control (ILC) scheme and contouring impedance control. The robot learns a task path desired by the human user through a kinaesthetic interface and provides physical assistance to the human user in repetitive interactions. Due to the uncertainty of the human user’s force and motion, the time duration of each iteration may be different, so a novel ILC scheme based on stretch and compression operation is proposed to update the reference trajectory of the robotic manipulator. By attaching the Frenet–Serret frame to each point on the reference path, the control task is decomposed into impedance control in the tangential direction and position control in the normal or binormal direction constraining the human user on the reference path. Experiments on a 7-DOF Sawyer robot are carried out to show the effectiveness and robustness of the proposed method.
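The stretch-compression step described above can be illustrated with a toy sketch (our own simplification, not the paper's controller): each iteration's recording, whatever its duration, is linearly resampled onto the reference trajectory's time base before a simple ILC-style update blends it into the reference.

```python
# Toy sketch of a stretch-compression ILC update (illustrative only;
# the paper's actual scheme and gains are not reproduced here).

def resample(traj, n_out):
    """Linearly interpolate a 1-D trajectory onto n_out samples."""
    n_in = len(traj)
    if n_in == n_out:
        return list(traj)
    out = []
    for k in range(n_out):
        x = k * (n_in - 1) / (n_out - 1)  # position in the source trajectory
        i = int(x)
        frac = x - i
        j = min(i + 1, n_in - 1)
        out.append(traj[i] * (1 - frac) + traj[j] * frac)
    return out

def ilc_update(reference, demo, gain=0.5):
    """One learning step: pull the reference toward the resampled demonstration."""
    demo_rs = resample(demo, len(reference))
    return [r + gain * (d - r) for r, d in zip(reference, demo_rs)]

reference = [0.0, 0.0, 0.0, 0.0, 0.0]
demo = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # a longer iteration, "compressed" to 5 samples
reference = ilc_update(reference, demo)
print([round(v, 3) for v in reference])  # → [0.0, 0.125, 0.25, 0.375, 0.5]
```

In the paper the same idea operates on multi-dimensional Cartesian paths with a Frenet-Serret frame attached to each point; the sketch keeps only the time-normalization and iterative-blending logic.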
Artificial intelligence (AI) tools now permeate English academic writing. However, evidence on how feedback modalities align with student differences and with psychological mechanisms remains limited. Prior work often reduced learning styles to simple matches with delivery modes and treated learning engagement and writing anxiety as peripheral. A unified account that connects multimodal AI feedback, perceptual styles, engagement, anxiety, and writing performance has remained scarce. This study addressed these gaps through a cross‐sectional survey of 358 Mandarin L1 undergraduates from two Malaysian universities that measured text‐based (TF), linguistic (LF), and interactive feedback (IF); four perceptual styles (visual, auditory, kinaesthetic and tactile); learning engagement; writing anxiety; and writing performance. Instrument quality exceeded conventional thresholds; exploratory and confirmatory factor analyses supported construct validity, and a structural equation model with tests for mediation and moderation estimated the proposed paths. TF showed no direct effect on performance, whereas LF and IF exhibited positive effects. All styles predicted performance, with kinaesthetic being the strongest, followed by visual, tactile, and auditory. Engagement increased under all feedback and style inputs and served as a mediator; anxiety strengthened most paths to performance, with no significant interaction for the tactile style. The evidence rejected a simple style‐tool match and supported a layered model in which discourse‐orientated and interactive feedback, aligned with embodied activity, improved writing quality. Teachers and developers should stage support from text‐based cues toward linguistic and interactive guidance, pair kinaesthetic or visual tasks with those channels, and tune intensity to students’ anxiety profiles.
This study explores an online learning world where game development was a centerpiece and social learning was an inherent feature rather than imposed. Grounded in enactivism, this paper examines the affordances of this environment, focusing on sociocultural perspectives. This qualitative case study involved 35 participating educators. Data included educators’ online interactions and written work. The results showed that the learning environment afforded ample opportunities for social learning, as reflected in the extent and ways the educators interacted and collaborated with each other. A total of nine categories of interactions were identified, illustrating the diverse types of social communication. Culture of learning, trauma, and games contributing to the formation of a culture were some of the major themes identified in the educators’ reflections.
Manipulation strategies based on the passive dynamics of soft‐bodied interactions provide robust performances with limited sensory information. They utilize the kinematic structure and passive dynamics of the body to adapt to objects of varying shapes and properties. However, these soft passive interactions make the state of the robotic device influenced by the environment, making control generation and state estimation difficult. This work presents a closed‐loop framework for dynamic interaction‐based grasping that relies on two novelties: 1) a wrist‐driven passive soft anthropomorphic hand that can generate robust grasp strategies using one‐step kinaesthetic teaching and 2) a learning‐based perception system that uses temporal data from sparse tactile sensors to predict and adapt to failures before it happens. With the anthropomorphic soft design and wrist‐driven control, it is shown that controllers can be generated robust to novel objects and location uncertainty. With the learning‐based high‐level perception system and 32 sensing receptors, it is shown that failures can be predicted in advance, further improving the robustness of the entire system by more than doubling the grasping success rate. From over 1000 real‐world grasping trials, both the control and perception framework are also seen to be transferable to novel objects and conditions. An interactive preprint version of the article can be found here: https://doi.org/10.22541/au.167687985.55182112/v1.
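A toy illustration of the early-failure-prediction idea above (a deliberately simple hand-written rule of our own, not the paper's learned perception system): flag an impending grasp failure when the summed activation of the sparse tactile receptors trends downward within a time window.

```python
# Hypothetical sketch: detect a slipping grasp from temporal tactile
# data before the object is fully dropped (threshold rule, illustrative
# only -- the paper uses a learned model over 32 receptors).

def predict_failure(window, drop_ratio=0.8):
    """Flag failure if summed tactile activation at the end of the
    window falls below drop_ratio of its value at the start."""
    start = sum(window[0])
    end = sum(window[-1])
    return end < drop_ratio * start

# Each inner list is one time step of (here, 3) tactile readings.
stable   = [[0.9, 0.8, 0.9], [0.9, 0.85, 0.9], [0.88, 0.8, 0.9]]
slipping = [[0.9, 0.8, 0.9], [0.6, 0.5, 0.6], [0.3, 0.2, 0.3]]
print(predict_failure(stable))    # → False
print(predict_failure(slipping))  # → True
```

The point of acting on the trend rather than on a single reading is the same as in the paper: a controller warned mid-slip can re-grasp before the failure completes.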
Position sense is an aspect of proprioception crucial for motor control and learning. The onset of neurological disease can damage such sensory afference, with consequent motor disorders dramatically reducing the associated recovery process. In regular clinical practice, proprioceptive deficits are assessed by means of clinical scales, which do not provide quantitative measurements; moreover, existing robotic solutions usually do not involve multi-joint movements but are mostly applied to a single proximal or distal joint. The present work provides a testing paradigm for assessing proprioception during coordinated multi-joint distal movements and in the presence of kinaesthetic perturbations: we evaluated healthy subjects' ability to match proprioceptive targets along two of the wrist's three degrees of freedom, flexion/extension and abduction/adduction. By introducing rotations along the pronation/supination axis, which was not involved in the matching task, we tested two experimental conditions that differed in the timing of the external perturbation: in the first, the disturbance was provided after the presentation of the proprioceptive target, while in the second, the rotation of the pronation/supination axis was imposed during the proprioceptive target presentation. We investigated whether (i) the amplitude of the perturbation along pronation/supination would lead to proprioceptive miscalibration, and (ii) the encoding of the proprioceptive target would be influenced by the presentation sequence of the target and the rotational disturbance. Eighteen participants were tested by means of a haptic neuroergonomic wrist device: our findings provide evidence that the order of disturbance presentation does not alter proprioceptive acuity. Yet a further effect was noticed: proprioception is highly anisotropic and dependent on perturbation amplitude.
Unexpectedly, the configuration of the forearm highly influences sensory feedback and significantly alters subjects' performance in matching the proprioceptive targets, defining portions of the wrist workspace where kinaesthetic and proprioceptive acuity are more sensitive. This finding may suggest solutions and applications in multiple fields: from general haptics, where knowing how wrist configuration influences proprioception might suggest new neuroergonomic solutions in device design, to clinical evaluation after neurological damage, where accurately assessing proprioceptive deficits can dramatically complement regular therapy for a better prediction of the recovery path.
Speech is the major mode of human communication, but when it is limited, humans turn to tactile-kinaesthetic communication; the sign language used by people with speech and hearing impairments is one such adaptation. The deaf community uses Indian Sign Language (ISL) throughout India, where about 250 licensed sign language interpreters serve a deaf population of 1.8 to 7 million individuals. ISL interpreters are urgently needed at institutes and places where persons with hearing impairments communicate. In this project, an Indian Sign Language picture database for the English alphabet was established, and several pre-processing techniques were used to prepare it for training. The effectiveness of deep neural networks is frequently influenced by the quantity of data available; as a result, data augmentation, a strategy for adding more and more diverse samples to training datasets, was used to boost the effectiveness of the machine learning models. The models were trained as CNNs using transfer learning, achieving an accuracy of 95% for VGG16 and 92% for the Inception model. Further study of this research, as well as real-time implementation, has the potential to better connect people with hearing loss to society.
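The data-augmentation strategy mentioned above can be sketched minimally (our own illustration; the paper's actual pipeline and transforms are not specified here): generating extra training variants of a sign image by mirroring and shifting it.

```python
# Minimal data-augmentation sketch (illustrative assumption, not the
# paper's code): each image is a list of pixel rows.

def hflip(img):
    """Mirror an image left-right."""
    return [row[::-1] for row in img]

def shift_right(img, pad=0):
    """Shift each row one pixel right, padding the left edge."""
    return [[pad] + row[:-1] for row in img]

def augment(img):
    """Return the original plus two augmented variants."""
    return [img, hflip(img), shift_right(img)]

sample = [[1, 2, 3],
          [4, 5, 6]]
for variant in augment(sample):
    print(variant)
```

Real pipelines typically add rotations, zooms, and brightness jitter as well; the principle is the same: more and more varied samples for the same label.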
ABSTRACT The development of accurate spatial conceptualisations of molecular biology structures, processes and interactions represents a critical learning outcome for a wide array of life sciences students. In this study, we investigated student learning through a bespoke virtual reality (VR) simulation focused on interactive assembly of an expression vector and visualisation of the recombinant protein expression process, encompassing constructivist pedagogical principles. A mixed methods approach was pursued involving surveys, assessments, and online simulation data capture to facilitate student partnered inputs into virtual simulation design and implementation in the teaching of molecular and cellular biology. The scope of student learning modalities, effective teaching practices and perspectives on challenging curriculum were also captured to position VR implementation within a wider learning framework. Students expressed a consensus view towards the application of this innovative teaching modality and appear well equipped to adapt to this new teaching and learning approach. The visual and kinaesthetic elements of the simulations were found to offer a unique entry point to challenging and abstract molecular concepts that has the potential to transform life science education for students, where a partnership approach can best deliver a roadmap for effective integration into the curriculum.
Background & Objectives: Traditional physiological education predominantly relies on lecture-based teaching, which can be insufficient for effectively engaging students and preparing them for the ever-evolving and collaborative healthcare landscape. Gamification offers a promising alternative to bridge this educational gap; it is emerging as an active learning innovation in medical education to enhance student engagement and promote lifelong learning in a unique and collaborative environment. CARBGAME (CArd & Board GAmes in Medical Education) was introduced and evaluated for its effectiveness in enhancing the active learning, application, sharing, and assessment of knowledge in teams in cardiovascular physiology via gamification. Methods: This mixed-method study involved 150 Phase I MBBS students. Prior to the game, students completed a pre-test with 20 multiple-choice questions and were divided into 25 small groups to compete in the board game designed for cardiovascular physiology. The students took turns throwing the dice and answering the questions on the game board and cards to continue moving forward. The first team to reach 100 and solve the case-based question was deemed the winner. Following the board game, a post-test was conducted to evaluate the improvement in knowledge. The students then evaluated the effectiveness of CARBGAME - Cardiovascular Physiology using a 33-item questionnaire on a 5-point Likert scale. Feedback regarding CARBGAME - Cardiovascular Physiology was obtained on a 10-point rating scale, and for qualitative analysis, students' and faculties' perceptions were recorded in in-depth small-group interviews. All data were collected anonymously. Means and standard deviations (SD) of the continuous variables were analysed descriptively using univariate statistics, and the t-test and analysis of variance (ANOVA) were used to examine differences.
The Kruskal–Wallis test and the Mann–Whitney U test were used to find variations in the non-normal distributions. The statistical package used was SPSS, version 17 (SPSS Inc., USA) for Microsoft Windows. A P value of less than 0.05 was deemed significant. Results: Most of the students were multimodal (40%) and kinaesthetic (33%) learners. A highly significant improvement in knowledge was evident, with the pre-test score of 6.5 (1.1) (mean and standard deviation) rising to 15.3 (1.7) in the post-test, with a p-value less than 0.0001. CARBGAME received exceptionally positive responses from students in terms of creating fun (99%), relevant content (94.5%), consistency in rules (96.4%), low complexity (86.5%), active discovery (98.39%), motivation (96.4%), meaningfulness (99%), experiential learning (97.3%), social learning (96.4%), fair time period (85.6%), sustainability (90%), effective feedback (95.5%), low-stakes method (98.2%), reinforcing learning (96.4%), assessing knowledge (99%), competition (92.8%), collaborative learning (98.2%), and a sense of accomplishment (96.39%). All the students and faculties perceived and rated CARBGAME highly positively. Conclusion: The CARBGAME innovation for cardiovascular physiology has not only met pedagogical ideals but also sparked a transformation in medical education. By emphasizing individualization, feedback, active learning, motivation, social interaction, scaffolding, transfer of knowledge, and effective assessment, this dynamic approach resonates with both faculties and students. The enthusiastic responses from both signify an imperative shift in the conventional medical curriculum and highlight the exigent need to introduce innovative elements, such as educational games, to enhance student engagement and cultivate deeper and more lasting learning experiences within physiology education.
By demonstrating that gamified learning can be a driving force for change in physiology and medical education, this innovation paves the way for a more adaptable, engaging, and effective approach to optimising the learning experience of learners. Sources of Funding: This study did not receive funding in any form. Conflict of Interest: None to declare. This is the full abstract presented at the American Physiology Summit 2024 meeting and is only available in HTML format; no additional versions or content are available. Physiology was not involved in the peer review process.
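The pre-test/post-test comparison reported above can be illustrated with a minimal sketch: computing means, standard deviations, and a paired t statistic. The scores below are made up for illustration; they are not the study's raw data.

```python
# Illustrative pre/post analysis (synthetic scores, not the study's data).
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def paired_t(pre, post):
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (sd(diffs) / math.sqrt(len(diffs)))

pre  = [6, 7, 5, 8, 6, 7]
post = [15, 16, 14, 17, 15, 17]
print(round(mean(pre), 2), round(sd(pre), 2))   # pre-test mean and SD
print(round(paired_t(pre, post), 2))            # paired t statistic
```

With real data one would look the t statistic up against the t distribution with n - 1 degrees of freedom to obtain the p-value the study reports.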
Due to the COVID-19 pandemic, distance education has started playing a crucial role in higher education, and the need to develop educational tools that help students learn better at home cannot be ignored. Teaching programming languages online is a complicated task, and when the course subject is programmable logic design through Hardware Description Languages (HDLs), online teaching becomes a complex challenge. This paper presents a concept for a training environment that uses visual programming techniques to help students create VHDL models of various digital devices. Students construct VHDL models by combining simple visual objects while the environment provides guidance in real time and prevents incorrect matches of VHDL operators and signals. The strategy of building a model of a complex digital circuit by moving and connecting visual objects will help students with a visual-kinaesthetic learning style internalize the concept and structure of the VHDL model. The training environment will benefit mostly students learning in distance mode, but it is also useful for face-to-face students who find it difficult to assimilate the specifics of VHDL modelling.
There is a dearth of research concerning the learning effects of web-based mapping tools on students with different learning characteristics. This study investigates the extent to which different learning styles influence the spatial thinking of students within a web-based GIS mapping environment. Thirty-six sophomores used the tools over one semester in a course guided by a blended learning approach. A learning style inventory, a self-rating questionnaire, and a survey were administered to examine their learning styles, the development of their spatial thinking skills, and the factors influencing the enhancement of those skills. Results show that all learners improved their spatial thinking skills after interacting with the GIS mapping tools. However, the visual and auditory learners improved significantly more than the kinaesthetic learners (p = 0.024). The student survey suggests that such differences may be attributable to the design of the web interface, which matches the learning styles of visual and auditory learners better than those of kinaesthetic learners. Our findings contribute to the current debate on students' learning styles and help instructional designers and educators optimize learning in spatial thinking through personalized learning design.
Analysis of learning styles is one of the main things teachers need to do before teaching. The study of learning styles can give an overview of how a teacher designs a learning concept according to students' learning styles. The learning process reveals students' mental models under their respective learning styles, and these mental models become the primary material for how teachers develop students' cognition. This research aimed to describe students' mental models in terms of their visual learning styles. The method used is descriptive, with qualitative and quantitative approaches and transformative learning concepts. The results show that chemistry education students in the chemistry learning innovation course have only visual (71.43%) and auditory (28.55%) learning styles, and no kinaesthetic learning styles. This research focuses on visual learning styles to examine students' mental models. The conclusion is that students still need cognitive strengthening, especially the ability to interpret phenomena at the submicroscopic level. With the visual learning style, students are expected to transform their cognition so that they have mental structures and models that are relevant in theory and terminology.
Improving the accuracy of learning-style detection models is a primary concern in automatic learning-style detection, and can be achieved through either attribute/feature selection or the classification algorithm. However, the role of facial expression in improving accuracy has not been fully explored. Meanwhile, deep learning has become a new approach for solving complex problems using deep neural networks (DNNs); these deep architectures decompose problems into multiple processing layers, enabling multiple mappings of complex problem functions. In this paper, we investigate and compare the performance of a Convolutional Neural Network (CNN) and a Multi-Class Neural Network (MCNN) for classifying learners into VARK learning-style dimensions (Visual, Aural, Read/Write, Kinaesthetic, plus a Neutral class) based on facial images. The performance of the two networks was evaluated and compared using mean squared error (MSE) for training and the accuracy metric for testing. The results show that the MCNN offers better and more robust classification of VARK learning styles from facial images. Finally, this paper demonstrates the potential of a new method for automatic classification of VARK learning styles based on facial expressions (FEs). Based on the experimental results, this approach can benefit both researchers and users of adaptive e-learning systems by uncovering the potential of using FEs to identify learning styles for recommendation and personalization of learning environments.
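The two evaluation metrics named above, MSE for training and accuracy for testing, can be sketched as follows. This is an illustrative reconstruction; the class ordering and example probabilities are our assumptions, not the paper's.

```python
# Hedged sketch of MSE (against one-hot targets) and accuracy for a
# five-way VARK+Neutral classifier. Class indices are assumed:
# 0=Visual, 1=Aural, 2=Read/Write, 3=Kinaesthetic, 4=Neutral.

CLASSES = ["Visual", "Aural", "Read/Write", "Kinaesthetic", "Neutral"]

def one_hot(label, n=len(CLASSES)):
    return [1.0 if i == label else 0.0 for i in range(n)]

def mse(probs, label):
    """Squared error between a probability vector and the one-hot target."""
    target = one_hot(label)
    return sum((p - t) ** 2 for p, t in zip(probs, target)) / len(target)

def accuracy(prob_batches, labels):
    """Fraction of samples whose argmax matches the true class."""
    hits = sum(
        1 for probs, y in zip(prob_batches, labels)
        if max(range(len(probs)), key=probs.__getitem__) == y
    )
    return hits / len(labels)

preds = [[0.7, 0.1, 0.1, 0.05, 0.05],   # predicted Visual
         [0.1, 0.2, 0.1, 0.5, 0.1]]     # predicted Kinaesthetic
labels = [0, 3]
print(accuracy(preds, labels))           # → 1.0
print(round(mse(preds[0], 0), 4))        # → 0.023
```

In practice the networks would emit the probability vectors from facial images; only the scoring logic is shown here.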
Exploring the kinaesthetic sensitivity of skilled performers for implementing movement instructions.
No abstract available
The use of e-Learning in education today is still conventional: it is limited to the delivery of learning material, is not yet fully managed as a learning management system (LMS), and assumes that the characteristics of all students are homogeneous. In fact, every student has distinct characteristics, especially different learning styles. Therefore, students who use e-Learning do not necessarily get the right material, which results in less-than-optimal learning. This research aimed to investigate the tendencies of students' learning styles, to be used as the basis for designing an adaptive e-Learning system oriented to Visual, Auditory, and Kinaesthetic (VAK) styles, as one solution to increase learning quality. A survey method was used with vocational students in Buleleng Regency. The sample was determined using stratified random sampling to select 4 vocational schools (SMK) with a Computer Technique and Network department; each school contributed 30 students and the 2 teachers responsible for the related subject. The study variable was students' responses to a questionnaire identifying the tendency of their VAK-oriented learning style. Data were obtained from the questionnaire, interviews, and observation of students' learning in the classroom, and were analysed descriptively. The results showed that vocational students in Buleleng Regency tend to learn with the Kinaesthetic style.
The aim of this paper is the development of an application learning system implementing the Visual, Auditory, Kinaesthetic, and Tactile learning model to help children with mild intellectual disability learn letters and numbers. Development proceeded through four stages: concept, describing how the learning model is combined with mobile technology; design, describing the system architecture and the requirements specification for system development; assembly and software development; and finally system testing using pre- and post-tests involving 33 children with special needs at an educational institution. The learning system was built as a mobile application so that children can use it anywhere and anytime. The experimental results show that using the application increased understanding in language learning by 26.45% and in computational learning by 17.60%; with this application, children can learn to understand language and calculation better.
This paper presents an alternative approach for the diagnosis of learning difficulties in children. A game-based evaluation study, using Kinaesthetic Learning Difficulties Diagnosis (Kin-LDD), was performed during the actual diagnosis procedure for the identification of learning difficulties. Kin-LDD is a serious game that provides a gesture-based interface and incorporates spatial and time orientation activities. These activities assess children’s cognitive attributes while they are using their motor skills to interact with the game. The aim of this work was to introduce the fun parameter to the diagnostic process, provide a useful tool for the special educators and investigate potential correlations between in-game metrics and the diagnosis outcome. An experiment was conducted in which 30 children played the game during their official assessment for the diagnosis of learning difficulties at the Center for Differential Diagnosis, Diagnosis and Support. Performance metrics were collected automatically while the children were playing the game. These metrics, along with questionnaires appropriate for children and post-session interviews were later analyzed and the findings are presented in the paper. According to the results: (a) children evaluated the game as a fun experience, (b) special educators claimed it was helpful to the diagnostic procedure, and (c) there were statistically significant correlations between in-game metrics and the category of learning difficulty the child was characterized with.
No abstract available
No abstract available
The enhancement of generalization in robots by large vision-language models (LVLMs) is increasingly evident. Therefore, the embodied cognitive abilities of LVLMs based on egocentric videos are of great interest. However, current datasets for embodied video question answering lack comprehensive and systematic evaluation frameworks. Critical embodied cognitive issues, such as robotic self-cognition, dynamic scene perception, and hallucination, are rarely addressed. To tackle these challenges, we propose ECBench, a high-quality benchmark designed to systematically evaluate the embodied cognitive abilities of LVLMs. ECBench features a diverse range of scene video sources, open and varied question formats, and 30 dimensions of embodied cognition. To ensure quality, balance, and high visual dependence, ECBench uses class-independent meticulous human annotation and multi-round question screening strategies. Additionally, we introduce ECEval, a comprehensive evaluation system that ensures the fairness and rationality of the indicators. Utilizing ECBench, we conduct extensive evaluations of proprietary, open-source, and task-specific LVLMs. ECBench is pivotal in advancing the embodied cognitive capabilities of LVLMs, laying a solid foundation for developing reliable core models for embodied agents. All data and code is available at https://github.com/RhDang/ECBench.
Embodied cognition argues that intelligence arises from sensorimotor interaction rather than passive observation. It raises an intriguing question: do modern vision-language models (VLMs), trained largely in a disembodied manner, exhibit signs of embodied cognition? We introduce ENACT, a benchmark that casts evaluation of embodied cognition as world modeling from egocentric interaction in a visual question answering (VQA) format. Framed as a partially observable Markov decision process (POMDP) whose actions are scene graph changes, ENACT comprises two complementary sequence reordering tasks: forward world modeling (reorder shuffled observations given actions) and inverse world modeling (reorder shuffled actions given observations). While conceptually simple, solving these tasks implicitly demands capabilities central to embodied cognition-affordance recognition, action-effect reasoning, embodied awareness, and interactive, long-horizon memory from partially observable egocentric input, while avoiding low-level image synthesis that could confound the evaluation. We provide a scalable pipeline that synthesizes QA pairs from robotics simulation (BEHAVIOR) and evaluates models on 8,972 QA pairs spanning long-horizon home-scale activities. Experiments reveal a performance gap between frontier VLMs and humans that widens with interaction horizon. Models consistently perform better on the inverse task than the forward one and exhibit anthropocentric biases, including a preference for right-handed actions and degradation when camera intrinsics or viewpoints deviate from human vision. Website at https://enact-embodied-cognition.github.io/.
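The reordering tasks described above imply a simple scoring scheme; a hedged sketch (our own illustration, not the ENACT codebase) is exact-match plus a pairwise order-agreement score in the spirit of Kendall's tau.

```python
# Illustrative scoring for a sequence-reordering answer: exact match
# and the fraction of item pairs placed in the gold relative order.
# Action names below are hypothetical examples.

def pairwise_agreement(pred, gold):
    """Fraction of item pairs whose relative order matches gold."""
    pos_pred = {item: i for i, item in enumerate(pred)}
    pos_gold = {item: i for i, item in enumerate(gold)}
    items = list(gold)
    agree, total = 0, 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            total += 1
            if (pos_pred[a] < pos_pred[b]) == (pos_gold[a] < pos_gold[b]):
                agree += 1
    return agree / total

gold = ["open_fridge", "take_milk", "close_fridge", "pour_milk"]
pred = ["open_fridge", "take_milk", "pour_milk", "close_fridge"]
print(pred == gold)                              # → False (not an exact match)
print(round(pairwise_agreement(pred, gold), 3))  # → 0.833 (one pair swapped)
```

A partial-credit metric like this distinguishes a nearly correct world model from a random one, which a pure exact-match score cannot.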
Despite advances in embodied AI, agent reasoning systems still struggle to capture the fundamental conceptual structures that humans naturally use to understand and interact with their environment. To address this, we propose a novel framework that bridges embodied cognition theory and agent systems by leveraging a formal characterization of image schemas, which are defined as recurring patterns of sensorimotor experience that structure human cognition. By customizing LLMs to translate natural language descriptions into formal representations based on these sensorimotor patterns, we will be able to create a neurosymbolic system that grounds the agent's understanding in fundamental conceptual structures. We argue that such an approach enhances both efficiency and interpretability while enabling more intuitive human-agent interactions through shared embodied understanding.
Objective: Factors influencing users’ adoption of the home-based health monitoring system (HHMS) were examined by integrating embodied cognition with the unified theory of acceptance and use of technology (UTAUT) model. Methods: Data from 459 survey respondents were analyzed using partial least squares structural equation modeling (PLS-SEM). Results: The model explained 59.7% of the variance in behavioral intention to use the HHMS (typical range: 40%–60%). Perceived contextual adaptation, perceived sensorimotor feedback, and perceived body awareness significantly influenced behavioral intention. Perceived body awareness (i.e., an individual’s ability to perceive and interpret bodily signals) was identified as a crucial factor affecting performance expectancy, effort expectancy, facilitating conditions, and social influence. Conclusions: The integration of embodied cognition with the UTAUT model contributes to theoretical advancements and demonstrates the importance of body awareness in users’ adoption of the HHMS, providing practical guidance for the effective design of HHMS.
No abstract available
Motion-based exergames have emerged as a promising solution for promoting physical activity and health among older adults. However, most existing products are developed with younger users in mind, resulting in usability barriers for elderly players. This study examines age-related physiological, psychological, and cognitive characteristics through the lens of embodied cognition and explores their implications for exergame interaction design. We analyze three mainstream exergames (Ring Fit Adventure, Just Dance, and Switch Sports) using a three-layered model comprising the perceptual-sensory, embodied interaction, and cognitive-experiential levels to identify age-specific design obstacles. Based on these insights, we propose targeted design strategies that incorporate computer science techniques such as multimodal feedback integration, motion recognition optimization, and adaptive system personalization. In addition, this study draws on key human-computer interaction (HCI) concepts including motion sensing, real-time feedback, and user-centered interface design to guide age-friendly interaction strategies in exergames. Our findings aim to bridge the gap between interactive system engineering and gerontechnology, offering practical recommendations for inclusive exergame development.
This study investigates the use of immersive technology, specifically virtual reality (VR), to enhance audience interaction with classic literature. By transforming Henrik Ibsen's A Doll's House into an interactive VR experience using FrameVR, the study compares the effectiveness of VR and traditional videos in fostering emotional engagement, thematic understanding, and cultural reflection. Grounded in embodied cognition theory, the research posits that VR's immersive and interactive affordances enable deeper audience engagement with literary content by aligning cognitive processes with sensory and motor experiences. A quasi-experimental design was employed, with 84 participants divided into an experimental group (n=42) who engaged with the VR platform and a control group (n=42) who watched a traditional video. The analysis points out that the experimental group demonstrated higher emotional engagement (50% positive sentiments) and deeper thematic exploration (e.g., 71.4% focus on gender inequality) compared to the control group. These findings highlight the potential of immersive technologies to redefine audience interaction with literary texts, fostering empathy, cultural awareness, and critical reflection. Moreover, the transformation of classic works of literature can demonstrate how immersive technologies reflect a practical model for the United Nations Sustainable Development Goals (SDGs), particularly in promoting inclusive and equitable education (SDG 4) and leveraging technology for diversity and inclusion (SDG 10). By bridging literature, technology, and cognitive science, this research contributes to the growing discourse on the role of immersive technologies in enhancing audience engagement and advancing global sustainability goals.
As a vital intangible cultural heritage in the Lingnan region of China, Cantonese dragon boat culture carries profound historical memory, regional identity, and local sentiment. However, accelerated modernization has exposed this cultural legacy to risks of memory fragmentation, transmission erosion, and value distortion. This study aims to articulate the narrative of the Cantonese dragon boat tradition, reconstruct cultural memory, promote its digital and intelligent transformation, and facilitate the creative adaptation of its modern value. This study employs embodied cognition theory, cultural memory theory, and immersion theory. Through virtual reality and artificial intelligence technologies, it explores design strategies for immersive digital experiences and establishes a multi-modal interactive experience system. The research findings demonstrate that multi-modal interaction not only enhances sensory engagement and emotional resonance but also strengthens cultural recognition and identity. This study verified the effectiveness of embodied interaction-based cultural experience design and provided innovative strategies for the promotion and preservation of Cantonese dragon boat culture.
In order to solve the problems of the lack of immersion and interactivity in the current digital communication of intangible cultural heritage (ICH), and the difficulty of stimulating users' cultural cognition and emotional resonance, this paper introduces the theory of embodied cognition as a design guide and proposes a user-centered interaction framework with practical application in digital heritage dissemination. By constructing a digital interaction model of the three elements of "perception-operation-situation" and drawing on typical ICH elements in Shaanxi, a WeChat Mini Program is developed for young users, integrating visual stimulation, interactive participation, and cultural context to enhance engagement. The proposed interaction model is validated through a questionnaire survey and user testing. The results show that embodied interaction design can significantly enhance users' interest, participation, and cultural understanding of ICH content, and provide a new design path and practical basis for the digital communication of intangible cultural heritage.
In the era where digital intelligence technology is deeply intervening in cultural heritage, red culture education is facing a critical opportunity to transform from "disembodied" symbolic inculcation to "embodied" experiential internalization. Traditional red culture study is often limited by a text-centric cognitive paradigm, leading to a disconnection between historical scenes and the physical experience of the learner, making it difficult to achieve deep value resonance. Based on the theory of Embodied Cognition, the body is not only the physiological basis of perception but also the cognitive subject of meaning generation. Digital intelligence technology, by constructing multimodal immersive fields, provides the technological possibility of "being personally on the scene" for red culture study, reconstructing the interactive relationship between the subject and the historical object. This paper aims to analyze the generative logic of immersive red culture study empowered by digital intelligence, explaining how it promotes the transition of the red gene from "physical presence" to "spiritual presence" through spatio-temporal resetting, sensory extension, and situational interaction, ultimately achieving a value leap from perceptual experience to rational identification and then to belief internalization. On this basis, combined with the exploration practices of red cultural resource digitization in Guangzhou and other places, this paper proposes paths for constructing a virtual-real symbiotic embodied narrative system, a multi-dimensional interactive physical participation mechanism, and a technological regulation path of value rationality, providing theoretical support and practical reference for the revitalization of red cultural resources and innovation in ideological and political education in the new era.
This study constructs an intelligent case teaching paradigm based on embodied cognition theory from the perspective of the deep integration of cognitive science and legal education. By deconstructing the embodied characteristics of legal fact cognition, the study proposes a three-in-one teaching theory model of "multimodal interaction-semantic embodiment-contextual reconstruction", which breaks through the cognitive dilemma of the subject-object dichotomy in traditional case teaching. The research results reveal the theoretical value of artificial intelligence technology in promoting the transformation of legal cognition from "conceptual representation" to "contextual embodiment", and provide a new epistemological framework for the cultivation of rule-of-law talents in the digital era.
Driven by the digital revolution, virtual reality (VR) technology, as a cutting-edge innovation, is gradually reshaping the pedagogical landscape of ideological and political theory courses. Grounded in the theoretical framework of embodied cognition, this paper explores the construction of an intelligent educational ecosystem for VR-empowered ideological and political education, aiming to transcend the limitations of traditional “disembodied learning.” By analyzing the triadic interaction mechanism among “body–environment–cognition,” the study proposes a VR-based teaching model characterized by immersive experience and emotional resonance, termed the “embodied cognition” model for ideological and political courses. The operational logic of this model is elaborated across four dimensions: situational embodiment, experiential embodiment, interactive embodiment, and evaluative embodiment. Furthermore, the research critically examines potential risks associated with technological application, including subjectivity erosion, cognitive bias due to emotional overstimulation, and ethical dilemmas, while proposing corresponding countermeasures. This work aims to provide theoretical and practical insights for the innovation of ideological and political education in the digital era.
Children face challenges in socialization and development, and computer-assisted therapies offer new cognitive intervention avenues; while embodied cognition and AR technology are valued in child cognition research, their integration remains underexplored. This study examines AR embodied learning versus traditional 2D methods in enhancing children’s social cognition. Thirty-five children (6–12 years) engaged in an AR-embodied game and a 2D social game. Functional Near-Infrared Spectroscopy (fNIRS) measured brain activation, while questionnaires evaluated subjective experiences. Results indicated AR games induced significantly stronger cortical activation than 2D games, particularly in males. AR also scored higher in satisfaction, appeal, interaction, and enjoyment. Gender analysis revealed broader neural activation in boys using AR, suggesting its greater suitability for males. The findings underscore AR’s potential to enrich social cognition through immersive learning. Designers and educators should incorporate AR embodiment into educational tools and conduct longitudinal studies to assess sustained developmental impacts.
This study revisits the Mental Rotation Experiment (MRE) to explore how predictive processing and proprioception influence visuo-spatial performance. By manipulating the pose and orientation of test stimuli—and their spatial relationship with the observer—we observe significant response time improvements (over 200 ms) when stimuli align with embodied expectations derived from prior proprioceptive experiences. Based on these findings, we propose a model of visual cognition that integrates three concurrent, interacting processes: unconscious predictive processing, rapid pixel matching, and the conscious process of mental rotation using the mind’s eye. These mechanisms work together to enhance spatial task accuracy. Our insights have practical implications for multiview navigation interfaces, such as CAD tools and surveillance systems, where optimised spatial arrangements can reduce cognitive load. This study highlights how spatial congruence and embodied cognition can inform usability and accessibility improvements, with potential applications in creative systems, spatial navigation tasks, and environments with altered gravity conditions.
Background/Objectives: In the treatment of autism spectrum disorders, families express the need for dedicated clinical spaces to manage emotional overload and to develop effective relational skills. Parent training addresses this need by supporting the parent–child relationship and fostering the child’s development. This study proposes a clinical protocol designed for psychotherapists and behavior analysts, based on the Autism Open Clinical Model (A.-O.C.M.), which integrates the rigor of Applied Behavior Analysis (ABA) with a phenomenological and embodied perspective. The model acknowledges technology—particularly artificial intelligence—as an opportunity to structure adaptive and personalized intervention tools. Methods: A multi-level prompt design system was developed, grounded in the principles of the A.-O.C.M. and integrated with generative AI. The tool employs clinical questions, semantic constraints, and levels of analysis to support the clinician’s reasoning and phenomenologically informed observation of behavior. Results: Recurrent relational patterns emerged in therapist–caregiver dynamics, allowing the identification of structural elements of the intersubjective field that are useful for personalizing interventions. In particular, prompt analysis highlighted how the quality of bodily and emotional attunement influences readiness for change, suggesting that intervention effectiveness increases when the clinician can adapt their style according to emerging phenomenological resonances. Conclusions: The design of clinical prompts rooted in embodied cognition and supported by AI represents a new frontier for psychotherapy that is more attuned to subjectivity. The A.-O.C.M. stands as a theoretical–clinical framework that integrates phenomenology and intelligent systems.
The conception of autonomous, intelligent, collaborative robots has been the subject of science fiction rather than science in the second half of the previous century, with practical applications limited to industrial machines without any level of autonomous, intelligent, and collaborative capacity. The new century is facing the challenge of pressing industrial and social revolutions (4, 5, 6, …) with the prospect of infiltrating robots in every sector of human society; however, this dissemination will be possible if and only if acceptable degrees of autonomy, intelligence, and collaborative capacity can be achieved. Scientific and technological innovations are needed within a highly multidisciplinary framework, with a critical integration strategy and functional characterization that must ask a fundamental question: should the design of autonomous, intelligent, collaborative robots aim at a unified single template to be mass-produced, including a standard setup procedure for the functional adaptation of any single prototype, or should it aim at “baby” robots with a minimal set of sensory-motor-cognitive capabilities as the starting point of a training and educational process in close connection with human companions (masters, partners, final users)? The former alternative is supported by EAI, i.e., the Embodied variant of the Artificial Intelligence family of computational tools based on large foundation models. The latter alternative is bio-inspired; namely, it attempts to replicate the computational structure of Embodied Cognitive Science. Both formulations imply embodiment as a core issue. Still, we think this concept has a markedly different meaning and practical implications in the two cases, although we are still far away from the practical implementations of either roadmap.
In this opinion paper, we explain why we think the bio-inspired approach is better than the EAI approach in providing a feasible roadmap for developing autonomous, intelligent, collaborative robots. In particular, we focus on the importance of collaborative human-robot interactions conceived in a general sense, ranging from haptic interactions in joint physical efforts (e.g., loading/unloading) to cognitive interactions for joint strategic planning of complex tasks. We envision this type of collaboration only made possible by a deep human-robot mutual understanding based on a structural equivalence of their embodied cognitive architecture, based on an active, first-person acquisition of experience rather than a passive download of third-person knowledge.
Applications
ABSTRACT This study investigates the application of Virtual Reality (VR) in the educational field, particularly its integration with generative AI (GAI) technologies such as ChatGPT to enhance the learning experience. The research indicates that while VR provides an immersive learning environment fostering student interaction and interest, the lack of a structured learning framework and personalized feedback may limit its educational effectiveness and potentially affect the transfer of VR-learned knowledge to physical hands-on tasks. Hence, it calls for the provision of more targeted and personalized feedback in VR learning environments. Through a randomized controlled trial (RCT), this study collected data from 77 university students, integrating experiential learning in VR for acquiring AIoT knowledge and practical skills, and compared the effects of traditional feedback versus GPT feedback on promoting reflective thinking, learning motivation, cognitive levels, and AIoT hands-on abilities among the students. The results show that the group receiving GPT feedback significantly outperformed the control group across these learning indicators, demonstrating the effectiveness of GAI technologies in providing personalized learning support, facilitating deep learning, and enhancing educational outcomes. This study offers new insights into the integration of GAI technology in VR learning environments, paving new pathways for the development and application of future educational technologies.
No abstract available
This letter presents a physical human–robot interaction scenario in which a robot guides and performs the role of a teacher within a defined dance training framework. A combined cognitive and physical feedback of performance is proposed for assisting the skill learning process. Direct contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner's performance in the task. In measuring performance, a scoring system has been designed using the concept of progressive teaching (PT). The system adjusts the difficulty based on the user's number of practices and performance history. Using the proposed method and a baseline constant controller, comparative experiments have shown that the PT presents better performance in the initial stage of skill learning. An analysis of the subjects’ perception of comfort, peace of mind, and robot performance has shown significant difference at the p < 0.01 level, favoring the PT algorithm.
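The adaptive-impedance and progressive-teaching ideas above can be illustrated with a minimal sketch: stiffness is mapped from a normalized performance score (firm guidance for low scores, more autonomy for high ones), and difficulty advances only once recent scores cross a mastery threshold. All names, constants, and the mastery rule are illustrative assumptions, not the letter's actual controller or scoring system.

```python
# Illustrative sketch only: constants, names, and the mastery rule are
# assumptions, not the published controller or scoring system.

def adaptive_stiffness(score, k_min=50.0, k_max=400.0):
    """Map a normalized performance score in [0, 1] to an impedance
    stiffness gain: low scores -> firm guidance, high scores -> autonomy."""
    score = min(max(score, 0.0), 1.0)
    return k_max - score * (k_max - k_min)

class ProgressiveTeacher:
    """Progressive teaching: raise task difficulty only after the mean of
    the last `window` practice scores crosses a mastery threshold."""

    def __init__(self, threshold=0.7, window=5):
        self.threshold = threshold
        self.window = window
        self.history = []
        self.level = 1

    def update(self, score):
        self.history.append(score)
        recent = self.history[-self.window:]
        if len(recent) == self.window and sum(recent) / self.window >= self.threshold:
            self.level += 1       # learner mastered the current level
            self.history.clear()  # restart the window at the new difficulty
        return self.level
```

In a control loop, `adaptive_stiffness` would feed the impedance controller's gain each cycle, while `ProgressiveTeacher.update` would run once per practice trial.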
Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
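A toy version of the ingredients named above (a GP fitted to observed experience, and Bayesian-optimization-style action selection under the GP's uncertainty) can be sketched in a few lines of NumPy. The kernel, hyperparameters, and UCB acquisition rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch: GP regression over (action, value) data plus a UCB
# acquisition step. Kernel choice and hyperparameters are assumptions.
import numpy as np

def rbf(A, B, ell=0.2, sf=1.0):
    """Squared-exponential kernel between point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at query points Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - (v ** 2).sum(axis=0)
    return mu, var

def select_action(X, y, candidates, beta=2.0):
    """Bayesian-optimization step: pick the candidate action maximizing an
    upper confidence bound on the GP action-value model."""
    mu, var = gp_posterior(X, y, candidates)
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
    return candidates[int(np.argmax(ucb))]
```

With `beta > 0` the robot trades off exploiting high predicted value against exploring actions where the GP is uncertain; `beta = 0` is pure exploitation of the learned model.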
Background In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning. Objective The aim of this study was to test eye-tracking measures’ sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation. Methods Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated. Results Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy. Conclusion The sensitivity and reliability of eye-tracking measures was acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations, and should be explored further for the pHRI domain where motor-control and internal-model formation will likely be strong contributors to workload. Application Future collaborative robots can adapt to human cognitive state and skill-level measured using eye-tracking measures of workload and visual attention.
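Stationary Gaze Entropy is, at its core, the Shannon entropy of the distribution of fixations over areas of interest (AOIs): zero when gaze stays on one AOI, higher as gaze disperses. A minimal sketch, assuming fixations have already been assigned to labeled AOIs (the labels below are hypothetical):

```python
# Minimal sketch: Stationary Gaze Entropy as Shannon entropy (bits) of the
# fixation distribution over areas of interest. AOI labels are assumed given.
import math
from collections import Counter

def stationary_gaze_entropy(fixations):
    """Entropy of AOI fixation proportions: 0 bits when all fixations land
    on one AOI; log2(k) bits when spread evenly over k AOIs."""
    counts = Counter(fixations)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A trial where gaze alternates evenly between, say, a "robot" AOI and an "object" AOI yields 1 bit, whereas a pure robot-monitoring strategy yields 0.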
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
Communicative intent modulates production and comprehension of actions and gestures: A Kinect study.
Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues. First, whether and how actions and gestures are systematically modulated when performed in a communicative context. Second, whether observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates space-time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee-directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
Susan Goldin-Meadow is the 2021 Recipient of the Rumelhart Prize. Goldin-Meadow's body of research addresses the roles of gesture in language creation, communication, learning, and cognition. In one major strand of her research, Goldin-Meadow has studied gestures in children who are not exposed to any structured language input, specifically, deaf children of hearing parents who do not expose their children to sign language. These children create a highly structured, language-like system with their hands-a homesign. In another major strand, Goldin-Meadow has focused on the gestures that people produce along with speech. She has examined how gestures contribute to producing and comprehending language at the moment of speaking or signing, how gestures contribute to learning language and to learning other concepts and skills, and how gestures may actually constitute and change people's thinking. This topic collection is made up of papers that represent and extend these strands of Goldin-Meadow's work. This introductory article provides a brief biography of Goldin-Meadow, and it highlights ways in which the contributions to the topic collection exemplify several notable characteristics of Goldin-Meadow's body of work, including (1) a focus on multiple timescales of behavior and behavior change; (2) use of diverse methods, approaches, and populations; and (3) considerations of equity and inclusion, both in research and in educational and clinical practice.
Virtual reality (VR) is an attractive technology for cognitive assessment, as it provides a more embodied experience compared with typical test situations, such as those using paper and pencil. In addition, VR can immerse individuals in complex situations similar to real-life ones, thereby improving the ecological validity (i.e., face validity) of the assessment. VR also offers improved scoring of tests as it facilitates the tracking of kinematic information and the temporal tracking of activities. This study assesses the correlation between scores on executive function assessments using standard neuropsychological tasks in paper-and-pencil format, on a tablet, and in three immersive VR environments, each designed to involve specific aspects of executive function. This study also aims to assess the correlation between these performance scores and a set of kinematic measures (speed, duration, and distance traveled by the hand) collected in VR. The outcomes, including performance scores and kinematic measures, correlate both with traditional assessment methods (such as paper and pencil, and computerized 2D tests) and with each other, suggesting their potential usefulness in clinical and research contexts. The discussion focuses on the advantages of embodied, situated, and spatialized tests for cognitive assessment and the benefits of kinematic tracking in VR tests for the quality of this assessment.
Most manual communicative gestures that humans produce cannot be looked up in a dictionary, as these manual gestures inherit their meaning in large part from the communicative context and are not conventionalized. However, it is understudied to what extent the communicative signal as such — bodily postures in movement, or kinematics — can inform about gesture semantics. Can we construct, in principle, a distribution-based semantics of gesture kinematics, similar to how word vectorization methods in NLP (Natural Language Processing) are now widely used to study semantic properties in text and speech? For such a project to get off the ground, we need to know the extent to which semantically similar gestures are more likely to be kinematically similar. In study 1 we assess whether semantic word2vec distances between the conveyed concepts participants were explicitly instructed to convey in silent gestures relate to the kinematic distances of these gestures as obtained from Dynamic Time Warping (DTW). In a second director-matcher dyadic study we assess kinematic similarity between spontaneous co-speech gestures produced between interacting participants. Participants were asked before and after they interacted how they would name the objects. The semantic distances between the resulting names were related to the gesture kinematic distances of gestures that were made in the context of conveying those objects in the interaction. We find that the gestures’ semantic relatedness is reliably predictive of kinematic relatedness across these highly divergent studies, which suggests that the development of an NLP method of deriving semantic relatedness from kinematics is a promising avenue for future developments in automated multimodal recognition. Deeper implications for statistical learning processes in multimodal language are discussed.
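The two distance measures being related here can both be sketched compactly: DTW for kinematic distance between movement trajectories, and cosine distance as the standard distance on word2vec-style embeddings (the vectors below are toy stand-ins; the study uses trained word2vec embeddings):

```python
# Illustrative sketch of the two distance measures: DTW over trajectories
# (Euclidean local cost) and cosine distance over embedding vectors.
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two trajectories given as
    lists of equal-dimension points."""
    n, m = len(a), len(b)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def cosine_distance(u, v):
    """1 - cosine similarity; the usual word2vec semantic distance."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (nu * nv)
```

The study's question then reduces to whether pairwise `cosine_distance` values over concept embeddings correlate with pairwise `dtw` values over the corresponding gesture trajectories.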
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form on the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained on semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
Virtual environments are increasingly being used for upper limb rehabilitation in post-stroke patients. However, there is still no clear evidence that the movements performed in virtual reality are comparable, from a kinematic point of view, to those performed in the physical world. The goal of the proposed study is thus to determine if aimed reaching movements made in a 3D ecological and immersive virtual environment – displayed through a Head Mounted Display (HMD) – are comparable to movements performed in the real world. The study foresees the realization of two comparable environmental settings representing the shelf of a supermarket. Three different groups of subjects (healthy young adults, healthy elderly, and post-stroke subjects, n=15 each) are asked to reach 5 times toward 9 targets in 3 different conditions: virtual reality, physical reality, and physical reality while holding a controller. Their movements are tracked with a stereo-photogrammetric motion capture system; movement times, peak velocities, and joint angles are then extracted for analysis. This protocol will allow comparison of reaching movements while also excluding the effects related to holding a controller. A preliminary trial revealed the feasibility of the protocol, thus the experiment will be carried out in the next months. If the results are encouraging, VR should be considered in rehabilitative treatments as a useful means to elicit patients’ motivation, but also appropriate movement synergies, thus promoting a better recovery of upper limb functions.
Along with the rapid advancement of Virtual Reality (VR) and the metaverse, interest in this technology has surged among game developers and in fields such as education and healthcare. VR has enabled the rise in immersive, gamified activities, whether for rehabilitation, therapy, or learning. Additionally, VR and Motion Capture (MoCap) have allowed developers to create further accessibility features for end-users with special needs. However, the excitement of using new technology often does not align with the end user’s use cases. The over-reliance on cutting-edge hardware can negatively impact most end users who lack access to such expensive tools. To this end, we conducted an inclusivity-focused study that enables learners to practice ASL in an immersive and engaging way using only head- and controller-based tracking. Our approach replaces full-body MoCap with Inverse Kinematics (IK) and simple controller mappings for upper-body pose and hand-gesture recognition, providing a low-cost, reproducible alternative to costly setups.
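Replacing full-body MoCap with IK, as described above, typically means solving arm poses analytically from the tracked end-effector (controller) position. A minimal 2-D two-link sketch, with the shoulder at the origin; the planar simplification and link lengths are assumptions for illustration, not the study's actual solver:

```python
# Illustrative sketch: analytic 2-D two-link IK (shoulder -> elbow -> wrist).
# The planar simplification is an assumption; real upper-body IK is 3-D.
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder_angle, elbow_angle) so the wrist of a two-link arm
    with segment lengths l1, l2 reaches target (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp handles unreachable targets
    elbow = math.acos(c2)          # elbow-down solution
    k1 = l1 + l2 * c2
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow
```

Driving this from the headset and controller poses each frame gives a plausible upper-body pose without body-worn trackers, which is the cost-saving move the study describes.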
No abstract available
While multisensory super-additivity has been demonstrated in the context of visual articulation, it is unclear whether speech and co-speech gestures are similarly subject to super-additive integration. The current study investigates multisensory integration of speech and bodily gestures, testing whether biological motion signatures of co-speech gestures enhance cortical tracking of the speech envelope. We recorded EEG from 20 healthy adults as they watched a series of multimodal discourse clips from four conditions: AV congruent clips with co-speech gestures that were naturally aligned with speech, AV incongruent clips in which gestures were not aligned with the speech, audio-only clips in which speech was delivered in isolation, and video-only clips presenting the gesture content with no accompanying speech. As we hypothesize that the kinematics of co-speech gestures are sufficient to drive gestural enhancement of speech, our clips employed minimalistic "point-light" depictions of a speaker's movements: point-light talkers. Using neural decoder models to predict the amplitude of the speech envelope from EEG elicited in all four conditions, we compared speech reconstruction performance between multisensory (AV congruent) and additive models, that is, those representing the summed neural response across the two unisensory conditions. We found significant improvement in decoder scores for models trained on AV congruent trials relative to both audio-only and additive models. Forward models of brain activity indicated signatures of multisensory integration 140-160 msec following changes to the speech envelope. These results provide novel evidence for a multisensory enhancement effect of co-speech gesture kinematics on continuous speech tracking.
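Backward decoder models of this kind are commonly implemented as time-lagged ridge regression from the EEG channels to the speech envelope (an mTRF-style backward model). A minimal sketch; the lag range, regularization strength, and variable names are illustrative assumptions rather than the study's exact pipeline:

```python
# Illustrative sketch: backward (stimulus-reconstruction) decoder as
# time-lagged ridge regression from EEG to the speech envelope.
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel (T, C) into a design
    matrix (T, C * len(lags)); out-of-range samples are zeroed."""
    T, C = eeg.shape
    X = np.zeros((T, C * len(lags)))
    for k, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, k * C:(k + 1) * C] = shifted
    return X

def train_decoder(eeg, envelope, lags, lam=1.0):
    """Closed-form ridge solution for the decoder weights."""
    X = lagged_design(eeg, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def reconstruct(eeg, w, lags):
    """Predict the envelope from held-out EEG with trained weights."""
    return lagged_design(eeg, lags) @ w
```

Decoder performance is then scored as the correlation between the reconstructed and true envelopes, which is the quantity compared across the AV congruent, unisensory, and additive models.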
No abstract available
Interpreting three-dimensional models of biological macromolecules is a key skill in biochemistry, closely tied to students’ visuospatial abilities. As students interact with these models and explain biochemical concepts, they often use gesture to complement verbal descriptions. Here, we utilize an embodied cognition-based approach to characterize undergraduate students’ gesture production as they described and interpreted an augmented reality (AR) model of potassium channel structure and function. Our analysis uncovered two emergent patterns of gesture production employed by students, as well as common sets of gestures linked across categories of biochemistry content. Additionally, we present three cases that highlight changes in gesture production following interaction with a 3D AR visualization. Together, these observations highlight the importance of attending to gesture in learner-centered pedagogies in undergraduate biochemistry education.
Human cognition and behavior can be unconsciously affected by personal avatars in the virtual world, a phenomenon known as the Proteus Effect. When using first-person non-human avatars, the characteristics of virtual hands may also induce relevant cognitive and even behavioral patterns in real human hands. Therefore, evaluating human-avatar gesture consistency may be a potentially effective method for objectively assessing the Proteus Effect when using non-human avatars. To explore this question, we first created human and non-human avatars, including three animals. Then, we constructed a dataset of hand gestures and trained a model to dynamically recognize real-hand gestures that were consistent with corresponding avatar hands. Next, we designed a virtual reality experimental task involving grasping objects with intuitive gestures and performed a 2 (avatar type: human/non-human) × 2 (virtual hand: presence/absence of spontaneous movement) within-subject experiment to examine the effects of avatar characteristics on self-illusion and human-avatar gesture consistency. The results showed that participants performed a significantly larger percentage of gestures that were consistent with their currently used avatars. Additionally, participants did experience self-illusion when using non-human avatars, although the levels were significantly lower than those when using human avatars. Therefore, self-illusion may serve as a perceptual antecedent of the Proteus Effect, even with non-human avatars, inadvertently altering the behavioral gestures of their real hands. In conclusion, detecting human-avatar gesture consistency can help evaluate the Proteus Effect.
This article presents a novel digital method of capturing finger-based gestures on touchscreen devices for the purpose of exploring tracing gestures in educational research. Given that tracing has been found to support cognition, learning, and problem solving in educational settings, data related to the performance of these gestures are increasingly of interest to researchers. Most educational research methods exploring the use of hand gestures rely on in-person data collection, whether through direct observation or video recording of participants’ behavior for later analysis. These methods, while effective for observing gross movements, may not provide researchers with detailed insights into how learners interact with learning materials. Using custom tools to record touchscreen engagement on tablet computing devices can address this limitation while also providing the means to visually represent touch-based interactions with these devices. Geometry Touch is an iPad app developed and tested by the primary author as a part of a pilot study. The research study, theoretically grounded in cognitive load theory (CLT), demonstrated that Geometry Touch could efficiently collect data on touchscreen interactions while also providing potential avenues to quantify touchscreen interactions through computational means. The purpose of this article is to report on the development and testing of this app while providing an explanation of how it was used as a method of data collection by leveraging touchscreen technology. The article concludes by discussing how this digital method of capturing movement can provide further insight into how finger-based gestures can influence learning and, as such, could increase the reach of gesture-based research.
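Touch-trace capture of the kind this abstract describes can be sketched as a simple event logger. All names below are illustrative, not the Geometry Touch API; the point is that per-sample (x, y, t) logging makes gestures quantifiable in ways video observation is not.

```python
import math

class TraceRecorder:
    """Minimal sketch of touch-trace logging: each stroke is a list
    of (x, y, timestamp) samples fed in by the platform's touch
    callbacks (touch down / move / up)."""

    def __init__(self):
        self.strokes = []      # finished strokes
        self._current = None   # stroke in progress

    def touch_down(self, x, y, t):
        self._current = [(x, y, t)]

    def touch_move(self, x, y, t):
        if self._current is not None:
            self._current.append((x, y, t))

    def touch_up(self, x, y, t):
        if self._current is not None:
            self._current.append((x, y, t))
            self.strokes.append(self._current)
            self._current = None

    def path_length(self, stroke_index):
        """Total traced distance -- one simple computational measure
        of a tracing gesture."""
        pts = self.strokes[stroke_index]
        return sum(math.hypot(x2 - x1, y2 - y1)
                   for (x1, y1, _), (x2, y2, _) in zip(pts, pts[1:]))
```

From the same logs one could derive speed profiles or dwell times, the kind of fine-grained measures the article argues in-person observation cannot provide.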
No abstract available
Current Multimodal Large Language Models (MLLMs) mainly focus on vision and language modalities, often overlooking the integration of other senses, such as tactile perception. In this paper, we present Improving Language Model Cognition with Tactility-Vision Fusion (TALON) to achieve tactility-vision fusion. We first develop a high-density flexible array tactile sensor, Hand-Scan, and deploy it on a data glove. Using the glove, we collect tactile information, and with a camera, we gather visual information to construct the TALON dataset, containing both tactile and visual data. We then train our TALON model using this dataset, achieving modality alignment. Our experiments demonstrate that the TALON model exhibits outstanding recognition capabilities with an accuracy rate of 99.45%, surpassing solely vision-language training (97.58%) and solely tactility-language training (70.47%). Particularly in complex gesture recognition tasks, the accuracy reached 98.82% (+3.06% over vision-language, +18.38% over tactility-language), showcasing near-perfect performance and demonstrating the effectiveness of tactility-vision fusion.
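One simple way to realize tactility-vision fusion is late fusion of per-modality embeddings. The sketch below is illustrative only; the normalization and weighting scheme are assumptions, not TALON's published architecture.

```python
import numpy as np

def fuse_features(visual, tactile, w_visual=0.5):
    """Late-fusion sketch: L2-normalise each modality's embedding so
    neither dominates by scale, then concatenate with a modality
    weight. The fused vector feeds a downstream classifier."""
    v = visual / (np.linalg.norm(visual) + 1e-8)
    t = tactile / (np.linalg.norm(tactile) + 1e-8)
    return np.concatenate([w_visual * v, (1 - w_visual) * t])
```

The intuition matches the reported numbers: when one modality is weak on its own (here, tactility-language at 70.47%), its signal can still disambiguate cases the stronger modality confuses, lifting the fused model above either alone.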
When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory, hypothesis-generating study of descriptions produced by an ethnolinguistic community little known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline on the right, when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonal directions of the sagittal axis. We show that these diagonals also appear in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication, and thereby provide further insight into the diversity of social structures.
No abstract available
Interactive Machine Learning is a promising approach for designing movement interaction because it allows developers to capture complex movements by simply performing them. We introduce a new tool being developed to make embodied interaction design faster, more adaptable, and more accessible to developers of varying experience and backgrounds. Using the tool, we conduct workshops with creative practitioners and developers to explore techniques that equip users with embodied ideation design strategies encouraging full-body interaction for immersive media.
Driven by the wave of digital intelligence, virtual reality (VR) technology, with its high immersion and interactivity, shows increasingly prominent potential in education and has become an important force for educational innovation. Nevertheless, how the degree of presence in a VR learning environment affects student engagement, and whether a highly immersive experience imposes additional cognitive load, requires further exploration. This study employed a quasi-experimental design involving 44 freshmen majoring in digital media technology at a university in Wuhan, China, who were randomly assigned to either a PC-based or a VR-based learning environment. Through a "chip assembly" VR course teaching experiment, the study found that a highly embodied VR learning environment can effectively enhance student engagement. The study also revealed that the highly embodied VR environment did not increase students' cognitive load, which may be attributed to the intuitive, easy-to-understand learning content afforded by VR technology, effectively avoiding cognitive overload. This study provides concrete cases and practical references for the application of VR technology in educational practice.
The study uses augmented reality (AR) technology to integrate virtual objects into the real learning environment for language learning. The English AR classroom is constructed using the system prototyping method and evaluated through semi-structured in-depth interviews. Drawing on Csikszentmihalyi's (1975) flow theory and the immersive factors identified by Trevino and Webster (1992), the proposed system combines computer information systems with augmented reality technology to create flow experiences and embodied cognition. Based on the opinions of domain experts, the AR prototype system validates the feasibility of digital immersive language learning and embodied cognition. In addition, concerns about curriculum design based on this system are discussed with a view to practical use.
The integration of digital technologies into international Chinese language education has reshaped the modes, channels, and mechanisms of knowledge dissemination. This study explores the development and validation of a participatory digital media model for international Chinese language learning, focusing on the interplay between technological embodiment, learners’ perceptual experiences, and knowledge dissemination outcomes. Employing a mixed-methods approach that combines qualitative interviews and quantitative surveys, this research examines the evolution of digital media in international Chinese education through four developmental phases: the initial emergence of online instruction, the expansion of digital platforms, the proliferation of digital resources, and the current transition toward intelligent and immersive learning environments. The study constructs a mechanism model that elucidates how embodied environments and embodied perceptions jointly shape the effectiveness of knowledge dissemination. Empirical findings reveal significant positive relationships among online interaction, resource quality, embodied perception, and dissemination outcomes, while embodied perception mediates the relationship between the digital environment and learners’ satisfaction and continuance intentions. Challenges such as insufficient digital literacy among educators and inadequate evaluation frameworks are identified, and a four-dimensional optimization strategy is proposed to enhance future practices. The findings contribute theoretical insights into digital media’s role in international language education and offer practical pathways for leveraging AI, big data, and virtual reality technologies to promote high-quality, learner-centered Chinese language education in the digital era.
With the advancement of new media and digital technologies, artificial intelligence (AI), big data, 5G communication, virtual reality (VR), and augmented reality (AR) are reshaping dance education by enabling personalized learning, immersive training, and interactive experiences. However, as an art form grounded in bodily perception and emotional expression, dance education still requires pedagogical guidance; excessive reliance on technology may lead to the alienation of embodied experience. This study proposes a “technology–culture–body” triadic interaction model, grounded in TPACK, media convergence theory, and affordance theory, to examine the integration mechanisms of technology, pedagogy, and disciplinary knowledge within the digital transformation of dance education. The research highlights the significance of cultural context and embodied experience, revealing that while new media platforms enhance learning motivation, facilitate resource sharing, and promote global dissemination, algorithmic systems may inadvertently reduce cultural diversity. The study provides a theoretical framework and practical reference for the digital transformation and innovation of dance education, proposing sustainable development strategies and future research directions to support its high-quality growth and cultural innovation.
The growth of media technologies and maker culture has expanded craft learning from instructor-guided models to diverse self-directed approaches. However, mastering crafts such as ceramics remains challenging due to their embodied nature and the difficulty of tacit knowledge transfer. While Mixed Reality (MR) and Artificial Intelligence (AI) have supported embodied task learning, their application in craft remains underexplored. We present an AI-augmented MR ceramic guiding system to investigate the interplay between these technologies and craft practices, including how they influence instruction design, shape user perception, and transform learning contexts. Our system provides immersive multimedia instruction and real-time shape-based feedback using computer vision and large language models (LLMs) to guide learners in wheel-throwing on a pottery wheel. Through a Research-through-Design process, we co-designed and evaluated the system with twenty novices and experienced ceramic practitioners. We offer design insights for AI-MR craft learning systems and identify opportunities to extend their application to creative, collaborative, and broader craft-making scenarios.
With the rapid development of virtual reality (VR) technology, applying VR to assist teaching has become a trend in education. At present, traditional poetry is usually learned from books; the lack of physical and sensory interaction in the learning process leads to low initiative and enthusiasm, and thus low learning efficiency. Against this background, this paper develops a poetry-learning game based on VR technology that creates interactive scenarios from the scenes depicted in poems and allows users to learn poems through an immersive experience. Users complete their poetry-learning objectives by experiencing the game's plot and completing game tasks. Based on embodied cognition theory, the game applies the strong sense of presence and immersion of VR technology to enhance the interaction between the user's body and the poetry scene, improving the user's knowledge and understanding of the poetry, deepening their memory of it, and enabling them to learn in a more effective and engaging form. The game uses Cinema 4D to design and model the poetry scenes and the Unity game engine to realize interaction with the user. A study of the game scenes and user evaluation shows that the game achieves its design purpose and has application value.
Digital media, such as interactive video, games, and immersive worlds, offer rich visual perspectives, often allowing one to experience events through another’s eyes. While prior research indicates that considering alternative perspectives facilitates understanding, little is known about how media-enhanced perspectives affect learning processes for higher-order concepts that require synthesis of ideas and making inferences such as reasoning about problems in science. Two experiments used digital video of a science instructional event to investigate features of visual perspective on engagement and knowledge construction. Study 1 showed that an embodied first-person viewpoint achieved using a head-mounted camera better supported learning than a traditional third-person view of the same event. In Study 2, applying a motion algorithm to both a first-person and third-person video allowed us to isolate the effects of viewpoint and camera motion. While the addition of artificial motion benefited learning for third-person viewers, only motion that is aligned with the actor’s actions and affect enhances first-person viewing. Findings are considered in terms of how certain media position learners in relation to educational content. Specifically, we argue that media features such as viewpoint and motion can be configured in ways to create “fields of potential action” that engage viewers and optimize conditions for learning.
No abstract available
Thinking in and understanding of three-dimensional structures is omnipresent in many sciences, from chemistry to the geosciences. Current visualizations, however, still use two-dimensional media such as maps, or three-dimensional representations accessible only through two-dimensional interfaces (e.g., desktop computers). The emergence of immersive virtual reality environments, both accessible and of high quality, allows for creating embodied and interactive experiences that permit rethinking learning environments and provide access to three-dimensional information through three-dimensional interfaces. However, there is a shortage of empirical studies on immersive learning environments. In response to this shortcoming, this study examines the role of immersive VR (iVR) in improving students’ learning experience and performance in terms of penetrative thinking in a critical 3D task in geosciences education: drawing cross-sections. We developed a pilot study in which students were asked to draw cross-sections of the depth and geometry of earthquakes at two subduction zones after visualizing the earthquake locations either in iVR or on 2D maps on a computer. The results of our study show that iVR creates a better learning experience; students reported significantly higher scores on the Spatial Situation Model, and there is anecdotal evidence in favor of higher reflective thinking in iVR. In terms of learning performance, we did not find a significant difference in the graded cross-section drawing exercise. However, iVR seems to have a positive effect on understanding the geometry of earthquake locations in a complex tectonic environment such as Japan. Our results therefore add to the growing body of literature that draws a more nuanced picture of the benefits of immersive learning environments, calling for larger-scale and in-depth studies.
This presentation proposes an approach to designing technology-enhanced learning (TEL) through the strategic integration of diverse multimodal media forms within a framework informed by the 4E+ view of cognition. The 4E+ cognition framework emphasises the embodied, embedded, enactive, and extended nature of cognition, suggesting that cognition is not solely confined to the brain but extends into the environment while involving the body's interactions with that environment (Carney, 2020; Jianhui, 2019; Menary, 2010; Newen et al., 2018). In this theoretical context, our study explores how various modes of media, such as immersive technologies, digital interactive elements, real-world analogue creations, audio, sound, images, videos, animations, text, and the surrounding environment, can be orchestrated to create sensorially rich and more meaningful learning experiences (Gilakjani et al., 2011; Philippe et al., 2020; Sankey et al., 2010). For example, extended reality (XR) learning design combines immersive media forms to support multi-sensory and expanded cognitive learning (Philippe et al., 2020; Rakkolainen et al., 2021; Villalobos & Videla, 2023). Other relevant approaches include gamification and transmedia storytelling methods (Doumanis et al., 2019; Perry, 2020). By leveraging different modalities, educators can design learning materials that engage learners through different sensory activations and presentation methods (Bouchey et al., 2021). This approach caters to the 4E+ view of cognition, subsequently enhancing knowledge acquisition and retention. Examples from our own practice and research (such as the Explora: Chile es Mar, Pipi's World, and O-Tū-Kapua XR learning experiences), as well as current educational examples (Bouchey et al., 2021; Philippe et al., 2020), demonstrate how multimodal media integration facilitates deeper engagement, critical thinking, and a more holistic understanding of complex concepts.
Furthermore, we discuss practical strategies for educators to implement these principles in their TEL design, highlighting the potential of aligning multimodal design choices with the 4E+ cognitive framework. Ultimately, we advocate for a shift towards a more inclusive and effective approach to technology-enhanced learning - one that embraces the diversity of human cognitive processes and leverages multimodal media to communicate meaningful knowledge in ways that resonate with learners' cognitive structures and experiences. Multimodal methods, when aligned with the distributed 4E+ view of cognition, can make TEL resonate on deeper levels, engaging learners across various sensory, environmental, and communication modes. This type of approach acknowledges the diversity of ways that humans process and understand phenomena, and how more effective learning can occur when multiple ways of knowing are engaged. Furthermore, through this method, inclusivity can be heightened for students with diverse cultural, neurological, or other backgrounds (Anis & Khan, 2023; Boivin & CohenMiller, 2022). Emerging research shows the potential of the 4E+ approach to meet the needs of learning in 21st-century technological environments (Videla & Veloz, 2023; Villalobos & Videla, 2023). This presentation contributes to the literature by examining TEL design through a multimodal media lens. It highlights how the holistic 4E+ framework can more effectively and meaningfully engage students than computational, monomodal, and bimodal uses of technology in educational settings.
References:
- Anis, M., & Khan, R. (2023). Integrating multimodal approaches in English language teaching for inclusive education: A pedagogical exploration.
- Boivin, A. C. N., & CohenMiller, A. (2022). Inclusion and equity with multimodality during COVID-19. Keep Calm, Teach On: Education Responding to a Pandemic, 87.
- Bouchey, B., Castek, J., & Thygeson, J. (2021). Multimodal learning. Innovative Learning Environments in STEM Higher Education: Opportunities, Challenges, and Looking Forward, 35-54.
- Carney, J. (2020). Thinking avant la lettre: A review of 4E cognition. Evolutionary Studies in Imaginative Culture, 4(1), 77-90.
- Doumanis, I., Economou, D., Sim, G. R., & Porter, S. (2019). The impact of multimodal collaborative virtual environments on learning: A gamified online debate. Computers & Education, 130, 121-138.
- Gilakjani, A. P., Ismail, H. N., & Ahmadi, S. M. (2011). The effect of multimodal learning models on language teaching and learning. Theory & Practice in Language Studies, 1(10).
- Jianhui, L. (2019). Transcranial theory of mind: A new revolution of cognitive science. International Journal of Philosophy, 7(2), 66-71.
- Menary, R. (2010). Introduction to the special issue on 4E cognition. Phenomenology and the Cognitive Sciences, 9, 459-463.
- Newen, A., De Bruin, L., & Gallagher, S. (Eds.). (2018). The Oxford Handbook of 4E Cognition. Oxford University Press.
- Perry, M. S. (2020). Multimodal engagement through a transmedia storytelling project for undergraduate students. GEMA Online Journal of Language Studies, 20(3).
- Philippe, S., Souchet, A. D., Lameras, P., Petridis, P., Caporal, J., Coldeboeuf, G., & Duzan, H. (2020). Multimodal teaching, learning and training in virtual reality: A review and case study. Virtual Reality & Intelligent Hardware, 2(5), 421-442.
- Rakkolainen, I., Farooq, A., Kangas, J., Hakulinen, J., Rantala, J., Turunen, M., & Raisamo, R. (2021). Technologies for multimodal interaction in extended reality: A scoping review. Multimodal Technologies and Interaction, 5(12), 81.
- Sankey, M., Birch, D., & Gardiner, M. W. (2010). Engaging students through multimodal learning environments: The journey continues. Proceedings of the 27th Australasian Society for Computers in Learning in Tertiary Education, 852-863.
- Videla, R., & Veloz, T. (2023). The 4E approach applied to education in the 21st century. Constructivist Foundations, 18(2), 153-157.
- Villalobos, M., & Videla, R. (2023). The roots and blossoms of 4E cognition in Chile: Introduction to the Special Issue on 4E cognition in Chile. Adaptive Behavior, 31(5), 397-404.
No abstract available
This article examines embodied interaction in a virtual reality learning environment. Studies of embodied interaction in immersive learning environments, like virtual reality, tend to treat all bodies the same without considering the nuanced cultural histories those bodies have with being mobile, especially within—and beyond—technology-mediated environments. In response, this study pivots from perspectives on embodied interaction that underscore the inextricable link between mind and body in favor of sociocultural perspectives on embodiment that emphasize the cultural-historical production of embodied interaction across space and over time. Through multimodal analysis of 10 learners’ experiences in a virtual reality experience called Thought for Food, this article contributes (1) an overt focus on the importance of feeling histories—embodied ways of sensing, feeling, and moving within digital environments—of learners engaging in virtual reality environments, in order to promote equitable learning opportunities, and (2) an argument for future designs attuned to frictions—contestations between bodies and interfaces—that potentially collide with learners’ feeling histories.
No abstract available
Recent advancements in virtual reality (VR) technology have enabled the creation of immersive learning environments that provide engineering students with hands-on, interactive experiences. This paper presents a novel framework for virtual laboratory environments (VLEs) focused on embodied learning, specifically designed to teach concepts related to mechanical and materials engineering. Utilizing the principles of embodiment and congruency, these VR modules offer students the opportunity to engage physically with virtual specimens and machinery, thereby enhancing their understanding of complex topics through sensory immersion and kinesthetic interaction. Our framework employs an event-driven, directed-graph-based architecture developed with Unity 3D and C#, ensuring modularity and scalability. Students interact with the VR environment by performing tasks such as selecting and testing materials, which trigger various visual and haptic events to simulate real-world laboratory conditions. A pre-/post-test evaluation method was used to assess the educational effectiveness of these VR modules. Results demonstrated significant improvements in student comprehension and retention, with notable increases in test scores compared to traditional non-embodied VR methods. The implementation of these VLEs in a university setting highlighted their potential to democratize access to high-cost laboratory experiences, making engineering education more accessible and effective. By fostering a deeper connection between cognitive processes and physical actions, our VR framework not only enhances learning outcomes but also provides a template for future developments in VR-based education. Our study suggests that immersive VR environments can significantly improve the learning experience for engineering students.
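The event-driven, directed-graph architecture this abstract describes can be sketched conceptually as follows. The paper's system is built in Unity 3D with C#; this Python sketch with illustrative step names is a conceptual analogue only, showing how firing one step's event unlocks its successors in the lab flow.

```python
class LabGraph:
    """Sketch of an event-driven, directed-graph lab flow. Nodes are
    lab steps (e.g. selecting a material, mounting a specimen); an
    edge A -> B means B unlocks once A's completion event has fired."""

    def __init__(self):
        self.edges = {}       # step -> list of successor steps
        self.prereqs = {}     # step -> set of unfinished prerequisites
        self.completed = set()

    def add_edge(self, src, dst):
        self.edges.setdefault(src, []).append(dst)
        self.prereqs.setdefault(dst, set()).add(src)
        self.prereqs.setdefault(src, set())

    def fire(self, step):
        """Mark a step's completion event as fired; return the steps
        that become available now that all their prerequisites are done."""
        self.completed.add(step)
        unlocked = []
        for nxt in self.edges.get(step, []):
            self.prereqs[nxt].discard(step)
            if not self.prereqs[nxt]:
                unlocked.append(nxt)
        return unlocked
```

In a real module the unlocked steps would trigger visual and haptic events; the graph structure is what gives the framework its claimed modularity and scalability, since new steps attach as nodes without rewiring existing ones.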
No abstract available
This study investigated the effectiveness of embodied learning (EL) approaches in classrooms for improving students' learning effectiveness. A drama-based learning strategy was used to address the gap in creating an authentic learning environment for classroom learners, enabling them to be situationally embodied in the learning context and perform hands-on activities. A total of 76 undergraduate students were randomly assigned to either the experimental group (n = 31) or the control group (n = 44). The experimental group used a learning system to immerse themselves in a drama-based virtual scenario for EL activities, with real-time speech, object and image, body motion, and gesture recognition techniques provided as self-evaluation tools while performing; the control group used immersion technologies to situationally embody a virtual fictional context, with the teacher and peers evaluating after the activity. Results showed improved learning effectiveness for both groups, but the experimental approach outperformed the control. This study shows that EL approaches focusing on situatedness can enhance learning effectiveness and provides a useful guide for educational practitioners. Educators can facilitate the effective transfer of knowledge and skills to learners through engaging and immersive learning experiences.
No abstract available
In the era of intelligence, traditional animation art has gradually evolved from Flash interactive animation and immersive animation into 4D new-media interactive animation produced with lighting technology, physics-engine technology, extended reality technology, and new-media interaction technology. Based on the theory of embodied cognition, a dynamic system performs precise environmental calculations for the virtual-real symbiotic space generated by 3D animation and simulation of the surrounding environment. The audience interacts with 4D new-media animation through the visual, auditory, tactile, olfactory, and gustatory senses, producing a five-senses linkage and obtaining a strong sense of immersion and an embodied synesthetic experience.
Spatial ability is an important skill for art students; its learning difficulty lies in students' need to form abstract three-dimensional thinking and spatial perception. Common digital learning media (DLM) consume substantial cognitive resources and leave students with limited gains in spatial ability. Previous studies have shown that virtual reality (VR) technology has unique advantages for improving spatial ability and training design thinking. This study uses VR technology to design an immersive learning environment (ILE) and examines differences in students' learning performance and cognitive load between a slide-based DLM mode and a VR-based ILE mode. Twenty-eight first-year university students participated in the experiment, divided into control and experimental groups balanced by entrance grades and gender. Learning performance and cognitive load were measured through academic ability tests and questionnaires. The results show that the main effect of the learning environment is significant: students in the ILE have lower cognitive load and higher learning performance, and gender does not significantly influence cognitive load or academic performance. By contrast, the DLM increases students' cognitive load, with females showing higher cognitive load than males. These results provide a reference for future spatial-ability learning and for the impact of cognitive load on learning performance, while also supporting sustainable development by promoting innovative educational approaches aligned with the Sustainable Development Goals (SDGs).
This paper presents the design and evaluation of IMAGINE, a novel interactive immersive smart space for embodied learning. In IMAGINE children use full-body movements and gestures to interact with multimedia educational contents projected on the wall and on the floor, while synchronized light effects enhance immersivity. A controlled study performed at a primary school with 48 children aged 6-8 highlights the educational potential of an immersive embodied solution, also compared to traditional teaching methods, and draws some implications for smart-space technology adoption in educational contexts.
No abstract available
This paper is based on a case study that examined two types of interaction methods in Virtual Reality (VR) through observation. Participants were assigned to complete a design task using VR, where they engaged in either direct or indirect interaction methods. The direct interaction mimicked real-world actions, while the indirect interaction involved using a mediating user interface. Using the ‘Immersive Framework for UX and Learning in Immersive Technology for Learner Engagement’ as the theoretical foundation, this study analyzed how these two interaction methods influenced user experience, engagement, and educational outcomes. Participants were divided into two groups, each experiencing one of the interaction methods while designing an office space in VR. While direct interaction mimics real-world activities, enhancing physical engagement and potentially intrinsic motivation, indirect interaction provides greater precision through a user interface but requires more cognitive effort and affects usability and motivation differently. Based on these observations, the authors suggest that a combination of both interaction methods can create a balanced and effective learning environment. This approach supports hands-on learning in the initial stages and precision tasks in more advanced stages. The study offers insights aimed at guiding educators in selecting appropriate VR interaction methods to optimize educational content development and improve learner engagement and outcomes.
Interactive Machine Learning offers a method for designing movement interaction that supports creators in implementing even complex movement designs in their immersive applications simply by performing them with their bodies. We introduce a new tool, InteractML, and an accompanying ideation method, which make movement interaction design faster, more adaptable, and more accessible to creators of varying experience and backgrounds, such as artists, dancers, and independent game developers. The tool is specifically tailored to non-experts: creators configure and train machine learning models via a node-based graph and VR interface, requiring minimal programming. We aim to democratise machine learning for movement interaction for use in the development of a range of creative and immersive applications.
In this paper, we developed an immersive training environment for mastering the use of a fire extinguisher, with the possibility of receiving live feedback from a remote expert. To facilitate communication and knowledge sharing between trainees and remote experts, we provide bidirectional audio and event synchronization. As additional novel features, we allow the user to see and interact with their own body and real fire extinguisher using deep learning and color-based semantic segmentation. We also allow the expert to monitor the trainee by rendering a 2D video of their egocentric view. Preliminary results show the benefit of receiving live feedback and the potential of using a real fire extinguisher.
The present study used immersive virtual‐reality (iVR) technology to simulate a real‐life environment and examined its impact on novel‐word learning and lexicalization. On Days 1–3, Chinese‐speaking participants learned German words in iVR and traditional picture–word (PW) association contexts. A semantic‐priming task was used to measure word lexicalization on Day 4, and again 6 months later. The behavioral findings of an immediate posttest showed a larger semantic‐priming effect on iVR‐learned words compared to PW‐learned words. Moreover, electrophysiological results of the immediate posttest demonstrated significant semantic‐priming effects only for iVR‐learned words, such that related prime–target pairs elicited enhanced N400 amplitude compared to unrelated prime–target pairs. However, after 6 months, there were no differences between the iVR and PW conditions. The findings support the embodied‐cognition theory and dual‐coding theory and suggest that a virtual real‐life learning context with multimodal enrichment facilitates novel‐word learning and lexicalization but that these effects seem to disappear over time.
As physical activity can significantly enhance learning in virtual reality (VR) environments, we hypothesize that in hands-on courses, where tasks require both cognitive and physical efforts, the effective design of embodied interactions may also improve students' learning outcomes. This study compares desktop virtual reality (DVR) and immersive virtual reality (IVR), examining the effects of a sense of embodiment on learning outcomes, learning experiences and cognitive load during skill acquisition. The findings confirm that IVR environments significantly increased students' sense of embodiment during the learning process. Although this increased embodiment did not result in a significant difference between IVR and DVR in terms of enhancing learning outcomes, a significant reduction in students' cognitive load was observed in IVR. IVR environments with limited textual content and consistent alignment between gestures and learning materials may influence cognitive load. Based on these findings, it is proposed that IVR can help students achieve learning outcomes comparable to those in DVR, but with reduced cognitive load, particularly in hands-on courses.
No abstract available
To what extent can language give rise to complex conceptual representation? Is multisensory experience essential? Recent large language models (LLMs) challenge the necessity of grounding for concept formation, raising the question of whether LLMs without grounding nevertheless exhibit human-like representations. Here we compare multidimensional representations of ~4,442 lexical concepts between humans (the Glasgow Norms, N = 829; and the Lancaster Norms, N = 3,500) and state-of-the-art LLMs with and without visual learning, across non-sensorimotor, sensory and motor domains. We found that (1) the similarity between model and human representations decreases from non-sensorimotor to sensory domains and is minimal in motor domains, indicating a systematic divergence, and (2) models with visual learning exhibit enhanced similarity with human representations in visual-related dimensions. These results highlight the potential limitations of language in isolation for LLMs and suggest that integrating diverse modalities can enhance alignment with human conceptual representation.
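The human–model comparison described in this abstract reduces, at its core, to correlating ratings dimension by dimension. A minimal Python sketch of that per-dimension similarity (the five ratings below are invented placeholders, not the Glasgow or Lancaster data):

```python
import numpy as np

def domain_similarity(human: np.ndarray, model: np.ndarray) -> float:
    """Pearson correlation between human and model ratings on one dimension."""
    h = human - human.mean()
    m = model - model.mean()
    return float(h @ m / (np.linalg.norm(h) * np.linalg.norm(m)))

# Hypothetical ratings for 5 concepts on a single dimension (e.g. "visual").
human = np.array([4.2, 1.1, 3.8, 2.5, 4.9])
model = np.array([3.9, 1.4, 3.5, 2.9, 4.6])
r = domain_similarity(human, model)
```

Repeating this over non-sensorimotor, sensory, and motor dimensions would yield the domain-wise divergence profile the study reports.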
Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how Large Language Models (LLMs) such as ChatGPT work (their vast text databases, statistics, vector representations, and huge number of parameters, next-word training, etc.). However, none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases”—convergent constraints that emerge at the LLM scale that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at the LLM scale, and they are closely linked to what it is that ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought. The exposition will be in the form of a dialogue with ChatGPT-4.
How do living organisms decide and act with limited and uncertain information? Here, we discuss two computational approaches to solving these challenging problems: a “cognitive” and a “sensorimotor” enrichment of stimuli, respectively. In both approaches, the key notion is that agents can strategically modulate their behavior in informative ways, e.g., to disambiguate amongst alternative hypotheses or to favor the perception of stimuli providing the information necessary to later act appropriately. We discuss how, despite their differences, both approaches appeal to the notion that actions must obey both epistemic (i.e., information-gathering or uncertainty-reducing) and pragmatic (i.e., goal- or reward-maximizing) imperatives and balance them. Our computationally-guided analysis reveals that epistemic behavior is fundamental to understanding several facets of cognitive processing, including perception, decision making, and social interaction.
No abstract available
A core function of intelligence is grounding, which is the process of connecting natural language and abstract knowledge to the internal representation of the real world in an intelligent being, e.g., a human. Human cognition is grounded in our sensorimotor experiences in the external world and subjective feelings in our internal world. We use languages to communicate with each other, and the languages are grounded in our shared sensorimotor experiences and feelings. Without this shared grounding, it is impossible for us to understand each other, because all natural languages are highly abstract and can describe only a tiny portion of what has happened or is happening in the real world. Although grounding at high or abstract levels has been studied in different fields and applications, to our knowledge, limited systematic work at fine-grained levels has been done. With the rapid progress of large language models (LLMs), it is imperative that we have a sound understanding of grounding in order to move to the next level of intelligence. It is also believed that grounding is necessary for Artificial General Intelligence (AGI). This paper makes an attempt to systematically study this problem.
Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on—and therefore might be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge to interpret them as intentional agents to understand and anticipate their behavior but also to keep in mind that the intentionality of artificial bodies is fundamentally different from their natural counterparts. This requires, on one hand, a “suspension of disbelief” but, on the other hand, also a capacity for the “suspension of belief.” This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela’s notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.
What happens in the brain when we learn? Ever since the foundational work of Cajal, the field has made numerous discoveries as to how experience could change the structure and function of individual synapses. However, more recent advances have highlighted the need for understanding learning in terms of complex interactions between populations of neurons and synapses. How should one think about learning at such a macroscopic level? Here, we develop a conceptual framework to bridge the gap between the different scales at which learning operates, from synapses to neurons to behavior. Using this framework, we explore the principles that guide sensorimotor learning across these scales, and set the stage for future experimental and theoretical work in the field.
No abstract available
One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data revealed the existence of various cortical regions relevant for meaning processing, ranging from semantic hubs generally involved in semantic processing to modality-preferential sensorimotor areas involved in the processing of specific conceptual categories. Why and how the brain uses such complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we improve pre-existing neurocomputational models of semantics by incorporating spiking neurons and a rich connectivity structure between the model ‘areas’ to mimic important features of the underlying neural substrate. Semantic learning and symbol grounding in action and perception were simulated by associative learning between co-activated neuron populations in frontal, temporal and occipital areas. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. 
The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
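The associative mechanism this model describes — Hebbian strengthening between co-activated phonological and sensorimotor populations, yielding cell assemblies that complete from partial cues — can be caricatured in a few lines of NumPy. This is a toy rate-based sketch, not the paper's spiking, connectivity-constrained model; the pattern, sizes, learning rate, and threshold are all invented:

```python
import numpy as np

n = 12                     # neurons across two model "areas"
W = np.zeros((n, n))       # synaptic weight matrix
eta = 0.1                  # Hebbian learning rate

# One word: phonological units (indices 0-2, "area 1") co-active with
# sensorimotor units (indices 6-8, "area 2"), grounding by correlation.
pattern = np.zeros(n)
pattern[[0, 1, 2, 6, 7, 8]] = 1.0

for _ in range(50):                        # repeated co-activation episodes
    W += eta * np.outer(pattern, pattern)  # Hebb: fire together, wire together
    np.fill_diagonal(W, 0.0)               # no self-connections
    W = np.clip(W, 0.0, 1.0)               # saturating weights

# A partial phonological cue reactivates the whole distributed assembly.
cue = np.zeros(n)
cue[[0, 1]] = 1.0
recalled = (W @ cue > 0.5).astype(float)
```

The pattern-completion step is the toy analogue of a word form reactivating its grounded semantic circuit across cortical areas.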
No abstract available
Wearables pervade many facets of human endeavor, thanks to their integration into everyday artifacts and activities. From fitness bands to medical patches, to augmented reality glasses, wearables have demonstrated immense potential for intelligence augmentation (IA) through human-machine symbiosis. To advance an understanding of how wearables engender IA and to provide a solid foundation for grounding IS research on wearables and IA, this study draws from Engelbart’s framework for augmenting human intellect to: (1) develop a conceptual definition of wearable technology as a digitally enhanced body-borne device that can augment a human or non-human capability by affording context sensitivity, mobility, hands-free interaction, and constancy of operation, (2) extend Engelbart’s framework to the sociomaterial domain to account for the emergence of augmented capabilities that are neither wholly social nor wholly material, and (3) propose and elaborate four augmentation pathways — complementation, supplementation, mediation, and mutual constitution — to facilitate IA research.
In human-machine communication situations, perceptual and conceptual deviations can appear. This paper tackles the challenge of categorising colours. Colour perception is highly subjective: colours may be perceived differently depending on a person’s eye anatomy and on a sense of sight that adapts to the surroundings and perceives different brightnesses of hues depending on the context. The ability to distinguish more or fewer hues also depends on the level of expertise as well as on the cultural and social environment. Colour naming involves conceptual alignment with human cognition, meaning, and human understanding, both for referring to an object and for discriminating among objects. Studies in cross-cultural linguistics suggest that humans treat prototypical colours as the centres of colour categories. Hence, a cognitive colour model should indicate when a colour coordinate is close to or far from the centre of its category, and these category centres should be adaptable and customisable depending on the society. A fuzzy colour model based on the HSL colour space and radial basis functions is presented in this paper. Rules are defined to combine this fuzzy colour model with a Probabilistic Reference And GRounding mechanism (PRAGR) in order to obtain the most discriminative colour descriptor for an object depending on the context. Two case studies related to human cognition are presented. Further tests are then carried out on a dataset in which the first and second most discriminative colours are computed for each object in each scene. Finally, a survey is conducted to assess the cognitive adequacy of the obtained discriminative colour names.
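A fuzzy colour category of the kind described can be sketched as a Gaussian radial basis function over HSL hue, with circular distance to a prototype centre. The prototype hues and width below are placeholder assumptions, not the paper's model (which also involves saturation, lightness, and the PRAGR mechanism):

```python
import math

# Hypothetical prototype hues (degrees on the HSL hue circle); the paper's
# actual category centres are adaptable per culture, so these are placeholders.
PROTOTYPES = {"red": 0.0, "yellow": 60.0, "green": 120.0, "blue": 240.0}
SIGMA = 30.0  # assumed width of each radial basis function, in degrees

def circular_distance(a: float, b: float) -> float:
    """Shortest angular distance on the 360-degree hue circle."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def membership(hue: float, name: str) -> float:
    """Gaussian RBF membership of a hue in a named colour category."""
    d = circular_distance(hue, PROTOTYPES[name])
    return math.exp(-(d ** 2) / (2 * SIGMA ** 2))

def best_colour_name(hue: float) -> str:
    """Pick the category whose prototype centre is fuzzily closest."""
    return max(PROTOTYPES, key=lambda name: membership(hue, name))
```

Because hue 350° wraps around the circle, it still falls under "red" — the kind of graded, centre-relative judgement the abstract argues a cognitive colour model needs.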
Attention is a complex and broad concept, studied across multiple disciplines spanning artificial intelligence, cognitive science, psychology, neuroscience, and related fields. Although many of the ideas regarding attention do not significantly overlap among these fields, there is a common theme of adaptive control of limited resources. In this work, we review the concept and variants of attention in artificial neural networks (ANNs). We also discuss the origin of attention from the neuroscience point of view parallel to that of ANNs. Instead of having seemingly disconnected dialogues between varied disciplines, we suggest grounding the ideas in common conceptual frameworks for a systematic analysis of attention and towards a possible unification of ideas in AI and neuroscience.
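The "adaptive control of limited resources" theme maps directly onto attention in ANNs, where softmax weights form a fixed budget that each query distributes over keys. A minimal NumPy sketch of standard scaled dot-product attention (a generic formulation, not code from the reviewed paper):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    """Each query spreads a unit 'budget' of focus over the keys."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity
    weights = softmax(scores)         # limited resource: each row sums to 1
    return weights @ V, weights
```

The row-stochastic weight matrix is the formal counterpart of the limited-resource constraint the review identifies across disciplines.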
We describe a new introductory CS curriculum initiative that uses analogies and active engagement to develop students' conceptual understanding before applying the concepts to programming. We believe that traditional coding approaches to introducing computer science concepts rely on students to build their own conceptual understanding, rather than grounding their understanding of concepts in what they know from everyday experiences. Using constructivism as a foundation for this curriculum initiative, our approach builds a framework for student understanding anchored in the physical world, using simple games and stories to stimulate mental engagement through embodied learning. For example, we teach the concepts of abstraction and representation by presenting the game of Tic-Tac-Toe as an island divided into nine regions, the middle of which cannot be reached by boat (the way the two teams arrive at the island). After playing the game once and realizing it is really just Tic-Tac-Toe, the students understand that the example is a representation with modified rules and game pieces. Then we talk about how the set of rules for a simple game like Tic-Tac-Toe is an algorithm with instructions for how to play the game, and we use playing the game to explain computation as the execution of an algorithm. Based on observations using analogies and active engagement in 6th grade classrooms, we provide many examples explaining how this curriculum initiative is an engaging, effective, and flexible approach for introducing CS concepts.
Developmental autonomous behavior refers to the general ability of a machine to acquire new skills and behavior from its birth to maturity on its own without human intervention. This article describes the principles of behavior development in machines, providing a practical framework to analyze and synthesize machines with developmental capabilities. Inspired by biological views of behavioral causation, the work emphasizes principled explanations to, not only the “how” question on mechanisms but also the “why” question on causation of behavior development. This ethology-oriented perspective offers a renewed opportunity to construct a theoretical framework from the ground up, overcoming the age-old problems of intrinsic motivation and symbol emergence in autonomous machines. One of the key contributions of this article is the logical explanation of why and how value systems drive successive development of memory functions, resulting in progressive changes in behavior from innate reflexive to episodic, procedural, and autonomic behavior. Another notable contribution is the logical and plausible explanation of why and how a physical sensorimotor system becomes a symbol processor, fostering conceptual and social behavior development. This article provides an extensive review of prior research, followed by detailed descriptions of the causality and mechanisms of behavior development, and concludes with discussions on criticism, future work, ethics, and system architecture.
Understanding how the human brain generates, utilizes, and adapts to technology is one of our most urgent scientific questions today. Recent advances in cognitive neuroscience reveal a complex neurocognitive structure that underpins human interaction with technology. Here, we propose an integrated framework that considers the interplay of causal reasoning, semantic cognition, visuospatial skills, sensorimotor knowledge, and social learning in shaping our technological abilities. Drawing on neuroimaging, lesion studies, and evolutionary evidence, we identify key brain regions that act as specialized processors and integrative hubs within a distributed network supporting 'technological cognition.' We argue that different categories of technologies - mechanical versus digital - activate separate neural subsystems, reflecting their diverse cognitive demands. Ultimately, we situate technological cognition within the broader concepts of embodied cognition and extended mind theories, suggesting that technology can expand human mental capacities and actively influence the structure and functioning of the mind itself. This framework advocates for an interdisciplinary approach to deepen our understanding of how technology influences and integrates with human cognition.
Artificial intelligence advances have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate desire to perceive social cues in the environment, as well as the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human-robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools including mobile neuroimaging to explore long-term, embodied human-robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale using sensitive and appropriate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.
No abstract available
This study investigates the neuro-ecological dynamics of digital second language acquisition (SLA), focusing on the interaction between learners’ cognitive processes and technology-mediated environments. Drawing on cognitive neuroscience, ecological psychology, and SLA research, we conceptualize digital language learning as a co-adaptive system, where neural mechanisms and environmental affordances continuously influence one another. Through a synthesis of neuroimaging, behavioral, and ecological data, we show that multimodal digital tools such as immersive simulations, gesture-based interactions, and adaptive feedback enhance neuroplasticity and support embodied cognition. The findings highlight that digital SLA extends beyond traditional cognitive processes, engaging sensorimotor, attentional, and emotional networks in dynamic coupling with the digital learning environment. Based on these insights, the paper proposes design principles for neuro-ecologically informed digital learning tools and identifies avenues for future research integrating neurocognitive, ecological, and learning analytics approaches. These results have implications for developing more effective, learner-centered digital language environments that align with the brain’s adaptive capacities.
Embodied Space in Natural and Virtual Environments: Implications for Cognitive Neuroscience Research
No abstract available
Neurocognitive research is pertinent to developing mechanistic models of how humans generate creative thoughts. Such models usually overlook the role of the motor cortex in creative thinking. The framework of embodied or grounded cognition suggests that creative thoughts (e.g. using a shoe as a hammer, improvising a piano solo) are partially served by simulations of motor activity associated with tools and their use. The major hypothesis stemming from the embodied or grounded account is that, while the motor system is used to execute actions, simulations within this system also support higher-order cognition, creativity included. That is, the cognitive process of generating creative output, not just executing it, is deeply embedded in motor processes. Here, we highlight a collection of neuroimaging research that implicates the motor system in generating creative thoughts, including some evidence for its functionally necessary role in generating creative output. Specifically, the grounded or embodied framework suggests that generating creative output may, in part, rely on motor simulations of possible actions, and that these simulations may be partially implemented in the motor regions themselves. In such cases, action simulations (i.e., reactivating or re-using the motor system) do not result in overt action but instead are used to support higher-order cognitive goals like generating creative uses or improvising.
The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network and whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli, as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body and in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit—an approximation of PPS—while recording continuous high-density EEG. We question whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary—an increase that is largest under multisensory (i.e., audiovisual) conditions. 
Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when neural dynamics are examined in more detail, changes in the reliability of evoked potentials in centroparietal electrodes are predictive, on a subject-by-subject basis, of the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
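Global field power, the measure driving the main result above, is conventionally computed as the spatial standard deviation across all electrodes at each time point. A minimal sketch (the channel count, sample count, and random data are placeholders, not the study's recordings):

```python
import numpy as np

def global_field_power(eeg: np.ndarray) -> np.ndarray:
    """Global field power: spatial standard deviation across electrodes
    at each time point. Input shape: (channels, samples)."""
    return eeg.std(axis=0)

# Hypothetical high-density recording: 64 channels, 500 samples.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((64, 500))
gfp = global_field_power(eeg)
```

A larger GFP at the PPS boundary, as the study reports, means a stronger overall response across the whole electrode montage rather than at any single site.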
No abstract available
No abstract available
As augmented reality (AR) technology becomes more prevalent in marketing, its impact on consumer memory processes, particularly brand recall, remains underexplored. This study primarily employs an empirical research method, using two lab experiments to examine the interaction between product presentation formats (AR vs. non-AR) and presentation strategies (separate vs. collocation) in brand recall. Across two experiments, we showed that AR enhances brand recall only in collocation presentations, where multiple products are displayed together, but not in separate presentations of individual products. In single-product contexts, AR formats do not demonstrate a significant advantage over non-AR formats. These findings suggest that AR’s effectiveness is contingent on presentation strategy, highlighting the contextual boundaries of AR’s utility in influencing consumer memory. By integrating embodied cognition theory with associative network theory, this research advances our understanding of how immersive technologies shape brand recall, offering strategic insights for marketers seeking to leverage AR in diverse product presentation scenarios.
Embodied Cognition (EC) encompasses the notion that the body shapes cognition. Fundamental to this, are the ideas of image-schemas and conceptualisation. These concepts posit that abstract thought is a sort of metaphor grounded in some phenomenological body-environment interaction. The intuitive environment that we exist in can be described using Euclidean geometry. Recently, Virtual Reality (VR) has been used to successfully simulate non-Euclidean environments. These are geometric spaces that are non-intuitive, and are counter to the Euclidean space that we perceive in our day-to-day life. Hence, building on the idea that Human Computer Interaction (HCI) can be used to experimentally test ideas in philosophy of mind, we look to explore how image-schemas and conceptualisation apply to immersive, non-Euclidean environments, simulated using VR.
This study explores the application of embodied cognition theory in the immersive experience design of archaeological site museums in China, aiming to enhance visitors' understanding of history and culture through digital technologies. Embodied cognition emphasizes the interaction between the body, senses, and environment, strengthening immersion and interactivity. The research analyzes the design of multi-sensory immersive spaces and the use of digital technologies and, through the four dimensions of embodied cognition (cognitive, body, environment, and integration), reveals how immersive design promotes cultural identity, engagement, and emotional resonance. However, the sample size was limited; future research should expand the scope, integrate diverse data, and explore the convergence of traditional and digital technologies as well as the application of emerging technologies.
The merged groups construct a complete research system for embodied learning that runs from theoretical foundations through a technical middle layer to application endpoints. First, the theory group consolidates the foundations of 4E cognition and neural mechanisms. Next, the XR technology group and the embodied intelligence group provide technical implementation paths along two dimensions: environment construction and intelligent interaction. The gesture and movement group dissects the micro-level process by which bodily experience is transformed into cognitive ability. Finally, the interdisciplinary applications group demonstrates the broad social value of embodied learning in educational equity, medical rehabilitation, and cultural heritage. The overall trend shows embodied learning evolving from a single psychological concept into an interdisciplinary educational paradigm that integrates AI, XR, and multimodal interaction.