Research on Intelligent Cockpit HMI Design
Multimodal Fusion and Natural Interaction Technologies
This group of papers examines fusion strategies across modalities including voice (including LLM-based), gesture, touch (including predictive and force-feedback variants), audio, foot-based sensing, and XR (AR/VR), aiming to improve the naturalness, efficiency, and immersion of cockpit interaction through redundancy elimination and coordinated design.
- Developing a Multimodal HMI Design Framework for Automotive Wellness in Autonomous Vehicles(Yaqi Zheng, X. Ren, 2022, Multimodal Technol. Interact.)
- Enhancing Interactions for In-Car Voice User Interface with Gestural Input on the Steering Wheel(Zhitong Cui, Hebo Gong, Yanan Wang, Chengyi Shen, Wenyin Zou, Shijian Luo, 2021, 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Application of Multi-Modal Interaction Based on Augmented Reality to Enhance the Environment Perception and Cabin Experience of Intelligent Car(Gang Wu, 2025, Proceedings of the 2nd International Conference on Engineering Management, Information Technology and Intelligence)
- Research on Intelligent Vehicle Cockpit Design Based on Multimodal Human-Computer Interaction Technology(Feng Xin, Ge Zhang, Yubo Huang, 2024, Proceedings of the 2024 International Conference on Digital Society and Artificial Intelligence)
- Real-time hand posture and gesture-based touchless automotive user interface using deep learning(V. John, Makoto Umetsu, Ali Boyali, Seiichi Mita, Masayuki Imanishi, Norio Sanma, S. Shibata, 2017, 2017 IEEE Intelligent Vehicles Symposium (IV))
- Intelligent Cockpit Application Based on Artificial Intelligence Voice Interaction System(Desen Qu, 2024, Comput. Informatics)
- Next-Generation Digital Twin UX: IoT-Driven Smart and Interactive Design(Anwar ALI SATHIO, Muhammad MALOOK RIND, Sameer Ali, 2025, Journal on Informatics Visualization and Social Computing)
- Touchless Selection Schemes for Intelligent Automotive User Interfaces With Predictive Mid-Air Touch(B. I. Ahmad, Chrisminder Hare, Harpreet Singh, A. Shabani, Briana Lindsay, L. Skrypchuk, P. Langdon, S. Godsill, 2019, Int. J. Mob. Hum. Comput. Interact.)
- Gesture Control for HMI and Usability(A. Tsagaris, 2025, 2025 25th International Conference on Control, Automation and Systems (ICCAS))
- Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User Interfaces(Kyle Harrington, D. Large, G. Burnett, Orestis Georgiou, 2018, Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Comparing Electrostatic and Vibrotactile Feedback for In-Car Touchscreen Interaction using common User Interface Controls(A. Farooq, R. Raisamo, Jani Lylykangas, O. Špakov, Veikko Surakka, 2023, Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy)
- Where's My Button? Evaluating the User Experience of Surface Haptics in Featureless Automotive User Interfaces(S. Breitschaft, A. Pastukhov, Claude Carbón, 2021, IEEE Transactions on Haptics)
- 31‐4: Sharp Force Touch for On‐Screen User Interface in LCD and Foldable OLED Display Application(Takuma Yamamoto, T. Maruyama, Kazutoshi Kida, Shinji Yamagishi, Yasuhiro Sugita, H. Fukushima, Mikihiro Noma, 2023, SID Symposium Digest of Technical Papers)
- MinMo: A Multimodal Large Language Model for Seamless Voice Interaction(Qian Chen, Yafeng Chen, Yanni Chen, Mengzhe Chen, Yingda Chen, Chong Deng, Zhihao Du, Ruize Gao, Changfeng Gao, Zhifu Gao, Yabin Li, Xiang Lv, Jiaqing Liu, Haoneng Luo, Bin Ma, Chongjia Ni, Xiangling Shi, Jialong Tang, Hui Wang, Hao Wang, Wen Wang, Yuxuan Wang, Yun-Xin Xu, F. Yu, Zhijie Yan, Yexin Yang, Baosong Yang, Xiangyun Yang, Guan Yang, Tianyu Zhao, Qingling Zhang, Shiliang Zhang, Nan Zhao, Pei Zhang, Chong Zhang, Jin-An Zhou, 2025, ArXiv)
- Integrating AI into Multimodal Automotive Design: A Conceptual Framework for User Experience Evaluation and Market Application(Liyuan Mu, Sharfika Raine, Zhehan Shi, 2026, Journal of Digitainability, Realism & Mastery (DREAM))
- MH-Pen: A Pen-Type Multi-Mode Haptic Interface for Touch Screens Interaction(Dapeng Chen, Aiguo Song, Lei Tian, Yuqing Yu, Lifeng Zhu, 2018, IEEE Transactions on Haptics)
- SiAM - Situation-Adaptive Multimodal Interaction for Innovative Mobility Concepts of the Future(Monika Mitrevska, M. Moniri, Robert Neßelrath, Tim Schwartz, Michael Feld, Yannick Körber, Matthieu Deru, Christian A. Müller, 2015, 2015 International Conference on Intelligent Environments)
- A Lightweight Dynamic Gesture Recognition Network for Advanced Driver Assistance Systems(Chen Sang, Sihan Gao, Xingwang Zhang, Haixin Zhang, Zhekang Dong, Yi Chen, Junfan Wang, 2026, IEEE Internet of Things Journal)
- Evaluating secondary input devices to support an automotive touchscreen HMI: A cross-cultural simulator study conducted in the UK and China.(D. Large, G. Burnett, E. Crundall, Glyn Lawson, L. Skrypchuk, Alex Mouzakitis, 2019, Applied ergonomics)
- Embedded Large Language Models for Enhanced Human-Machine Interface in Autonomous Vehicles(Sandhya Devi R S, S. D. Varshni, 2025, 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI))
- Immersive Audio HMI to Improve Situational Awareness(Alexander van Laack, Axel Torschmied, Gert-Dieter Tuzar, 2015, No journal)
- HMD-based virtual multi-screen control system and its gesture interface(S. Yoon, Hanjoo Cho, S. Cho, Young Hwan Kim, 2017, 2017 International SoC Design Conference (ISOCC))
- Towards spatial computing: recent advances in multimodal natural interaction for Extended Reality headsets(Zhimin Wang, M. Rao, Shanghua Ye, Weitao Song, Feng Lu, 2025, Frontiers of Computer Science)
- Mixed Reality-Based Platform for Smart Cockpit Design and User Study for Self-driving Vehicles(Xiaohua Sun, Shiyu Wu, Shengchen Zhang, Hanlin Wang, 2019, No journal)
- A Study of Several Types of “Interaction” of Man-Machine Interface with a Multi-screen View(Quanyi Zhao, Chen Xin, Zhenzhen Wei, 2010, 2010 Second International Conference on Intelligent Human-Machine Systems and Cybernetics)
- Cross-Reality UX Design Framework for AR/VR-Based Digital Twin Environments(E. Hong, 2025, Korea Institute of Design Research Society)
- Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback(Stefan Auer, Christoph Anthes, H. Reiterer, Hans-Christian Jetter, 2023, Proceedings of the ACM on Human-Computer Interaction)
- Design and Research of Flexible Bench for Intelligent Cockpit Human-computer Interaction for Digital Product Development and Verification(Lei Wang, Baocheng Zheng, Youjun Pang, 2022, Proceedings of the 3rd Asia-Pacific Conference on Image Processing, Electronics and Computers)
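Decision-level (late) fusion with redundancy elimination, as discussed in several of the papers above, can be sketched minimally as confidence-weighted voting over per-modality intent hypotheses. The modality names, weights, and intents below are purely illustrative assumptions, not taken from any of the cited systems:

```python
from dataclasses import dataclass

@dataclass
class ModalityHypothesis:
    modality: str      # e.g. "voice", "gesture", "touch"
    intent: str        # recognized user intent
    confidence: float  # recognizer confidence in [0, 1]

def fuse_intents(hypotheses, weights):
    """Confidence-weighted late fusion: sum weighted confidences per
    intent across modalities and return the best-scoring intent, so
    redundant hypotheses that agree reinforce each other."""
    scores = {}
    for h in hypotheses:
        w = weights.get(h.modality, 1.0)
        scores[h.intent] = scores.get(h.intent, 0.0) + w * h.confidence
    return max(scores, key=scores.get)

# Voice and gesture redundantly point at "open_window"; touch disagrees.
hyps = [
    ModalityHypothesis("voice", "open_window", 0.7),
    ModalityHypothesis("gesture", "open_window", 0.5),
    ModalityHypothesis("touch", "volume_up", 0.9),
]
print(fuse_intents(hyps, {"voice": 1.0, "gesture": 0.8, "touch": 1.0}))
# -> open_window  (0.7 + 0.4 = 1.1 beats 0.9)
```

Real systems would add temporal alignment between modalities and context gating; this sketch only shows the arbitration step.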
Emotion Sensing, Psychological-State Monitoring, and Proactive Empathetic Interaction
This group focuses on recognizing driver and passenger emotions (anger, stress, fatigue) and monitoring physiological state, and examines deep-learning-based emotion-regulation mechanisms and proactive intervention, advancing the cockpit from a passive tool toward an intelligent space with empathetic capability.
- Research on the Emotional Recognition and Interactive Influence Mechanism of the Main and Co-Pilots(Qiuyue Wang, 2025, Proceedings of the 2nd International Conference on Engineering Management, Information Technology and Intelligence)
- Human-Ai Cooperative Driving Through Emotion-Aware Decision Making and Driver Personalization(L.H.N. Y. Wickramasuriya, D. Rathnayaka, J. Walpalage, S. Rathnayake, 2025, 2025 7th International Conference on Advancements in Computing (ICAC))
- Driver's Personal Emotion Recognition for Intelligent Cockpit of New Energy Vehicles(Yuhong Chen, 2024, Computer Fraud and Security)
- Intelligent Cockpit for Intelligent Vehicle in Metaverse: A Case Study of Empathetic Auditory Regulation of Human Emotion(Wenbo Li, Lei Wu, Cong Wang, Jiyong Xue, Wensi S. Hu, Shen Li, Gang Guo, Dongpu Cao, 2023, IEEE Transactions on Systems, Man, and Cybernetics: Systems)
- Driver Behavior Analysis and Warning System for Digital Cockpit Based on Driving Data(Jin-Kyu Choi, Young-Jin Kwon, Kyong-ho Kim, J. Jeon, Byungtae Jang, 2019, 2019 International Conference on Information and Communication Technology Convergence (ICTC))
- Research on HMI Interaction Design Scheme Based on Multimodal(Jingyi Cui, Ying Wang, H. Xiao, 2024, 2024 International Conference on Electronics and Devices, Computational Science (ICEDCS))
- Research Progress on Deep Learning-Based Emotion Recognition and State Monitoring in Intelligent Cockpits(Pengyu Chen, Wenzhu Yang, Ziwen Wang, 2025, Science and Technology of Engineering, Chemistry and Environmental Protection)
- AI-enabled intelligent cockpit proactive affective interaction: middle-level feature fusion dual-branch deep learning network for driver emotion recognition(Yingzhang Wu, Wenbo Li, Yujing Liu, Guanzhong Zeng, Chengmou Li, Hua-Min Jin, Shen Li, Gang Guo, 2024, Advances in Manufacturing)
- MDEmoNet: A Multimodal Driver Emotion Recognition Network for Smart Cockpit(Chenhao Hu, Shenyu Gu, Mengjie Yang, Gang Han, Chun Sing Lai, Mingyu Gao, Zhexun Yang, Guojin Ma, 2024, 2024 IEEE International Conference on Consumer Electronics (ICCE))
- Multimodal human interaction analysis in vehicle cockpit(Quentin Portes, J. Pinquier, F. Lerasle, José Mendès Carvalho, 2021, 2021 IEEE International Intelligent Transportation Systems Conference (ITSC))
- A complete in‐cabin monitoring framework for autonomous vehicles in public transportation(Dimitris Tsiktsiris, Antonios Lalas, M. Dasygenis, K. Votis, 2025, IET Intelligent Transport Systems)
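A proactive empathetic cockpit of the kind this group describes ultimately maps recognized emotional states to regulation actions. The following is a deliberately simplified, hypothetical policy sketch (the emotion labels, actions, and confidence threshold are all assumptions, not any cited system's design), gated on recognizer confidence to limit false interventions:

```python
from typing import Optional

# Hypothetical intervention table: recognized state -> cabin-side action.
INTERVENTIONS = {
    "anger":   "play calming audio and soften ambient lighting",
    "stress":  "simplify the HMI layout and defer notifications",
    "fatigue": "issue a rest reminder and increase ventilation",
}

def proactive_response(emotion: str, confidence: float,
                       threshold: float = 0.6) -> Optional[str]:
    """Trigger an empathetic intervention only when the recognizer is
    sufficiently confident; otherwise stay passive to avoid annoying
    false alarms. Unknown emotions also yield no action."""
    if confidence >= threshold:
        return INTERVENTIONS.get(emotion)
    return None
```

For example, `proactive_response("fatigue", 0.8)` returns the rest-reminder action, while a low-confidence `("anger", 0.4)` detection is ignored.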
Automated-Driving Cooperation, Takeover Trust, and Driving-Safety Enhancement
This group studies HMI challenges in L3-and-above automated driving, focusing on takeover behavior, distraction management, warning-system design (P2V/CV), establishing human-machine trust, and the role of virtual avatars in cooperative driving, while balancing information richness against safety.
- Optimization of human-machine interface for fatigue driving problem(Jin Wei, Zhen Sun, Hanyu Chen, 2024, Applied and Computational Engineering)
- The effect of human-machine interface modality, specificity, and timing on driver performance and behavior while using vehicle automation.(Meng Wang, Jah'inaya Parker, Nicholas Wong, Shashank Mehrotra, Shannon C. Roberts, Woon Kim, Alicia Romo, W. Horrey, 2024, Accident; analysis and prevention)
- In-Vehicle Human Machine Interface: Investigating the Effects of Tactile Displays on Information Presentation in Automated Vehicles(Kimberly D. Martinez, Gaojian Huang, 2022, IEEE Access)
- The Impact of Transparency on Driver Trust and Reliance in Highly Automated Driving: Presenting Appropriate Transparency in Automotive HMI(Jue Li, Jiawen Liu, Xiaoshan Wang, Long Liu, 2024, Applied Sciences)
- Quantitative driving safety assessment using interaction design benchmarking(A. Gaffar, Shokoufeh Monjezi Kouchak, 2017, 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI))
- Design of a High-Level Guidance User Interface for Teleoperation of Autonomous Vehicles(Felix Tener, J. Lanir, 2023, Adjunct Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Parallel Orientation Assistant, a Vehicle System Based on Voice Interaction and Multi-screen Interaction(Nan Jiang, Zhiyong Fu, 2019, No journal)
- The impact of drowsiness on in-vehicle human-machine interaction with head-up and head-down displays(David Grogna, Kristina Stojmenova, G. Jakus, Miguel Barreda-Ángeles, J. Verly, J. Sodnik, 2018, Multimedia Tools and Applications)
- Investigating the Effects of Pedestrian-to-Vehicle Human–Machine Interface Design Using Driving Simulator Experiment(M. Abdel-Aty, Lishengsa Yue, Yina Wu, Ou Zheng, 2022, Transportation Research Record)
- Graphical User Interface for intelligent automotive with vehicle to vehicle communication and adaptive light controls using image processing and machine learning(R. Jose, J. Mathew, G. Devadhas, Mary Synthia Regis Prabha D M, Shinu Mm, Dhanoj M, 2022, 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT))
- Using Sound to Reduce Visual Distraction from In-vehicle Human–Machine Interfaces(P. Larsson, M. Niemand, 2015, Traffic Injury Prevention)
- Distraction of Connected Vehicle Human–Machine Interface for Truck Drivers(Guangchuan Yang, Mohamed M. Ahmed, Biraj Subedi, 2020, Transportation Research Record)
- Driver Monitoring-Based Lane-Change Prediction: A Personalized Federated Learning Framework(Runjia Du, Kyungtae Han, Rohit Gupta, Sikai Chen, S. Labi, Ziran Wang, 2023, 2023 IEEE Intelligent Vehicles Symposium (IV))
- The Impact of Content Temporality and Modality in Automotive User Interface on Trust and Comfort(Nadia Fereydooni, Sidney T. Scott-Sharoni, Bruce N. Walker, John K. Lenneman, Benjamin P. Austin, Takeshi Yoshida, 2023, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- The Effects of a Predictive HMI and Different Transition Frequencies on Acceptance, Workload, Usability, and Gaze Behavior during Urban Automated Driving(T. Hecht, S. Kratzert, K. Bengler, 2020, Inf.)
- Conceptual Design of Virtual Avatars Based on Intelligent Human-Machine Interface in Autonomous Driving(Runting Tang, Jianmin Wang, 2024, 2024 5th International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI))
- From HMI to HRI: Human-Vehicle Interaction Design for Smart Cockpit(Xiaohua Sun, Honggao Chen, Jintian Shi, Weiwei Guo, Jingcheng Li, 2018, No journal)
- Towards Virtualization Concepts for Novel Automotive HMI Systems(Simon Gansel, Stephan Schnitzer, Frank Dürr, K. Rothermel, Christian Maihöfer, 2013, No journal)
- Assessment of Drivers’ Perceptions of Connected Vehicle–Human Machine Interface for Driving Under Adverse Weather Conditions: Preliminary Findings From Wyoming(Mohamed M. Ahmed, Guangchuan Yang, Sherif M. Gaweesh, 2020, Frontiers in Psychology)
- Effect of Human–Machine Interface of a Vehicle on Right-Turn Maneuver at Intersections using a Driving Simulator(Yuta Kusakari, S. Oikawa, Yasuhiro Matsui, N. Kubota, 2021, 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC))
- Frequently Used Vehicle Controls While Driving: A Real-World Driving Study Assessing Internal Human–Machine Interface Task Frequencies and Influencing Factors(Ilse M. Harms, D. Auerbach, Eleonora Papadimitriou, Marjan Hagenzieker, 2025, Applied Sciences)
- 7th Workshop "Automotive HMI": Safety meets User Experience (UX)(A. Riener, Stefan Geisler, Alexander van Laack, Anna-Katharina Frison, Henrik Detjen, Bastian Pfleging, 2018, No journal)
- Research on interactive design strategy of home intelligent cockpit in nonlinear driving scenarios(Fang You, Yuqing Jiang, Yuchen Wang, Siqi Pan, Qianwen Fu, 2025, Traffic Injury Prevention)
- Theater-system Technique and Model-based Attention Prediction for the Early Automotive HMI Design Evaluation(S. Feuerstack, Bertram Wortelen, C. Kettwich, Anna Schieben, 2016, Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
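Much of the takeover work above revolves around whether the remaining time budget covers driver reaction. A toy feasibility check under assumed parameters (the reaction time, safety margin, and distraction penalty are illustrative values, not figures from the cited studies) might look like:

```python
def takeover_budget_ok(speed_mps: float, distance_m: float,
                       base_reaction_s: float = 1.5,
                       distraction_penalty_s: float = 0.0,
                       margin_s: float = 2.0) -> bool:
    """Illustrative check: does the time until the automation limit is
    reached leave enough budget for a safe takeover?
    time budget = distance / speed; required time = base reaction time,
    inflated when the driver monitor flags distraction, plus a margin."""
    time_budget_s = distance_m / speed_mps
    required_s = base_reaction_s + distraction_penalty_s + margin_s
    return time_budget_s >= required_s

# 25 m/s (90 km/h) with 150 m to the limit gives a 6 s budget:
# fine for an attentive driver, too tight with a 3 s distraction penalty.
print(takeover_budget_ok(25.0, 150.0))                             # -> True
print(takeover_budget_ok(25.0, 150.0, distraction_penalty_s=3.0))  # -> False
```

A real takeover-request scheduler would of course use probabilistic driver-state models rather than a fixed penalty; the sketch only makes the budget arithmetic explicit.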
User-Centered Design (UCD) Methodology and Quantitative UX Evaluation
This group works toward evaluation frameworks for the intelligent cockpit, combining fuzzy AHP, the Kano model, and eye-tracking/EEG metrics, and studies participatory design methodologies for specific populations (older adults, visually impaired users) and cross-cultural markets.
- User Experience and Fuzzy Evaluation for Intelligent Cockpit(Peiyao Wang, Xuhua Shi, Hongzan Xu, Gaoran Zhang, Xiaojun Tang, 2024, 2024 5th International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI))
- Multi-modal Human-machine Interaction Evaluation of Intelligent Cockpit Based on Ergonomics(Quan Yuan, Feixiang Tian, Hongzhuan Zhao, Qingchao Liu, Yijie Tang, Zhong-Lin Lu, Bingxin Chen, Tao Wang, 2025, 2025 8th International Conference on Transportation Information and Safety (ICTIS))
- Research on Quantitative Evaluation Technology for Intelligent Cabin(Sheng Zhou, Wei Zhou, Xin Geng, Lu Liu, Xuan Dong, 2023, 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C))
- Development and Evaluation Study of Intelligent Cockpit in the Age of Large Models(Jun Ma, Meng Wang, Jinhui Pang, Haofen Wang, Xuejing Feng, Zhipeng Hu, Zhenyu Yang, Ming Guo, Zhenmin Liu, Junwei Wang, Siyi Lu, Zhiming Gou, 2024, ArXiv)
- Investigating the Effects of Human–Machine Interface on Cooperative Driving Using a Multi-Driver Co-Simulation Platform(Zijin Wang, M. Abdel-Aty, Lishengsa Yue, Jiahao Zhu, Ou Zheng, Mohamed H. Zaki, 2024, IEEE Transactions on Intelligent Vehicles)
- User-Centered Challenges and Strategic Opportunities in Automotive UX: A Mixed-Methods Analysis of User-Generated Content(Tobias Mohr, Christian Winkler, 2025, Applied Sciences)
- Involving users in Automotive HMI design: Design evaluation of an interactive simulation based on participatory design(Duc Hai Le, Klas Ihme, F. Köster, 2023, Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy)
- Using Personas with Visual Impairments to Explore the Design of an Accessible Self-Driving Vehicle Human-Machine Interface(Julian Brinkley, 2021, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- Participatory Design in the Classroom: Exploring the Design of an Autonomous Vehicle Human-Machine Interface with a Visually Impaired Co-Designer(Earl W. Huff, Kathryn M. Lucaites, Aminah Roberts, Julian Brinkley, 2020, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- Automotive HMI design and participatory user involvement: review and perspectives(M. François, F. Osiurak, Alexandra Fort, Philippe Crave, Jordan Navarro, 2017, Ergonomics)
- Steering UX Education: Designing an Automotive UX Course(James Rampton, L. Robert, Myounghoon Jeon, Manhua Wang, Gayoung Ban, Ankit R. Patel, Dave Miller, 2024, Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Combined ergonomics evaluation based on touch HMI of airborne-mission-system(Guo-Jie Qin, Xu Xiao, Wei Qiao, 2023, 2023 International Conference on Computer Applications Technology (CCAT))
- An automotive human-machine interface design method integrating Fuzzy Kano-QFD and physiological data.(Xinhao Sun, Xiurong Guo, Yanlin Zhang, D. Du, 2025, Ergonomics)
- Exploring Usability Evaluation Method of Interconnection Task Between Extended Equipment and Intelligent Cockpit(Wenlong Yu, Xinyi Li, Xinyuan Li, Chengmou Li, Jin Xie, Gang Guo, Wenbo Li, 2024, 2024 8th CAA International Conference on Vehicular Control and Intelligence (CVCI))
- Investigating emotional design of the intelligent cockpit based on visual sequence data and improved LSTM(Nanyi Wang, Di Shi, Zengrui Li, Pingting Chen, Xipei Ren, 2024, Adv. Eng. Informatics)
- A cognitive load assessment method for fighter cockpit human-machine interface based on integrated multi-criteria decision making(Huining Pei, Ziyu Wang, Jingru Cao, Yunfeng Chen, Zhonghang Bai, 2024, Appl. Soft Comput.)
- UX Evaluation of a Tractor Cabin Digital Twin Using Mixed Reality(Sara Cavallaro, E. Prati, Fabio Grandi, Giancarlo Mangia, M. Pellicciari, M. Peruzzini, 2022, No journal)
- Navigating from data-driven design to designing with ML: a case study of truck HMI system design(Yi Luo, Dimitrios Gkouskos, Nancy L. Russo, Minjuan Wang, 2024, Proceedings of the Design Society)
- How to Map Cultural Dimensions to Usability Criteria: Implications for the Design of an Automotive Human-Machine Interface(Denise Sogemeier, Yannick Forster, Frederik Naujoks, J. Krems, Andreas Keinath, 2022, Adjunct Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Analysis on Evaluation and Design of Intelligent Cockpit in Automobiles(Yanchen Fu, 2024, 2024 International Conference on Electrical Drives, Power Electronics & Engineering (EDPEE))
- An Integrated Assessment Model of Automobile Smart Cabin Comfort Based on Weight Optimization(Wenjun Liao, Xukang Liu, Jianjun Yang, Yikang Li, Xiangqun Liu, Jinghui Ma, Hongbo Shi, 2025, International Journal of Automotive Technology)
- IntelliCockpitBench: A Comprehensive Benchmark to Evaluate VLMs for Intelligent Cockpit(Liang Lin, Siyuan Chai, Jiahao Wu, Hongbing Hu, Xiaotao Gu, Hao Hu, Fan Zhang, Wei Wang, Dan Zhang, 2025, No journal)
- Co-creation Design Research of Intelligent Cockpit HMI Based on Robot Personality in Dangerous Driving Scenarios(Yuqing Jiang, Qianwen Fu, Siqi Pan, Yaoyun Huang, Fang You, 2025, No journal)
- Driving across Markets: An Analysis of a Human-Machine Interface in Different International Contexts(Denise Sogemeier, Yannick Forster, Frederik Naujoks, J. Krems, Andreas Keinath, 2024, Inf.)
- UX Principles for Modern UI/UX Design and Their Measurement: A Framework for Digital Product Excellence(Pratyush Tewari, 2025, Journal of Computer Science and Technology Studies)
- Eye Tracking Study on Visual Search Performance of Automotive Human-Machine Interface for Elderly Users(Songman Li, Song Hao, 2024, IEEE Access)
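Several of the evaluation papers above build on AHP-style pairwise comparison to weight UX criteria; the fuzzy variants extend the same idea with triangular fuzzy numbers. A minimal crisp-AHP sketch using the row geometric mean (the three criteria and the comparison values are invented for illustration):

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix: take each row's geometric mean, then normalize to sum 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criterion cockpit UX comparison:
# rows/columns = efficiency, safety, aesthetics.
# M[i][j] > 1 means criterion i is preferred over criterion j.
M = [
    [1.0, 1 / 3, 3.0],
    [3.0, 1.0,   5.0],
    [1 / 3, 1 / 5, 1.0],
]
w = ahp_weights(M)  # safety dominates, aesthetics weighs least
```

A full application would also compute the consistency ratio of `M` before trusting the weights; that step is omitted here for brevity.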
Adaptive Strategies, Context Awareness, and Personalization Architectures
This group examines the logic of dynamically adjusting interface content according to driving task, user preference, and environmental context, covering context-aware recommendation (the CARSI series), distributed system architectures, and personalization schemes aimed at global market trends.
- Conceptual Design of Driver-Adaptive Human-Machine Interface for Digital Cockpit(Jin-Kyu Choi, Young-Jin Kwon, J. Jeon, Kyong-ho Kim, Hyun-Kyun Choi, Byungtae Jang, 2018, 2018 International Conference on Information and Communication Technology Convergence (ICTC))
- An Adaptive and Personalized In-Vehicle Human-Machine-Interface for an Improved User Experience(Guillermo Reyes, 2020, Companion Proceedings of the 25th International Conference on Intelligent User Interfaces)
- AI-Driven Personalization to Support Human-AI Collaboration(Cristina Conati, 2024, Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems)
- Design and Performance Optimization of an Intelligent Cockpit System Based on OpenHarmony(Yan Ren, F. Huang, Wenjie Jiang, Ruier Luo, 2025, Frontiers in Computing and Intelligent Systems)
- Using Software Frameworks to Develop a Fully Functional Digital Cockpit(Adam Konopa, S. Shcherbakov, 2024, ATZelectronics worldwide)
- Driver-adaptive vehicle interaction system for the advanced digital cockpit(Jin-Kyu Choi, Kyong-ho Kim, Dohyun Kim, Hyunkyun Choi, Byungtae Jang, 2018, 2018 20th International Conference on Advanced Communication Technology (ICACT))
- Context-Aware Access Control in Novel Automotive HMI Systems(Simon Gansel, Stephan Schnitzer, Ahmad Gilbeau-Hammoud, V. Friesen, Frank Dürr, K. Rothermel, Christian Maihöfer, Ulrich Krämer, 2015, No journal)
- CARSI II: A Context-Driven Intelligent User Interface(Marco Wiedner, Sreerag V Naveenachandran, Philipp Hallgarten, Satiyabooshan Murugaboopathy, E. Frazzoli, 2024, Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- CARSI 3.0: A Context-Driven Intelligent User Interface(Marco Wiedner, Adrian Fatol, Andri Furrer, Leon Eisemann, E. Frazzoli, 2025, Adjunct Proceedings of the 17th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Conception, Development and First Evaluation of a Context-Adaptive User Interface for Commercial Vehicles(Lasse Schölkopf, Maria-Magdalena Wolf, Veronika Hutmann, Frank Diermeyer, 2021, 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Adaptive attention-based human machine interface system for teleoperation of industrial vehicle(Jouh Yeong Chew, M. Kawamoto, T. Okuma, E. Yoshida, Norihiko Kato, 2021, Scientific Reports)
- myCOMAND: Automotive HMI framework for personalization of web-based content collections(P. Fischer, A. Nürnberger, 2010, 2010 IEEE International Conference on Systems, Man and Cybernetics)
- myCOMAND Automotive User Interface: Personalized Interaction with Multimedia Content Based on Fuzzy Preference Modeling(P. Fischer, A. Nürnberger, 2010, No journal)
- Influence of Adaptive Human-Machine Interface on Electric-Vehicle Range-Anxiety Mitigation(Antonyo Musabini, Kevin Nguyen, Romain Rouyer, Yannis Lilis, 2020, Multimodal Technol. Interact.)
- How can Automotive User Interfaces Represent Kinetic Energy as a Resource?: An Interview Study with Hybrid Electric Vehicle Eco-Drivers(2018, Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- An Open Road Evaluation of a Self-Driving Vehicle Human–Machine Interface Designed for Visually Impaired Users(Julian Brinkley, Brianna B. Posadas, Imani N. Sherman, S. Daily, J. Gilbert, 2019, International Journal of Human–Computer Interaction)
- Fluid: Flexible User Interface Distribution for Ubiquitous Multi-Device Interaction(Sangeun Oh, Ahyeon Kim, Sunjae Lee, Kilho Lee, Dae R. Jeong, I. Shin, Steven Y. Ko, 2020, GetMobile: Mobile Computing and Communications)
- Beyond Car Human-Machine Interface (HMI): Mapping Six Intelligent Modes into Future Cockpit Scenarios(Shu-Yu Cui, Donghan Hou, Jiayue Li, Yuwei Liu, Zi Wang, Jiayu Zheng, X. Dou, Z. Feng, Yuxuan Gu, Minglan Li, S. Ni, Ziwei Ran, Bojuan Ren, Jingyi Sun, Shenming Wang, Xinyan Xiong, Guanzhuo Zhang, Wangjun Li, Jingpeng Jia, Xin Xin, 2023, No journal)
- The Circular Digital Cockpit: Towards an actionable framework for life cycle circularity assessment and decision(B. Yannou, Ghada Bouillass, M. Saidani, M. Jankovic, 2024, Procedia CIRP)
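Context-aware adaptation of the kind the CARSI line explores can be caricatured as a rule table from driving context to layout parameters. Everything below (the demand levels, widget counts, and scale factors) is a hypothetical sketch, not any cited system's policy:

```python
def adapt_layout(speed_kmh: float, automation_active: bool,
                 night: bool) -> dict:
    """Map a coarse driving context to UI layout parameters: as
    manual-driving demand rises, show fewer, larger targets and
    disable demanding secondary tasks."""
    # Demand: 0 = automation handles driving, 1 = manual low-speed,
    # 2 = manual high-speed (illustrative thresholds).
    demand = 0 if automation_active else (2 if speed_kmh > 80 else 1)
    return {
        "visible_widgets": {0: 8, 1: 5, 2: 3}[demand],
        "font_scale": 1.0 + 0.25 * demand,
        "theme": "dark" if night else "light",
        "media_browsing_enabled": demand < 2,
    }
```

For instance, a manual highway drive at night (`adapt_layout(120, False, True)`) yields three large-font dark-theme widgets with media browsing locked out, while an automated daytime ride restores the full eight-widget layout.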
Underlying System Engineering, Architecture Optimization, and Automated Testing
This group focuses on HMI engineering implementation and testing efficiency, covering distributed OSes (OpenHarmony), hypervisor-based virtualized rendering, model-based design tools, plugin-based navigation architectures, and robot-driven automated touch and voice testing.
- Rapid Prototyping of Automotive HMI Systems Utilizing Vector CANape and Mathworks Simulink(Roger Arnold Trombley, N. Rolfes, John Shutko, 2012, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- A Demonstration of AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies(Pascal Jansen, Julian Britten, Alexander Häusele, Thilo Segschneider, Mark Colley, Enrico Rukzio, 2023, Adjunct Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies(Pascal Jansen, Julian Britten, Alexander Häusele, Thilo Segschneider, Mark Colley, E. Rukzio, 2023, Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems)
- An Intelligent Cockpit System HMI Engine Based on COMO(S. Liu, Xilong Pei, Jiali Wang, Jing-song Huang, Jianmin Wang, Ning Wang, 2022, Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering)
- Intelligent cockpit touch testing system based on multimodal large models: design and validation(Xiangtian Kuang, Fan Yang, Dongpo Xie, Liang Tang, Jing Peng, Ying Zhang, Meiling Wang, Xiaolong Li, Cansong Gu, Xiaojun Xia, Lei Ye, Lichang Fan, 2025, No journal)
- A Novel Plugin-Based Navigation Architecture for Multi-Brand, Multi-Screen Automotive Systems(Ronak Indrasinh Kosamia, 2025, International Scientific Journal of Engineering and Management)
- An intelligent cockpit voice testing system based on artificial intelligence technology(Han Zhou, Junxia Ma, 2025, Proceedings of the International Conference on Intelligent Control and Automation Applications)
- Smart Cockpit Scenario Service Data Encryption and Security Protection System Based on Voice Interaction(Guangxiu Zhang, Xiaojie Wang, Zhenyu Nie, Jie Yang, 2025, 2025 4th International Conference on Artificial Intelligence, Human-Computer Interaction and Robotics (AIHCIR))
- Driving Simulator Validation for In-Vehicle Human Machine Interface Assessment(Thomas McWilliams, Bruce Mehler, B. Seppelt, B. Reimer, 2019, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- Vehicle Human-Machine Interaction Interface Evaluation Method Based on Eye Movement and Finger Tracking Technology(Mengjin Zeng, Gang Guo, Qiuyang Tang, 2019, No journal)
- Development of an automotive user interface design knowledge system(Hao Tan, Yi Zhu, Jianghong Zhao, 2012, No journal)
- Implementation of model-based development tool and run-time engine for digital cockpit system(Changrak Yoon, Byoung-Jun Park, Dohyun Kim, 2017, 2017 International Conference on Information and Communication Technology Convergence (ICTC))
- EHMI: A Complexity Assessment Method for Automotive Intelligent Cockpit Human-Computer Interaction Interfaces: An Example from the Instrument Cluster(Zhenyu Wang, Fuchang Liu, Zenan Lu, Fusheng Jia, Jianmin Wang, 2025, No journal)
- Proposal for graphics sharing in a mixed criticality automotive digital cockpit(Milan Z. Manic, Milica Z. Ponos, M. Bjelica, D. Samardzija, 2020, 2020 IEEE International Conference on Consumer Electronics (ICCE))
- Kinematic modeling and simulation of a dual-arm robot for intelligent cockpit testing(Fan Yang, Xiangtian Kuang, Dongpo Xie, Liang Tang, Jing Peng, Ying Zhang, Meiling Wang, Xiaolong Li, Cansong Gu, Xiaojun Xia, Lei Ye, Lichang Fan, 2025, No journal)
- Research on Testing Methods for Intelligent In-Vehicle Audio System(Yueming Zhu, Zhe Tian, Xuwangda Ma, Qipeng Zhang, 2025, Scientific Journal of Intelligent Systems Research)
- Intelligent Fabric Enabled 6G Semantic Communication System for In-Cabin Scenarios(Yuan Tang, Ning Zhou, Qiao Yu, Di Wu, Chong Hou, Guangming Tao, Min Chen, 2023, IEEE Transactions on Intelligent Transportation Systems)
- Efficient compositing strategies for automotive HMI systems(Simon Gansel, Stephan Schnitzer, Riccardo Cecolin, Frank Dürr, K. Rothermel, Christian Maihöfer, 2015, 10th IEEE International Symposium on Industrial Embedded Systems (SIES))
- A Roadmap for intelligent HVAC control in Vehicle Cabin(Mohamed Alkhadashi, A. Shaout, 2021, 2021 22nd International Arab Conference on Information Technology (ACIT))
- Research on the Associative Interactive Feedback Design for the Multi-chart Large Screen Interface(Sibei Yu, Jing Zhang, Zelei Pan, Zixin Yang, Mingle Chen, 2025, No journal)
- In-vehicle Human Machine Interface: An Approach to Enhance Eco-Driving Behaviors(P. Lena, S. Mirri, Catia Prandi, P. Salomoni, Giovanni Delnevo, 2017, Proceedings of the 2017 ACM Workshop on Interacting with Smart Objects)
- An access control concept for novel automotive HMI systems(Simon Gansel, Stephan Schnitzer, Ahmad Gilbeau-Hammoud, V. Friesen, Frank Dürr, K. Rothermel, Christian Maihöfer, 2014, No journal)
- ICEBOAT: An Interactive User Behavior Analysis Tool for Automotive User Interfaces(Patrick Ebel, Kim Julian Gülle, Christoph Lingenfelder, Andreas Vogelsang, 2022, Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology)
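Automated touch testing, whether robot-driven or purely virtual, reduces to scripting input events against an HMI state model and checking the screens reached. A minimal harness sketch with an invented three-screen transition table (all names are illustrative):

```python
# Hypothetical screen-transition model: (current screen, tapped target)
# -> next screen. Unknown taps leave the screen unchanged.
TRANSITIONS = {
    ("home", "nav_icon"): "navigation",
    ("home", "media_icon"): "media",
    ("navigation", "back"): "home",
}

class HmiModel:
    """Tiny stand-in for the system under test."""
    def __init__(self):
        self.screen = "home"

    def tap(self, target: str) -> str:
        self.screen = TRANSITIONS.get((self.screen, target), self.screen)
        return self.screen

def run_test_script(taps):
    """Replay a scripted tap sequence and record the screen after each
    event, so a test can assert the full observed trajectory."""
    hmi = HmiModel()
    return [hmi.tap(t) for t in taps]

print(run_test_script(["nav_icon", "back", "media_icon"]))
# -> ['navigation', 'home', 'media']
```

In a robot-based setup the `tap` call would drive an actuator and the screen state would come from camera-based recognition; the assertion structure stays the same.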
Visual Characteristics, Interface Aesthetics, and Physical Display Technologies
This group investigates the fine-grained visual presentation of HMI interfaces, including typeface legibility (Chinese/English), icon proportions, screen size, AR-HUD layout, display response latency, and the aesthetic application of novel electronic materials in vehicle interiors.
- Design of Touch Control Display Interface Based on Synoptic Page in Civil Aircraft Cockpit(Ruijie Fan, Chunling Zhao, Xianchao Ma, Hongyu Zhu, 2021, 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC))
- A 50.7-dB-DR Finger-Resistance Extracting Multi-Touch Sensor IC for Soft Classification of Fingers Contacted on 6.7-in Capacitive Touch Screen Panel(Tae-Gyun Song, Dong-Kyu Kim, Jeong-Hyun Cho, Ji-Hun Lee, Hyunsik Kim, 2021, IEEE Journal of Solid-State Circuits)
- Improving the Performance of In-vehicle Interaction: the Role of Wrist Support(Junfu Huang, Qiuyang Tang, Qiang Zhang, Lin Li, Qiang He, 2022, 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI))
- Sharp Force Touch for On-Screen User Interface in LCD and Foldable OLED Display Application(Takuma Yamamoto, T. Maruyama, Kazutoshi Kida, Shinji Yamagishi, Biregeya Jean de Dieu Mugiraneza, Yasuhiro Sugita, H. Fukushima, Mikihiro Noma, 2021, Proceedings of the International Display Workshops)
- Predictive Touch: A Novel HMI Technology for Intelligent Displays in Automotive(B. I. Ahmad, S. Godsill, Patrick Langdon, L. Skrypchuk, 2018, Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- An Intelligent Cockpit Tailored Carpet for Human‐Vehicle Interaction Enhancement and Driving Intention Recognition(Xiao Lu, Yifei Gong, Haodong Zhang, Haiqiu Tan, Qiwei Zheng, Liangchao Xu, Tao Jin, Long Li, Quan Zhang, Tao Yue, Shaorong Xie, 2024, Advanced Functional Materials)
- Assessing the impact of typeface design in a text-rich automotive user interface(B. Reimer, Bruce Mehler, Jonathan Dobres, J. Coughlin, Steve Matteson, David Gould, Nadine Chahine, Vladimir Levantovsky, 2014, Ergonomics)
- Optimizing In-Cabin Communication and User Experience in Future Vehicles through Dynamic Expressive Interior Lighting(Mohamed Abd El Ghani, Nadia Berthouze, Aneesha Singh, 2024, Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
- Effect of Display Response Time on Sense of Agency and Brain Activity During Human–machine Interface Device Operation(Michihisa Yamamoto, T. Tsumugiwa, R. Yokogawa, 2024, IECON 2024 - 50th Annual Conference of the IEEE Industrial Electronics Society)
- Ergonomic design and evaluation of human-machine interaction on the display and control interface of full ocean-deep manned submersible cockpit(Yang-yang Li, Cong Ye, Lu Shi, 2024, No journal)
- 7‐2: Integrated Cockpit Concept: Sunrise, A New Horizon of Integration(Impidjati, Eric Ping, 2025, SID Symposium Digest of Technical Papers)
- A Haptic Interface Concept for Highly Automated Vehicle Human–Machine Interfaces Based on a Mathematical Design Method(Xin Meng, Yu Zhao, Jun Lu, Ting Ye, Xin-Yu Ma, 2025, International Journal of Human–Computer Interaction)
- “Soft” Controls for Hard Displays: Still a Challenge(A. Degani, E. Palmer, Kristin G. Bauersfeld, 1992, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- A Study on the Systemization of UX Scenarios for the Application of Digital Healthcare Technology in Vehicles(Sara Hong, Min Chan Kim, Taehun Kim, Myungbin Choi, Soojin O. Peck, Ji Hyun Yang, 2025, Transaction of the Korean Society of Automotive Engineers)
- Refining UI/UX with Minimalist Design and AI: Towards Sustainable and Efficient Digital Experiences(Jacqueline Audrey Iman, Alicia Felisha, Michael Kimeison, Eric Hermawan, R. Y. Rumagit, Hady Pranoto, 2025, Procedia Computer Science)
- What type of intelligent cockpit can help more in traffic accident scenarios? Considering the effects of gender and anthropomorphism in voice interface, and their interaction with text length(Jinjin Mo, Jingjing Chen, Tianji Shi, Zhe Chen, 2025, Transportation Research Part F: Traffic Psychology and Behaviour)
- Automotive User Interfaces in the Age of Automation (Dagstuhl Seminar 16262)(A. Riener, Susanne CJ Boll, A. Kun, 2016, Dagstuhl Reports)
- Impacts of Touch Screen Size, User Interface Design, and Subtask Boundaries on In-Car Task's Visual Demand and Driver Distraction(Hilkka Grahn, T. Kujala, 2020, Int. J. Hum. Comput. Stud.)
- A Visual Study of In-vehicle Human-machine Interaction Interface(Bingbing Xu, Zihao Li, Zhendong Wu, Xiaoqun Ai, 2022, 2022 3rd International Conference on Intelligent Design (ICID))
- A study of simplified Chinese typeface design applied to automotive user interface(Xiaosong Qian, Yuanbo Sun, 2019, Proceedings of the Seventh International Symposium of Chinese CHI)
- 7‐1: Invited Paper: Smart Interior for Intelligent Cockpit(Shanrong Zhang, Xiongping Li, Fan Tian, Zhiyuan Zhang, 2025, SID Symposium Digest of Technical Papers)
- HUMAN MACHINE INTERFACE DESIGN OF AR-HUD AUTONOMOUS VEHICLE BASED ON SCENARIO MODEL(2025, International Journal of Mechatronics and Applied Mechanics)
- Organic electronics application overview from automotive HMI to X-ray detectors(R. Gwoziecki, J. Verilhac, A. Latour, A. Revaux, C. Serbutoviez, A. Martinent, 2016, 2016 6th Electronic System-Integration Technology Conference (ESTC))
- Exploring the Layout and Interaction Requirements of Multi-screen Interfaces in In-Vehicle Infotainment Systems for Electric Vehicles Using IPA and Kano Models(Shih-Chieh Chen, Tse-An Chen, 2025, No journal)
- Thumbnail-based interaction method for interactive video in multi-screen environment(Ui-Nyoung Yoon, Seunghyun Ko, Kyeong-Jin Oh, Geun-Sik Jo, 2016, 2016 IEEE International Conference on Consumer Electronics (ICCE))
- Digital Transformation in Automotive: Color Design of Cockpit Alerts for Effective and User-Friendly Driver Communication(Anna Lewandowska, Agnieszka Olejnik-Krugly, Kamil Bortko, 2025, International Conference on Information Systems Development)
- Automotive Interior Design in the Age of Electric Vehicles: User Interface Expectations, Perceptions and Preferences(Rafael Gomez, Levi Swann, Peter Florentzos, Alex Singleton, 2024, Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications)
The final grouping comprehensively covers the technology stack and methodology of intelligent-cockpit HMI design: from the underlying software/hardware architectures, sensor technologies, and automated testing tools; through mid-level multimodal interaction fusion, visual-aesthetic optimization, and quantitative UX evaluation models; up to top-level emotional resonance, personalized adaptive strategies, and human-machine co-driving safety in the age of autonomous driving. The research shows a clear evolution from a mere "human-machine interface" toward a "third living space, empathetic companion, and safety barrier."
A total of 168 related references.
No abstract available
ICS (Intelligent Cockpit System) is a Human-Machine Interface (HMI) technology that integrates In-Vehicle Infotainment (IVI), Head-Up Display (HUD), and Navigation (NAVI). The HMI technology stack in ICS involves engineering human-machine interaction creativity, component-based graphics systems, vehicle-level hardware systems, and more. The HMI engine we developed is middleware that implements human-machine interaction computing in in-vehicle electronic equipment; it is also a software stack for graphics computing, scheduling management of computing devices in the cockpit, and software runtime functions, and it drives the display hardware through OpenGL. This paper introduces a COMO-based ICS HMI technology with a functional-safety SOA architecture. With the support of COMO RPC, devices are abstracted as services and integrated together. Under the premise of ensuring functional safety, it offers a variety of ICS-oriented 2D and 3D controls with running and design states, suitable for creative staff and automotive engineers to work on together.
No abstract available
No abstract available
It has been discovered that the design of multimodal HMI interaction, which is based on emotion regulation, assists drivers in receiving information and improving their driving state. This design also provides them with multidimensional emotional experiences. The aim is to investigate the field of multimodal emotion recognition and the implementation of intelligent cockpit HMI interaction design. Commencing with the discrete emotion model, we examine, analyze, and formulate the design and strategic principles for multimodal interaction models in emotion regulation within driving scenarios, utilizing the information fusion hierarchy in multimodal emotion recognition. Furthermore, we propose interaction design solutions through additional research and demonstration of literature cases and user data. We explore the concepts and trends of multimodal interaction design for future human-computer interfaces, and strive to provide a more efficient, humanized, emotional, and immersive experience design research path and theoretical reference.
No abstract available
Gesture-based human–machine interaction (HMI) has attracted growing interest in intelligent cockpit systems due to its potential for contactless and intuitive control. However, existing methods based on traditional image processing or wearable sensors lack the flexibility and naturalness required in complex vehicle environments. Although deep learning approaches offer superior representation capabilities, they often suffer from poor real-time performance, limited recognition accuracy, and a lack of validation under real-world driving conditions. To overcome these limitations, a lightweight spatiotemporal fusion network (LSFNet) for a vehicle-mounted gesture recognition system is proposed. The network integrates a dynamic spatiotemporal fusion (DSTF) module that adaptively aligns interframe features via learnable channelwise weights, making the model more sensitive to the temporal misalignment caused by changes in gesture speed. A dual-branch spatiotemporal attention module further enhances recognition by capturing both multiscale spatial features and dynamic gesture trajectories. The complete system is deployed on a resource-constrained Jetson Orin Nano platform and accelerated using TensorRT. Field experiments conducted under real-world driving conditions demonstrate an average recognition accuracy exceeding 92% for commonly used gestures, with inference latency remaining below 30 ms, thereby confirming the practicality of the proposed approach for real-time in-vehicle gesture recognition.
To enhance affective experience and customer satisfaction in the intelligent cockpit of new energy vehicles (NEV-IC), this article proposes a novel method that combines the visual sequence data of eye movements with sentiment prediction using an improved Long Short-Term Memory (LSTM). Specifically, we used eye-tracking technology to capture users' visual sequence of design morphology for NEV-IC. We then adopted entropy-TOPSIS to compute the ranking of morphological components based on experts' opinions, establishing the coupling between users' visual perception and experts' opinions to obtain the key morphological dataset of NEV-IC based on user visual sequence. To tackle the shortcomings of LSTM, meanwhile, we employed the sparrow search algorithm (SSA) to optimize the hyperparameters of the LSTM model. Moreover, an attention mechanism has been introduced to address LSTM's difficulty in preserving key information when processing sequential data, enabling a stronger focus on critical sequential features within the user's visual path. To assess the efficacy of the proposed SSA-LSTM-Attention model, a dataset incorporating user emotional imagery was constructed within the research framework of Kansei engineering (KE). This dataset, in conjunction with the morphological dataset of visual sequential features, was applied to our model. The study results indicated that compared to traditional machine learning models like BP neural network (BPNN), support vector regression (SVR), and LSTM, our model performed better in capturing the nonlinear relationship between user sentiment and design features. Additionally, it exhibited higher predictive accuracy, better generalization ability and stronger robustness.
Abstract Objectives The increasing demand for family travel highlights the importance of intelligent cockpit interaction design in this context. This study aims to meet the diverse needs of family users in non-linear driving scenarios through interactive design of intelligent cockpits, enhancing the situational awareness and collaborative performance of drivers and passengers. Methods Scenario research and user research methods were employed to summarize typical non-linear driving scenarios and analyze the needs of family-oriented users for intelligent cockpit interaction design. A design framework and strategies were proposed to guide interaction design schemes. Usability testing was conducted to validate the usability and user acceptance of the design scheme, followed by iterative optimization. Results Experimental verification demonstrated that the design significantly improved the situational awareness and human-machine collaboration performance of drivers and passengers in typical non-linear driving scenarios, leading to enhanced driving safety. Conclusions The study provides practical guidance and theoretical support for intelligent cockpit interaction design, ensuring better driving safety and a more user-friendly experience in family travel scenarios. Highlights This study addresses the diverse needs of family users regarding intelligent cockpits in non-linear driving scenarios and proposes a design framework to guide cockpit interaction design; an experiment demonstrates that the HMI design enhances the situational awareness and human-machine collaboration performance of drivers and passengers in typical non-linear driving scenarios.
Aiming at the key issues of traditional in-vehicle systems, such as weak distributed collaboration capabilities, high interactive response latency, and insufficient localization adaptation, this study proposes and implements an intelligent cockpit system based on the OpenHarmony operating system. The system adopts a three-layer distributed architecture of "Southbound Hardware Layer - Cloud Service Layer - Northbound Application Layer". It uses the Puzhong Hi3861 development board to build the hardware control layer, integrates various types of sensors and actuators, develops the northbound interactive interface based on the ArkUI framework, and introduces an AI large model to optimize the multimodal interaction logic. To realize the quantitative evaluation of system performance, communication latency models, data acquisition accuracy models, and interactive response efficiency models are established, and the effectiveness of each model is verified through experiments. The test results show that the system has a communication latency of ≤50ms, a data loss rate as low as 0.03%, and a voice command recognition accuracy of over 95%. Compared with traditional Linux/Android in-vehicle systems, the improvement of its core performance indicators exceeds 40%. This study provides a technical solution with both theoretical value and engineering significance for the localized research and development of in-vehicle intelligent systems.
With the rapid development of intelligent cockpit technology, voice interaction has become the core entry point of human-vehicle interaction. This paper proposes an intelligent cockpit voice test system based on artificial intelligence technology, which deeply integrates AI technologies such as automatic speech recognition (ASR), natural language processing (NLP), speech synthesis (TTS), and big data analysis. It builds a full-process test platform that integrates intelligent use case generation, multimodal scenario simulation, automated execution and in-depth analysis. The system significantly improves test efficiency and coverage, reduces labor costs, and provides data-driven decision support for the quality optimization of voice interaction systems. It is a key infrastructure for ensuring the quality of voice interaction in intelligent cockpits.
The increasing complexity of multimodal interaction functions in intelligent vehicle cockpits has raised the demand for accurate, consistent, and automated testing solutions. Manual testing methods often suffer from low precision and poor repeatability, limiting their applicability in standardized evaluation scenarios. This paper presents a dual-arm robotic inspection system with seven degrees of freedom per arm, designed for intelligent cockpit testing tasks. The robot features a modular mechanical structure and is equipped with a high-performance motion control scheme capable of executing complex operations in confined spaces. A forward kinematic model of the robot is established using the Denavit–Hartenberg (D-H) method, while inverse kinematics and trajectory planning are solved using the optimization-based CuRobo solver. A virtual cockpit environment is reconstructed in NVIDIA Isaac Sim, where realistic collision objects are introduced for motion planning under constraint. Simulation results indicate that the proposed system achieves sub-millimeter end-effector accuracy, smooth joint trajectories, and effective obstacle avoidance. The maximum positional error is below 0.37 mm in all tested directions. The proposed approach demonstrates strong potential for high-precision, task-flexible automated testing of intelligent cockpits. It provides both theoretical and practical foundations for the development of standardized robotic evaluation platforms in the automotive industry.
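The forward kinematics step described in this abstract rests on chaining Denavit-Hartenberg transforms, one per joint. A minimal sketch follows; the 7-DOF link parameters are hypothetical values chosen purely for illustration, not the paper's actual robot geometry, and the CuRobo-based inverse kinematics is not reproduced here:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint, standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-joint transforms into the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical (d, a, alpha) link parameters for a 7-DOF arm.
DH = [(0.34, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2),
      (0.4, 0.0, np.pi / 2), (0.0, 0.0, -np.pi / 2),
      (0.4, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2),
      (0.126, 0.0, 0.0)]
pose = forward_kinematics([0.0] * 7, DH)
# At the zero configuration the alternating alphas cancel, so the
# end effector sits at (0, 0, sum of the d offsets) = (0, 0, 1.266).
```

Inverse kinematics and collision-aware trajectory planning, as in the paper, would then search joint angles so that `pose` matches a commanded target.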
Fast-evolving cockpit user interfaces (UIs) challenge conventional touch-screen test procedures, which rely on handcrafted templates or fully-supervised detectors and therefore require extensive re-annotation whenever layouts change. This paper presents an annotation-efficient, “eyes-on-hand” testing platform that fuses (i) an RGB-D camera, (ii) a retrieval-augmented vision-language model (Qwen-VL + RAG), and (iii) a 7-degree-of-freedom robotic manipulator equipped with fingertip haptic feedback. The system accepts natural-language instructions, grounds the requested control element through zero-shot multimodal reasoning, projects the 2-D bounding box to 3-D space, and executes a closed-loop touch while monitoring contact force. A 90-cycle benchmark covering static pages, dynamic transitions, and densely packed widgets on three production vehicles shows that the proposed method attains 94.6 % precision, 97.9 % recall, and a 96.7 % end-to-end success rate at an average latency of 7.1 s—well within the 10 s service-level agreement for batch quality-assurance lines. Ablation experiments demonstrate that retrieval augmentation and haptic feedback are complementary: removing either module lowers the success rate by 8~18%, while their joint removal drives it below 70 %. Compared with five representative baselines—template matching, YOLOv5 detection, Edge + OCR, SAM-OCR hybrid, and human quality-assurance staff—the proposed pipeline offers the best recall–cost trade-off, requiring only ≈ 500 ontology entries and zero GPU fine-tuning, whereas YOLOv5 demands ≥ 6 000 new labels and four GPU-hours per UI revision. Future work will address curved or deformable UI elements and further reduce cycle time through model distillation and motion-planning warm starts. Overall, the framework provides a scalable, label-efficient solution for automated, cross-model evaluation of next-generation intelligent-cockpit touch interactions.
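The step of projecting a grounded 2-D bounding box into 3-D space with an RGB-D camera is, at its core, pinhole back-projection. A minimal sketch follows; the intrinsics (fx, fy, cx, cy), the bounding box, and the depth value are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel with metric depth into the
    camera frame: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    return np.array([(u - cx) * depth_m / fx,
                     (v - cy) * depth_m / fy,
                     depth_m])

# Illustrative intrinsics and a detector bounding box around a UI control.
fx = fy = 600.0
cx, cy = 320.0, 240.0
x1, y1, x2, y2 = 280, 200, 360, 280      # bbox (x1, y1, x2, y2) in pixels
u, v = (x1 + x2) / 2, (y1 + y2) / 2      # bbox centre
depth = 0.45                             # metres, from the aligned depth map
target = pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy)
# target is the 3-D touch point the manipulator would be commanded to reach,
# after a further camera-to-robot extrinsic transform (omitted here).
```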
No abstract available
With the development of intelligent connected vehicles, intelligent cabins capable of multimodal interaction have become a new research direction for improving driving experience and safety. However, the current cabin evaluation system is still not perfect, and the evaluation criteria vary greatly, which limits its further optimization. This study proposes a design evaluation method for multimodal interaction intelligent cabins based on ergonomics, using the analytic hierarchy process to construct a comprehensive evaluation system covering visual, tactile, and auditory interaction modes. Through the instance evaluation of the intelligent cabin of an intelligent passenger car and comparison with the existing evaluation system, the effectiveness of this method is verified, providing an important basis for the design optimization of intelligent cabins.
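The analytic hierarchy process used here to weight visual, tactile, and auditory interaction modes can be sketched in a few lines: weights come from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. The comparison matrix below is a hypothetical example, not the paper's data:

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights and consistency ratio (Saaty's AHP)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # random-index table excerpt
    return w, ci / ri

# Hypothetical pairwise judgments: visual vs. tactile vs. auditory channels.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w, cr = ahp_weights(A)
# cr < 0.1 indicates the expert judgments are acceptably consistent.
```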
With the rapid advancement of technology and the continuous improvement of living standards, automotive interiors have undergone significant transformations to meet evolving consumer demands for both functionality and aesthetics. In the early days, there were few displays in vehicles, and their screen sizes were limited. However, the recent trend toward intelligent automotive cockpits has led to an increase in both the size and quantity of displays. While this enhances the technological appeal, it has also resulted in complex and cluttered interfaces that can distract drivers and compromise safety. To address these challenges, smart interior technology has emerged, offering a harmonious blend of intelligent display capabilities, driving safety, and aesthetic design. Our team has been at the forefront of innovation in smart interior technology, striving to overcome technical challenges and deliver cutting-edge solutions. We have recently launched an upgraded smart interior system for intelligent cockpits, featuring improved energy efficiency, enhanced decorative aesthetics, superior display quality, and reduced module thickness. This product aligns with modern development principles of sustainability, health, and beauty.
No abstract available
No abstract available
No abstract available
With the rapid development of the intelligent cockpit, frontier research on human-vehicle interaction has gradually become a focus of the automobile industry. This paper introduces the development history of the intelligent cockpit and proposes design principles and an outlook. Given the crucial role of multimodal interaction in intelligent cockpit usability, its definition, common forms, challenges, and solutions are introduced and discussed. Four conclusions are drawn. First, intelligent cockpit design combined with artificial intelligence technology improves the convenience and safety of driving. Second, multimodal interaction provides drivers with a more natural and intuitive interaction mode and increases the enjoyment of driving. Moreover, emotional design makes the car not only a means of transportation but also a form of emotional sustenance, meeting users' emotional needs. Finally, personalized experience allows users to customize the car's functions according to their own preferences and habits, improving driving comfort. This cutting-edge research not only promotes the technological progress of the automotive industry but also provides new ideas and directions for future automotive design.
The development of Artificial Intelligence (AI) Large Models has a great impact on application development for the automotive intelligent cockpit. The fusion of the intelligent cockpit with large models has become a new growth point for user experience in the industry, which also creates problems for scholars, practitioners, and users in understanding and evaluating the user experience and capability characteristics of Intelligent Cockpit Large Models (ICLM). This paper analyses the current situation of the intelligent cockpit, large models, and AI agents, shows that key application research focuses on the integration of the intelligent cockpit and large models, and puts forward necessary constraints for the subsequent development of an evaluation system for automotive ICLM capability and user experience. The proposed evaluation system, P-CAFE, takes five dimensions (perception, cognition, action, feedback, and evolution) as first-level indicators, drawn from the domains of cognitive architecture, user experience, and the capability characteristics of large models, and selects many second-level indicators reflecting the current state of application and research focuses. After expert evaluation, the weights of the indicators were determined and the indicator system of P-CAFE was established. Finally, a complete evaluation method was constructed based on Fuzzy Hierarchical Analysis. This lays a solid foundation for the application and evaluation of automotive ICLM and provides a reference for the development and improvement of future ICLM.
In recent years, with the rapid development of digital technologies represented by artificial intelligence, the Internet of Things, and big data, automotive human-computer interaction has advanced further and taken on a new look. As the main part of automotive interaction design, the intelligent cockpit is becoming increasingly digital, intelligent, and Internet-oriented; in particular, the intelligent cockpit design of new energy vehicles has broken with the traditional interaction mode of the industrial-machinery era, adopting more intelligent and immersive interaction design. The cockpit no longer serves the single function of driving but also provides the driver with a space for leisure and entertainment. This paper sorts out and summarizes key technologies and design cases of cockpits and their interaction design, then analyzes the problems facing cockpit interaction design and its future development.
An intelligent cockpit user experience evaluation model is introduced, utilizing a three-level indicator system based on the fuzzy analytic hierarchy process. The first-level indicators encompass hardware, software functionality, perceptual experience, and emotional experience. Hardware and software, being fundamental cockpit's features, are pivotal to the overall user experience. Based on this, multimodal perceptual experience enhances user awareness and emotional experience reflects the user's psychological acceptance of the entire interaction process and outcomes, representing the culmination of the user's experience within the intelligent cockpits. Experimental results show that, when faced with numerous evaluation indicators, the proposed method outperforms traditional approaches, offering greater efficiency and reducing human subjectivity and uncertainty, thereby facilitating user experience evaluation. The studies presented can guide the user experience design for automobile intelligent cockpits.
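The fuzzy comprehensive evaluation underlying models like this one combines an indicator weight vector with an expert membership matrix over a grade set. A minimal sketch, with illustrative weights and memberships for the four first-level indicators named in the abstract (the numbers are assumptions, not the paper's data):

```python
import numpy as np

def fuzzy_evaluate(weights, membership):
    """Weighted-average fuzzy operator: B = W . R, normalised to sum to 1."""
    b = np.asarray(weights, dtype=float) @ np.asarray(membership, dtype=float)
    return b / b.sum()

# Illustrative first-level weights: hardware, software, perceptual, emotional.
W = [0.25, 0.20, 0.30, 0.25]
# Illustrative membership of each indicator over the grade set
# (excellent, good, fair, poor), e.g. aggregated from expert scoring.
R = [[0.5, 0.3, 0.2, 0.0],
     [0.4, 0.4, 0.1, 0.1],
     [0.6, 0.2, 0.2, 0.0],
     [0.3, 0.4, 0.2, 0.1]]
B = fuzzy_evaluate(W, R)
grade = ["excellent", "good", "fair", "poor"][int(np.argmax(B))]
```

In a full three-level system the same operator is applied bottom-up: second-level results become the membership rows of the level above.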
New energy vehicles emphasize the development of intelligent driving technology, which places high demands on human-computer interaction. This paper therefore studies emotion recognition in the intelligent cockpit of new energy vehicles, so that intelligent driving can help regulate drivers' emotions and assist them in completing driving tasks more efficiently and safely. To ensure that angry-driving data can be organically combined with the overall architecture of intelligent vehicles, a motion preview model based on integrated direction and speed control makes real-time decisions according to the vehicle motion state and current traffic information, and feeds the decision information back to the vehicle control module. The driver's anger characteristics are considered on top of this model. In addition, this paper proposes MDERNet, a multimodal driver emotion recognition model based on facial expression and driving behavior. It filters and highlights driving-behavior data through temporal attention derived from the facial-expression modality, realizing information fusion among modalities at the input level. Experimental results and analysis show that the proposed driver emotion recognition performs well.
Extended equipment refers to devices based on user needs that are not constrained by the space, software, or hardware of the automotive intelligent cockpit. It possesses independent capabilities outside the cabin and, when interconnected with the automotive intelligent cockpit, enhances cockpit functionalities and enriches the user experience. Because of their strong independent out-of-cockpit functions, users frequently connect the cockpit with mainstream extended equipment (smartphones, tablets, etc.). Improving the user experience of the interconnection process first requires targeted evaluation methods. To investigate methods for evaluating the usability of interconnections between the automotive intelligent cockpit and extended equipment, 35 participants were invited to complete usability evaluation experiments. This paper combines touch and eye-movement behavior representation for usability evaluation. Four methods, namely correlation analysis (CA), intrinsic feature importance (IFI), recursive feature elimination (RFE), and SHAP, were used for feature selection. In combination with three classic models, a usability evaluation model for the interconnection between the automotive intelligent cockpit and extended equipment was established. The results indicate that the features selected through CA, namely Distance Moved Number of Samples, Not Moving Frequency, Not Moving Cumulative Duration, Task Duration, Task 1 (car machine) Number of Gazes, and Task 1 Total Glance Time, achieved the highest performance under the XGBoost model with an R2 of 0.742. This study established a mapping relationship between objective indicators and subjective usability, providing a theoretical foundation for enhancing the proactive perception capabilities of automotive intelligent cockpits and for further improving their intelligence level.
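The correlation-analysis (CA) feature screen that performed best in this study ranks candidate features by the strength of their linear association with the subjective target. A minimal sketch follows; the synthetic touch/eye-movement features and the usability target are illustrative stand-ins, not the study's data:

```python
import numpy as np

def rank_by_correlation(X, y):
    """Rank columns of X by |Pearson r| with the target y (CA-style screen)."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores

# Synthetic stand-ins for touch/eye-movement features and a usability score.
rng = np.random.default_rng(0)
n = 200
task_duration = rng.normal(30.0, 5.0, n)    # strongly related to usability
gaze_count = rng.normal(12.0, 3.0, n)       # weakly related
noise_feat = rng.normal(0.0, 1.0, n)        # unrelated
usability = (-0.08 * task_duration - 0.05 * gaze_count
             + rng.normal(0.0, 0.2, n))
X = np.column_stack([task_duration, gaze_count, noise_feat])
order, scores = rank_by_correlation(X, usability)
# order[0] == 0: task duration carries the strongest usability signal.
```

The top-ranked features would then feed a downstream regressor (XGBoost in the study) whose R2 on held-out data measures the selection's quality.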
With the rapid development of technologies such as artificial intelligence, big data, and new manufacturing processes, as well as increasing demands on quality of life, new technological means are being applied in automobile manufacturing to meet the need for personalization, and the industry is paying ever more attention to the research and development of automotive intelligent cockpits. To support a better understanding of the automotive intelligent cockpit and provide a reference for industry feasibility studies in this field, this article introduces the concept and development history of the intelligent cockpit, explains the importance of design evaluation, and describes its design concept. It also analyzes the advantages and disadvantages of using intelligent cockpits in automobiles and their future development trends. The evaluation of the intelligent cockpit is introduced and analyzed from the perspectives of safety, comfort, and economy. The article focuses on the design concept of the intelligent cockpit from three aspects: structural design, functional design, and safety design. Considering these advantages and disadvantages, as well as directions for improving evaluation and design, future development trends of intelligent cockpits are projected.
In the realm of intelligent cockpits, the predominant emphasis on enhancing driving safety and comfort through human-vehicle interaction and driving behavior monitoring has conventionally centered on the upper body of the driver. Regrettably, the wealth of information inherent in the lower half of the driver's body, particularly the feet, tends to be systematically underestimated by researchers. A specialized intelligent carpet tailored for automotive cockpits that integrates a sponge-based triboelectric sensor to monitor the driver's foot movement is proposed. The intelligent carpet contains two areas, namely the human-vehicle interaction sensing area and the driving intention monitoring area. With the human-vehicle interaction sensing area, the driver can interact with the vehicle through the left foot, reducing the demand on visual resources and hand workload. Furthermore, the driving intention monitoring area, served by the right foot, can effectively identify the driver's intentions and behaviors, which are essential to trajectory prediction, optimal path planning, and driving safety. It is demonstrated that such an intelligent carpet is potentially useful for human-vehicle interaction, transportation optimization, and accident prevention.
No abstract available
Advances in technologies such as intelligent connected vehicles and the metaverse are driving the rapid development of automotive intelligent cockpits. From the perspective of the cyber–physical–social system (CPSS), this study proposed an intelligent cockpit composition framework which includes three layers: perception, cognition and decision, and interaction. Meanwhile, we also describe the relationship between the intelligent cockpit framework and the outside environment. The framework can dynamically perceive and understand humans and provide feedback on the understanding results, which helps provide a safe, efficient, and enjoyable experience for humans in the intelligent cockpit. In the cognition and decision layers of the proposed framework, we design a case study of active empathetic auditory regulation of driver anger, focusing on improving road traffic safety. We conducted an in-depth interview experiment and designed two auditory regulation materials: active empathetic speech (AES) and text-to-speech (TTS) speech. Next, 30 participants were recruited, and they completed a total of 240 anger-regulated driving experiments in straight and obstacle-avoidance scenarios. Finally, we quantitatively analyzed and compared the participants' subjective feelings, physiological changes, driving behaviors, and driving risks, and validated the driver anger regulation quality of AES and TTS. The proposed methods and results are beneficial to the design of future intelligent cockpit emotion regulation systems, toward a better intelligent cockpit.
Automotive user interface (AUI) evaluation becomes increasingly complex due to novel interaction modalities, driving automation, heterogeneous data, and dynamic environmental contexts. Immersive analytics may enable efficient explorations of the resulting multilayered interplay between humans, vehicles, and the environment. However, no such tool exists for the automotive domain. With AutoVis, we address this gap by combining a non-immersive desktop with a virtual reality view enabling mixed-immersive analysis of AUIs. We identify design requirements based on an analysis of AUI research and domain expert interviews (N=5). AutoVis supports analyzing passenger behavior, physiology, spatial interaction, and events in a replicated study environment using avatars, trajectories, and heatmaps. We apply context portals and driving-path events as automotive-specific visualizations. To validate AutoVis against real-world analysis tasks, we implemented a prototype, conducted heuristic walkthroughs using authentic data from a case study and public datasets, and leveraged a real vehicle in the analysis process.
In this demonstration, we present novel interaction modalities and use cases for AutoVis, a tool for the mixed-immersive analysis of automotive user interface (AUI) interaction studies. AutoVis uniquely enables exploration of AUI studies’ multilayered spatio-temporal interplay between humans, vehicles, and their surroundings by combining a non-immersive desktop view with a virtual reality view. It facilitates the analysis of passenger behavior, physiology, spatial interactions, and events within replications of study environments, employing avatars, trajectories, and heatmaps. To extend AutoVis and streamline interactions with it, we created a novel concept for gaze and gesture-supported analysis control. In addition, we conducted an exemplary use case study in the context of traffic accident reconstructions to explore the applicability of AutoVis apart from AUIs. By demonstrating these extensions, we contribute to the underexplored area of immersive analytics for AUIs and promote a more efficient and effective understanding of human-vehicle interaction.
The manner in which an HMI presents information can affect a user's experience in highly automated vehicles. This online experimental study analyzed how informing participants of future or current events and vehicle behaviors in various modalities affected trust and comfort. We observed that presenting users with information about upcoming road events and vehicle maneuvers led to greater user trust and comfort, but only when also alerting users about the immediately occurring events. Psychological and human-robot interaction theories provide explanations for why a combination of current and future alerts may result in higher user trust. Understanding how content temporality impacts the individual has design implications that may lead to an increased partnership and optimized interaction between a person and their highly automated vehicle.
The readability of text-rich human-machine interfaces (HMIs) has become increasingly pivotal for automotive user interfaces (AUIs) in China. An illegible typeface in an AUI tends to cause driver distraction, which may increase the incidence of traffic accidents. We devised a rapid test in two different typefaces for evaluating the readability of Simplified Chinese typefaces. The results show that readability in AUIs can be significantly impacted by typeface. In addition, the results suggest that user performance can be partly affected by gender and driving frequency. Furthermore, the differences in glancing behavior were hardly narrowed when the text size was enlarged from 5 mm to 6 mm in the driving context.
No abstract available
Electric vehicles (EVs) are emerging as the leading alternative to internal combustion engines (ICE) and offer an opportunity to curb emissions, reduce air pollution, and decrease fossil fuel dependence. Key to their successful adoption involves understanding consumer expectations and preferences for EV user interfaces (UIs). This study explores consumer preferences and acceptance of EV UIs, focusing on expectations, perceptions and preferences of aesthetics, design, function, and features. The work presented here is part of a larger project investigating EV adoption in the Australian market and presents a qualitative observation of participants interacting with EVs. Key areas of EV interiors that impact perception and preferences include center console and dashboard, driving controls, button quantity, and screen size. Furthermore, participants expect EVs to have fewer physical buttons, prefer uncluttered UIs, favour large screens, and desire physical buttons for critical functions. These insights guide future automotive UI designs, enhancing EV adoption.
In-car touchscreen infotainment systems have transformed traditional driving experiences into interface-driven user experiences by incorporating multiple features into the system. Although the innovation offers exciting new driving experiences, there are many potential safety issues that need to be addressed urgently.
Text-rich driver–vehicle interfaces are increasingly common in new vehicles, yet the effects of different typeface characteristics on task performance in this brief off-road based glance context remains sparsely examined. Subjects completed menu selection tasks while in a driving simulator. Menu text was set either in a ‘humanist’ or ‘square grotesque’ typeface. Among men, use of the humanist typeface resulted in a 10.6% reduction in total glance time as compared to the square grotesque typeface. Total response time and number of glances showed similar reductions. The impact of typeface was either more modest or not apparent for women. Error rates for both males and females were 3.1% lower for the humanist typeface. This research suggests that optimised typefaces may mitigate some interface demands. Future work will need to assess whether other typeface characteristics can be optimised to further reduce demand, improve legibility, increase usability and help meet new governmental distraction guidelines. Practitioner Summary: Text-rich in-vehicle interfaces are increasingly common, but the effects of typeface on task performance remain sparsely studied. We show that among male drivers, menu selection tasks are completed with 10.6% less visual glance time when text is displayed in a ‘humanist’ typeface, as compared to a ‘square grotesque’.
No abstract available
Navigation systems are a vital part of the current traffic system. Advancements in driving technology have brought about radical changes in driving behaviour and reduced routing time; however, they also pose risks of distraction or inattention to users. Driving at night carries more risk than daytime driving, partly due to misuse of headlamp high beams, and travelling on foggy days is similarly difficult for all road users. As far as the disruptive effects of navigation systems are concerned, the empirical conclusions are heterogeneous. This project aims to develop a low-cost and effective advanced driver assistance system that includes vehicle-to-vehicle communication and intelligent headlight control. The project also studies and analyses multidisciplinary techniques, including supervised machine learning, to effectively classify road surface conditions using data collected from smartphones and thereby ensure safe and comfortable driving. A graphical user interface was developed to increase the usability of the system. In particular, visual distraction caused by navigation systems in relation to map navigation was reviewed. The project analyses the data so as to improve road safety when using a navigation system in unfamiliar areas. The results show that fewer glances of more than 2 seconds were directed at the navigation system, while map navigation leads to higher off-road times.
Modern automotive infotainment systems offer a complex and wide array of controls and features through various interaction methods. However, such complexity can distract the driver from the primary task of driving, increasing response time and posing safety risks to both car occupants and other road users. Additionally, an overwhelming user interface (UI) can significantly diminish usability and the overall user experience. A simplified UI enhances user experience, reduces driver distraction, and improves road safety. Adaptive UIs that recommend preferred infotainment items to the user represent an intelligent UI, potentially enhancing both user experience and traffic safety. Hence, this paper presents a deep learning foundation model to develop a context-aware recommender system for infotainment systems (CARSI). It can be adopted universally across different user interfaces and car brands, providing a versatile solution for modern infotainment systems.
No abstract available
This SIG will explore issues related to the design of in-vehicle human-computer interfaces. A modern vehicle's human-computer interface often facilitates the basic operation of the vehicle, but also provides more advanced features, such as assistive cruise control and lane keeping. Furthermore, today's drivers and passengers frequently use brought-in devices, in order to access navigation instructions, and use non-driving related types of digital information such as social media. The SIG will explore how in-vehicle interfaces can facilitate safe interactions for all of the occupants of the vehicle, and how they can take advantage of connected vehicle technologies.
No abstract available
Modern automotive infotainment systems offer a complex and wide array of controls and features through various interaction methods. However, such complexity can distract the driver from the primary task of driving, increasing response time and posing safety risks to both car occupants and other road users. Additionally, an overwhelming user interface (UI) can significantly diminish usability and the overall user experience. A simplified UI enhances user experience, reduces driver distraction, and improves road safety. Adaptive UIs that recommend preferred infotainment items to the user represent an intelligent UI, potentially enhancing both user experience and traffic safety. Hence, this paper presents a deep learning foundation model to develop a context-aware recommender system for infotainment systems (CARSI). It can be adopted universally across different user interfaces and car brands, providing a versatile solution for modern infotainment systems. The model demonstrates promising results in identifying driving contexts and providing contextually appropriate UI item recommendations, even for previously unseen users. Furthermore, the model’s performance is evaluated with fine-tuning to assess its ability to make personalized recommendations to new users.
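The abstract does not detail CARSI's architecture, which is a deep learning foundation model. As a purely illustrative baseline (all names here are hypothetical), the interface such a context-aware recommender exposes can be sketched as a counting model that learns item popularity per driving context from interaction logs:

```python
from collections import Counter, defaultdict

class ContextualRecommender:
    """Minimal context-aware recommender sketch (not the CARSI model).

    Learns P(item | context) by counting interactions and surfaces the
    top-k infotainment items for the current driving context. The paper's
    system replaces this counting baseline with a deep learning foundation
    model that also generalizes to unseen users.
    """

    def __init__(self):
        # context -> Counter of item usage frequencies
        self.counts = defaultdict(Counter)

    def observe(self, context, item):
        """Record one interaction (e.g. from naturalistic driving logs)."""
        self.counts[context][item] += 1

    def recommend(self, context, k=3):
        """Return the k most frequently used items for this context."""
        return [item for item, _ in self.counts[context].most_common(k)]
```

A UI layer could then surface `recommend("highway_morning")` as shortcut tiles, falling back to a static layout for contexts with no history.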
No abstract available
In this work, we present ICEBOAT an interactive tool that enables automotive UX experts to explore how users interact with In-Vehicle Information Systems (IVISs). Based on large naturalistic driving data continuously collected from production line vehicles, ICEBOAT visualizes drivers’ interactions and driving behavior on different levels of detail. Hence, it allows to easily compare different user flows based on performance- and safety-related metrics.
Autonomous vehicles (AVs) are a disruptive mode of transportation that is rapidly advancing. However, it is widely acknowledged in industry and academia that AVs may not be capable of handling every traffic situation independently, necessitating remote human intervention. Existing teleoperation methods face significant challenges, highlighting the need for innovative remote operation approaches. One such approach is tele-assistance, where remote operators (ROs) offer high-level guidance while delegating low-level controls to automation. Our research focuses on designing a tele-assistance interface. By interviewing 14 AV teleoperation experts and conducting an elicitation study with 17 experienced teleoperators, we identify road scenarios requiring remote human assistance. We then devise a set of discrete high-level commands that enable the resolution of these scenarios without manually controlling AVs. Finally, we integrate these findings into the design of a novel user interface for teleoperating autonomous vehicles.
Advancements in user interface technologies and demands of design engineering led to increasing implementation of large and mostly flat interactive surfaces in automotive. Recent discussions in the context of in-vehicle usage of touchscreens advocate for the use of haptic feedback to restore the explore- and feel-qualities typically experienced in traditional physical button interfaces that contribute to intuitive, eyes-free, and tactually rich interactions. Haptic technologies that include a friction modulation approach seem especially promising to convey a high-quality feeling. This research reports an experience-oriented evaluation of an electrostatic friction haptic display in an in-vehicle direct touch interaction context. The evaluation was based on an automotive multitask setting (primary driving-task and secondary target-selection-task) with a 2 × 2 feedback modality design (factors haptic/audio with levels absent/present). The objective variables (response time, errors, and performance on the primary task) did not differ between feedback modalities. Any additional feedback to a visual baseline enhanced the user experience, with the multimodal feedback being preferred by most participants. Surface haptics was perceived as a novel yet unexpected type of haptic feedback. We discuss the implications for the haptic design of programmable friction displays and provide an initial set of guidelines for this innovative technology.
With the ongoing integration of advanced technologies into modern vehicle systems, understanding user interaction becomes a critical factor for safe and intuitive operation—especially in the transition towards autonomous driving. This article uncovers user-reported challenges of UX and in-vehicle UIs. The analysis is based on quantitative and qualitative evaluations of user-generated content (UGC) from automotive-focused online forums. The quantitative analysis is conducted by Natural Language Processing (NLP), while qualitative evaluation is performed through Mayring, applying a deductive–inductive category formation approach. The study investigates challenges related to interface complexity, driver distraction, and missing user diversity in the context of increasing digitalization. Based on the analysis, a set of practical design implications is presented, emphasizing context-sensitive function reduction, multimodal interface concepts, and UX strategies for reducing complexity. It has become evident that UX concepts in the automotive context can only succeed if they are adaptive, safety-oriented, and tailored to the needs of heterogeneous user groups. This leads to the development of an interaction strategy model, serving as a transitional framework for guiding the shift from manual to fully automated driving scenarios. The paper concludes with an outlook on further research to validate and refine the implications and UX framework.
The automotive evolution toward virtual controls for touchscreen interaction provides the opportunity to manage and manipulate in-vehicle infotainment (IVI) systems without the need for large physical controls. However, as most of these virtual controls are designed for visual feedback on PCs and mobile devices, their implementation can have usability and accessibility constraints in a moving vehicle. In fact, for some controls the interaction primitives may be substantially different from the physical versions (e.g., multi-finger knobs, single-finger dials), requiring drivers to remaster the mechanics of virtual interaction to properly utilize these controls on a touchscreen surface. Some IVI systems now include basic vibrotactile feedback, which may only provide abstract confirmation of triggers or events; this technique may not be ideal for calibrated tactile or textural output in a moving vehicle. Recently, electrostatic (electrovibration) feedback has been proposed for touchscreen interaction, which can augment systems with clear and precise textures rendered on the touchscreen. As this technology is relatively new and may have certain limitations, it is important to understand how the usability of current graphical user interface (GUI) controls augmented with electrostatic feedback may improve touchscreen interaction. This study looks at eight common GUI controls adapted for touchscreen surfaces, primarily for visual interaction, and augments them with vibrotactile and electrostatic feedback. The goal of the study is to understand which types of controls are suitable for visual-only interaction and which require basic tactile feedback (vibration confirmation), while identifying the GUI controls that may be most effectively utilized in the presence of electrostatic tactile feedback rendered on the touchscreen through friction variation.
Employing a 2x2 within-subjects design, forty-eight experienced drivers (28 male, 20 female) undertook repeated button selection and 'slider-bar' manipulation tasks, to compare a traditional touchscreen with a virtual mid-air gesture interface in a driving simulator. Both interfaces were tested with and without haptic feedback generated using ultrasound. Results show that combining gestures with mid-air haptic feedback was particularly promising, reducing the number of long glances and mean off-road glance time associated with the in-vehicle tasks. For slider-bar tasks in particular, gestures-with-haptics was also associated with the shortest interaction times, highest number of correct responses and least 'overshoots', and was favoured by participants. In contrast, for button-selection tasks, the touchscreen was most popular, enabling the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. The study shows clear potential for gestures with mid-air ultrasonic haptic feedback in the automotive domain.
Against the backdrop of escalating intelligent driving technology, the challenge for human-machine interface (HMI) design is to accurately define diverse and individualised customer requirements (CRs), as well as to ensure the stability, usability and competitive advantage of the design solution. HMI design methods that address these issues have not been thoroughly studied. To address this challenge, this study proposes an HMI design methodology that integrates fuzzy Kano features, quality function deployment (QFD) and physiological experiments (eye-tracking, electroencephalogram and electrocorticographic activity) within a human-centred design (HCD) framework. The method is robust, efficient, rapidly iterative and widely applicable in HMI design. The advantages of the proposed methodology are demonstrated and evaluated through examples of navigational interface design. This methodology will enhance HMI design and make creating user-friendly interfaces for intelligent vehicles safer, simpler and more efficient.
Voice user interfaces (VUIs) are becoming indispensable in cars, offering drivers the opportunity to make distraction-free inputs and conduct complex tasks. However, the usability and control efficiency of today's VUIs remain to be enhanced due to their sequential nature. In this work, we explored gestural input on the steering wheel to improve the interaction efficiency of VUIs. Based on the limitations of VUIs, we designed novel gestural commands on the steering wheel to augment them. We also elicited corresponding user-defined gestures by exploring drivers' touch behavior. Then, we implemented a prototype attached to the steering wheel for recognizing gestures. Finally, we evaluated our system's usability regarding driving performance, interaction efficiency, cognitive workload, and user feedback. Results revealed that our system improved the control efficiency of the VUI and reduced workload, without significantly increasing driver distraction compared with using the VUI alone.
Modern vehicles are complex working environments and feature a multitude of functionalities. This applies in particular to commercial vehicles, which are equipped with even more functions than passenger cars. Research showed the potential of context-adaptive user interfaces for reducing the complexity of the human-machine interaction but has focused on passenger cars. Therefore, a context-adaptive touchscreen-based system is conceptualized specifically for commercial vehicles such as trucks based on existing findings and design guidelines. Acknowledging the importance of being able to gather early user feedback for allowing fast iteration cycles, an interactive prototype was implemented, and a modular study setup developed. This combination was tested in an initial user study, which evaluated the usability as well as user experience in terms of the novel context-adaptive interface.
Touchless Selection Schemes for Intelligent Automotive User Interfaces With Predictive Mid-Air Touch
Predictive touch technology aims to improve the usability and performance of in-vehicle displays under perturbations due to road and driving conditions. It fundamentally relies on predicting, early in the freehand pointing movement, the interface item the user intends to select, using a novel Bayesian inference framework. This article focusses on evaluating facilitation schemes for selecting the predicted interface component whilst driving, without physically touching the display, thus touchless. Initially, several viable schemes were identified in a brainstorming session followed by an expert workshop with 12 participants. A simulator study with 24 participants using a prototype predictive touch system was then conducted. A number of collected quantitative and qualitative measures show that immediate mid-air selection, where the system autonomously auto-selects the predicted interface component, may be the most promising strategy for predictive touch.
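As a rough illustration of the intent-inference idea (not the authors' actual Bayesian framework, whose details are not reproduced here), one can maintain a posterior over candidate interface items given the fingertip samples observed so far, assuming (hypothetically) Gaussian-noise proximity to the intended target; immediate mid-air selection would then auto-select once one item's posterior dominates:

```python
import math

def posterior_over_targets(trajectory, targets, sigma=2.0):
    """Toy Bayesian intent inference for predictive touch.

    trajectory: list of (x, y) fingertip samples observed so far
    targets:    dict name -> (x, y) of selectable interface items
    sigma:      assumed pointing-noise scale (hypothetical)

    Starts from a uniform prior and, for each sample, weights each
    target by a Gaussian likelihood in its distance to the sample.
    Returns a dict of normalized posterior probabilities.
    """
    log_post = {name: 0.0 for name in targets}  # uniform prior
    for (x, y) in trajectory:
        for name, (tx, ty) in targets.items():
            d2 = (tx - x) ** 2 + (ty - y) ** 2
            log_post[name] += -d2 / (2.0 * sigma ** 2)
    # normalize in a numerically stable way
    m = max(log_post.values())
    unnorm = {n: math.exp(v - m) for n, v in log_post.items()}
    z = sum(unnorm.values())
    return {n: v / z for n, v in unnorm.items()}
```

An "immediate mid-air selection" policy would trigger, e.g., when `max(posterior.values()) > 0.95`, sparing the driver the final reach and touch.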
No abstract available
Visual distraction by secondary in-car tasks is a major contributing factor in traffic incidents. In-car user interface design may mitigate these negative effects but to accomplish this, design factors’ visual distraction potential should be better understood. The effects of touch screen size, user interface design, and subtask boundaries on in-car task's visual demand and visual distraction potential were studied in two driving simulator experiments with 48 participants. Multilevel modeling was utilized to control the visual demands of driving and individual differences on in-car glance durations. The 2.5” larger touch screen slightly decreased the in-car glance durations and had a diminishing impact on both visual demand and visual distraction potential of the secondary task. Larger relative impact was discovered concerning user interface design: an automotive-targeted application decreased the visual demand and visual distraction potential of the in-car tasks compared to the use of regular smartphone applications. Also, impact of subtask boundaries was discovered: increase in the preferred number of visual or visual-manual interaction steps during a single in-car glance (e.g., pressing one button vs. typing one word) increased the duration of the in-car glance and its visual distraction potential. The findings also emphasize that even if increasing visual demand of a task – as measured by in-car glance duration or number of glances – may increase its visual distraction potential, these two are not necessarily equal.
One potential contributor to mitigating the CO2 emissions caused by road transport is eco-driving. Ecodriving encompasses all driver behaviors performed to reduce the vehicle's energy consumption. Drivers' optimal on-road interaction with the kinetic energy resources is particularly relevant for eco-driving success. Hence, the question is what information do drivers require to optimally interact with the kinetic energy resources? We conducted ten interviews with hybrid electric vehicle (HEV) eco-drivers who actively interact with kinetic energy resources on a daily basis. From these interviews, a set of information requirements was derived. Further steps will comprise the development and testing of an interface prototype based on these information requirements.
We describe an ultra-high-sensitivity force sensor on a flat display that can detect and differentiate between a feather touch and a press touch or tapping. The proposed unique sensor pattern design and pressure-sensitive material enable ultra-high force sensitivity and multi-point force detection (10 points). The minimum detectable force is 25 g with a cover film and 300 g with thick cover glass. The proposed technology provides an on-screen user interface with extremely good performance for automotive applications. Moreover, with the integrated force function, we can realize "error-free input", "adaptive input" and "any-object input" interfaces on a flat display.
The rapid evolution of artificial intelligence (AI) and multimodal interaction technologies is reshaping automotive design, demanding new frameworks that prioritize user experience (UX) and market applicability. This conceptual study proposes an integrative framework that combines AI-driven personalization, multimodal interface design (e.g., voice, gesture, and touch), and real-time UX evaluation mechanisms. Drawing upon human-centered design principles and theories of user acceptance, the framework addresses current gaps in adaptive, intelligent vehicle interface systems. It further outlines strategic pathways for deployment in diverse market environments through an evaluation model that accounts for technological scalability, cultural preferences, and demographic diversity. The study concludes by identifying key directions for future research, particularly emphasizing cross-cultural UX testing across various vehicle types and user groups. The proposed framework contributes to both academic discourse and industry practice, offering a foundation for the next generation of intelligent, user-centric automotive systems.
No abstract available
No abstract available
Background of study: The rapid development of IoT-enabled systems has transformed user interaction by enabling intelligent, responsive, and interconnected digital environments. However, existing studies often emphasize traditional usability factors while overlooking emerging interaction attributes essential for next-generation Digital Twin and IoT-based interfaces. Aims and scope of paper: This study aims to investigate next-generation Digital Twin user experience (UX) by exploring interactive IoT design attributes, including gesture-based interaction, gaze tracking, multimodal interfaces, and AR-assisted usability. The research also develops an enhanced usability framework that integrates efficiency, cognitive load, and user satisfaction metrics. Methods: Using a mixed-method approach, the study integrates quantitative evaluations (task completion time, error rates) and qualitative assessments (NASA-TLX, SUS). Data were collected from open-source IoT usability datasets and supported by prototype testing, including touch-based, voice-assisted, gesture-controlled, and AR-enhanced interfaces. Result: Findings show that AR-enhanced and touch-based interfaces significantly improve task efficiency, reduce cognitive load, and increase user satisfaction. Gesture-based systems, while offering immersive interaction, exhibit higher error rates and cognitive strain. Users also expressed concerns regarding data security and interface complexity in IoT-enabled environments. Conclusion: IoT-enabled Digital Twin interaction offers substantial improvements in usability and engagement, particularly through AR and touch-based designs. However, challenges persist in gesture accuracy, voice recognition consistency, and privacy risks. This research establishes a structured framework for future IoT-UX development, emphasizing adaptive, intuitive, and user-centered design principles.
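The subjective measures named in the study (SUS, NASA-TLX) have standard scoring rules. For instance, the System Usability Scale reduces ten 1–5 Likert ratings to a 0–100 score: positively worded odd-numbered items contribute (rating − 1), negatively worded even-numbered items contribute (5 − rating), and the sum is scaled by 2.5. A minimal implementation of that standard formula:

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score.

    responses: list of ten Likert ratings (1-5) for the standard SUS
    questionnaire, item 1 through item 10 in order. Odd-numbered items
    (index 0, 2, ...) contribute rating - 1; even-numbered items
    contribute 5 - rating; the total is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

Scores are conventionally interpreted against a mean of around 68 for "average" usability, which is how prototype variants such as the AR-enhanced and gesture-controlled interfaces can be compared on a common scale.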
This paper proposes a cross-reality user experience (UX) design framework for digital twin systems deployed across Augmented Reality (AR), Virtual Reality (VR), and web platforms. Ensuring consistent UX across these heterogeneous platforms is critical for maintaining user familiarity and reducing cognitive load when interacting with digital twins. We first review the concepts of digital twins and cross-reality (XR) environments, highlighting the need for unified design strategies. The framework emphasizes visual design consistency, interaction mapping, and context-appropriate adaptation to bridge real and virtual environments. We derive design principles from existing XR design guidelines and cross-reality design patterns, and formulate a structured process for implementing a unified design system that spans AR/VR interfaces and conventional web dashboards. As a case study in the architecture domain, we illustrate how a building’s digital twin can provide a seamless user experience whether accessed through an AR on-site overlay, an immersive VR simulation, or a web platform. The proposed framework suggests practical standardization possibilities for XR UX design, potentially serving as a guideline for industry adoption.
The landscape of digital product design has seen major changes as companies increasingly acknowledge the vital importance of structured user experience approaches. Modern software development practices focus on creating core design principles that steer decision-making throughout product lifecycles. This marks a significant shift from conventional development methods that favored technical functionality over user-focused aspects. The Terra design system showcases advanced principle-driven development with its all-encompassing five-attribute framework, which includes Clear, Efficient, Smart, Connected, and Polished traits. Each principle incorporates specific implementation examples, including prioritizing accessibility to support all users, using consistent language that matches users' mental models, addressing ambiguity through tooltips and contextual help, maintaining conciseness for information focus, and writing clear error messages that identify resolution paths. The Clear principle establishes rules for alleviating cognitive load via thoughtful information display and interface streamlining. The Efficient principle focuses on optimizing workflows by systematically removing unnecessary interaction stages. The Smart principle incorporates intelligent system features to offer contextually appropriate support. The Connected principle guarantees smooth data synchronization and cohesive design language application across platforms. The Polished principle highlights careful focus on visual design elements and aesthetic excellence. Comprehensive measurement methodologies, including Customer Effort Score, System Usability Scale, qualitative usability testing, and behavioral metrics, serve as primary evaluation frameworks for analyzing principle effectiveness, complemented by additional metrics such as Net Promoter Score, Customer Satisfaction Score, and Pragmatic Usability Rating by Experts. Implementation demands tactical incorporation into development practices, interdisciplinary cooperation, quality control measures, and management of organizational change. Organizations that adopt structured design frameworks show quantifiable enhancements in user engagement metrics, development efficiency, and overall product quality when compared to those that depend on inconsistent design methods.
No abstract available
No abstract available
No abstract available
In this paper, we present the concept of multilayer cross-platform graphics sharing in the automotive digital cockpit. Considering that automobiles today have around 150 ECUs (electronic control units), managing all these ECUs is becoming a challenging task. For example, there is a controller (System on Chip - SoC) for every display in an automobile. This SoC is used for content rendering and data processing. The number of ECUs can be lowered by using SoCs with a hypervisor. A hypervisor enables us to run two operating systems on one SoC in real time. The content from both operating systems can be rendered and presented on the same display output. The proposed system consists of one SoC with two operating systems running on a hypervisor. With this proposed solution, we were able to simultaneously render content from both operating systems on one display output. The proposed solution also covers the rendering of media content on a display hosted on a different operating system, and therefore enables mixed criticality, where safety-critical information, such as that presented in the cluster, is displayed with no interference from non-critical operations, such as media rendering. We also evaluate safety concerns and system performance when content is rendered simultaneously on both operating systems.
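The mixed-criticality compositing idea above can be illustrated with a small sketch; the layer names, the two-OS split, and the priority rule are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of priority-based layer compositing for a hypervisor-shared display.
# Layer names and the two-OS split are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str            # e.g. "cluster" (safety OS) or "media" (infotainment OS)
    z_order: int         # higher values are drawn later (on top)
    safety_critical: bool

def compose(layers):
    """Return draw order; safety-critical layers always end up on top."""
    # Non-critical layers sort first (False < True), then by z-order, so
    # cluster warnings can never be occluded by media content.
    return [l.name for l in sorted(layers, key=lambda l: (l.safety_critical, l.z_order))]

stack = [
    Layer("media", z_order=5, safety_critical=False),
    Layer("navigation", z_order=3, safety_critical=False),
    Layer("cluster", z_order=1, safety_critical=True),
]
print(compose(stack))  # → ['navigation', 'media', 'cluster']
```

Even with a low z-order, the safety-critical cluster layer is drawn last, which is the mixed-criticality guarantee the abstract describes.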
The advanced digital cockpit system we are developing recognizes the driver's condition and emotions, analyzes the driver's behavior to give appropriate visual and audible warnings for safe driving, and helps the driver relieve negative states and emotions through the air-conditioning system. In this paper, we introduce the concept of a driver behavior analysis and warning system on the digital cockpit that recognizes driving style and dangerous driving behavior using the car's driving information and the inertial information of a smart device, and provides appropriate warnings for the driver to drive safely. Currently, we are conducting experiments to recognize driving style using driving data collected on a fixed course; next year we will carry out experiments to recognize dangerous driving behavior using naturalistic driving data.
No abstract available
In this paper we introduce the conceptual design of a vehicular digital cockpit that adaptively changes the contents of the voice guidance and the digital cluster according to the driver's age, gender, and driving experience. Based on this conceptual design, we propose the functional requirements necessary to realize a driver-adaptive human machine interface. To complete a digital cockpit adapted to the driver, it is necessary to measure and learn the driver's condition, driving ability, and cognitive response to the human machine interfaces, as well as to adapt to the driver's preferences and characteristics.
No abstract available
The intelligent cockpit is a complete system composed of different cockpit electronics, and its key technologies consist mainly of four parts. At present, however, intelligent cockpit products cannot yet meet users' needs well. Because the existing product model cannot support digital development and application, the relationship between the product model and the process model is established using a product digital development and verification model. To address the shortcomings of the traditional flexible bench, which cannot guarantee the safety of human-computer interaction or adapt to complex environments, a new type of flexible bench for human-computer interaction in the intelligent cockpit is proposed and designed using the magnetorheological effect of magnetorheological fluid. This paper also analyzes the operation mode, control method, and direct torque control technology of the AC electric dynamometer, providing a reliable test platform for experimental research on transmission-system efficiency.
No abstract available
The evolution of vehicle interiors is moving towards a seamless integration of smart technologies, enhancing user experience (UX) for drivers and passengers. The Sunrise concept, developed by Antolin in collaboration with VIA optronics, showcases a new paradigm in automotive cockpit design by combining advanced display integration, optical bonding, and sustainable materials. This paper explores the technological advancements behind the Sunrise cockpit, highlighting its applications for both manual and autonomous driving modes. Additionally, we examine its role in improving safety, sustainability, and customization for future mobility solutions.
Fully autonomous or “self-driving” vehicles are an emerging technology that may hold tremendous mobility potential for individuals who are visually impaired, who have previously been disadvantaged by an inability to operate conventional motor vehicles. Prior studies, however, have suggested that these consumers have significant concerns regarding the accessibility of this technology and their ability to effectively interact with it. We present the results of a quasi-naturalistic study, conducted on public roads with 20 visually impaired users, designed to test a self-driving vehicle human–machine interface. This prototype system, ATLAS, was designed in participatory workshops in collaboration with visually impaired persons with the intent of satisfying the experiential needs of blind and low vision users. Our results show that following interaction with the prototype, participants expressed an increased trust in self-driving vehicle technology, an increased belief in its likely usability, an increased desire to purchase it and a reduced fear of operational failures. These findings suggest that interaction with even a simulated self-driving vehicle may be sufficient to ameliorate feelings of distrust regarding the technology and that existing technologies, properly combined, are promising solutions in addressing the experiential needs of visually impaired persons in similar contexts.
Recent reports have suggested that most self-driving vehicle technology being developed is not currently accessible to users with disabilities. We purport that this problem may be at least partially attributable to knowledge gaps in practice-oriented user-centered design research. Missing, we argue, are studies that demonstrate the practical application of user-centered design methodologies in capturing the needs of users with disabilities in the design of automotive systems specifically. We have investigated user-centered design, specifically the use of personas, as a methodological tool to inform the design of a self-driving vehicle human-machine interface for blind and low vision users. We then explore the use of these derived personas in a series of participatory design sessions involving visually impaired co-designers. Our findings suggest that a robust, multi-method UCD process culminating with persona development may be effective in capturing the conceptual model of persons with disabilities and informing the design of automotive systems.
Background: Semi-autonomous vehicles still require human drivers to take over when the automated systems can no longer perform the driving task. Objective: The goal of this study was to design and test the effects of six meaningful tactile signal types, representing six driving scenarios (i.e., navigation, speed, surrounding vehicles, over the speed limit, headway reductions, and pedestrian status) respectively, and two pattern durations (lower and higher urgencies), on drivers’ perception and performance during automated driving. Methods: Sixteen volunteers participated in an experiment utilizing a medium-fidelity driving simulator presenting vibrotactile signals via 20 tactors embedded in the seat back, pan, and belt. Participants completed four separate driving sessions with 30 tactile signals presented randomly throughout each drive. Reaction times (RT), interpretation accuracy, and subjective ratings were measured. Results: Results illustrated shorter RTs and higher intuitive ratings for higher urgency patterns than lower urgency patterns. Pedestrian status and headway reduction signals were associated with shorter RTs and increased confidence ratings, compared to other tactile signal types. Lastly, among six tactile signals, surrounding vehicle and navigation signal types had the highest interpretation accuracy. Conclusion: These results will be used as preliminary data for future studies that aim to investigate the effects of meaningful tactile displays on automated vehicle takeover performance in complex situations (e.g., urban areas) where actual takeovers are required. The findings of this study will inform the design of next-generation in-vehicle human-machine interfaces.
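A minimal sketch of how such a design might map scenario type and urgency to a vibrotactile pattern; the tactor groupings and timing values below are illustrative assumptions, not the study's parameters (the study used 20 tactors across the seat back, pan, and belt).

```python
# Hypothetical mapping from driving scenario and urgency level to a
# vibrotactile pattern (tactor group, pulse duration, inter-pulse gap).
# All names and numbers are assumptions for illustration.
TACTOR_GROUPS = {
    "navigation": "seat_back_left_right",
    "headway_reduction": "seat_pan_front",
    "pedestrian": "belt",
}

def tactile_pattern(scenario, urgency):
    if scenario not in TACTOR_GROUPS:
        raise ValueError(f"unknown scenario: {scenario}")
    # Higher urgency -> shorter pulses and gaps, consistent with the finding
    # that higher-urgency patterns yielded shorter reaction times.
    pulse_ms, gap_ms = (80, 60) if urgency == "high" else (200, 150)
    return {"tactors": TACTOR_GROUPS[scenario], "pulse_ms": pulse_ms, "gap_ms": gap_ms}

print(tactile_pattern("pedestrian", "high"))
```

The design choice worth noting is that urgency is encoded purely in timing, so the spatial tactor group stays a stable cue for scenario identity.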
Connected vehicle (CV) technology aims to improve drivers' situational awareness through audible and visual warnings displayed on a human–machine interface (HMI), thus reducing crashes caused by human error. This paper developed a driving simulator test bed to assess the readability and usefulness of the Wyoming CV applications. A total of 26 professional drivers were recruited to participate in a driving-simulator study. Prior to driving the simulator, the participants were trained on the concept of CV technology, the developed CV applications, and the operation of the driving simulator. Three driving simulation scenarios were designed. For each scenario, participants drove twice: once with the HMI turned on and once with it turned off. After driving the simulator, a comprehensive revealed-preference survey was employed to collect the participants' perceptions of CV technology and the Wyoming CV applications. Results show that the Wyoming CV applications were most favored under poor-visibility driving conditions. Among the Wyoming CV applications, the forward collision warning and rerouting applications were experienced as the most useful. Approximately 89% of the participants stated that the Wyoming CV applications provided them with improved road condition information and increased their experienced safety while driving; 65% of the participants stated the CV applications and the HMI did not introduce distraction from the primary task of driving. Finally, this paper concludes that the design of CV HMI needs to balance a trade-off between the readability of the warnings and drivers' capability to safely recognize and timely respond to the received warnings.
Pedestrian-to-vehicle (P2V) warning technology is expected to reduce pedestrian crashes and improve roadway safety. Previous studies have demonstrated the effectiveness of P2V; however, compared with a general P2V human–machine interface (HMI) design adopted in these studies, the necessity of applying different HMI designs specific to driving scenarios remains uncertain. To resolve the issue, this study conducted a driving simulator experiment to test the performance of various P2V HMI designs considering scenario heterogeneity. Two aspects of the HMI design, that is, the warning urgency level and warning content, were tested in five pedestrian pre-crash scenarios. The warning urgency level is categorized into two types, a “gradually changed” warning and an “emergency” warning, and the warning content focuses on either providing scenario-based distance information as a supplement or not. Data from 36 participants were collected in the study. The results show that using a “gradually changed” warning design can help a driver make gradual driving adjustments to the upcoming conflict, which improves the driving performance; in addition, providing scenario-based distance information can increase the safety buffer. Additionally, insights about driver features’ effects on P2V HMI design were also proposed. Drivers who had been in a not-at-fault crash before and their experience related to the advanced driver assistance system would interact with the P2V influence. This study’s findings have practical implications for both automobile manufacturers and researchers.
Connected vehicle (CV) technology aims to improve drivers’ situational awareness through audible and visual warnings, commonly displayed on a human–machine interface (HMI), thus reducing the likelihood of crashes caused by human error. Nevertheless, the presence of an in-vehicle CV HMI may pose an increasing threat to driver distraction, particularly for truck drivers and under high workload driving conditions. With this concern, this research investigated the effects of a HMI developed by the Wyoming Department of Transportation CV Pilot on truck drivers’ cognitive distraction and driving behavior through a driving simulator experiment. Revealed preference survey and vehicle dynamics data were employed to assess the cognitive distractions of the Pilot’s HMI. Simulation results indicated that when CV warnings were displayed on the HMI, they did not introduce significant effects on participants’ longitudinal and lateral control of the vehicle. Nevertheless, from the revealed preference survey, it was found that approximately 27% of the participants indicated that the CV HMI tended to introduce additional visual workload for them, particularly when approaching an active freeway work zone under reduced visibility condition. In this regard, this research pointed out that the design of CV warnings and HMI displays needs to incorporate drivers’ ability to recognize and react safely to the received CV warnings to minimize the cognitive distractions introduced by the CV HMI.
Self-driving vehicles are the latest innovation in improving personal mobility and road safety by removing arguably error-prone humans from driving-related tasks. Such advances can prove especially beneficial for people who are blind or have low vision who cannot legally operate conventional motor vehicles. Missing from the related literature, we argue, are studies that describe strategies for vehicle design for these persons. We present a case study of the participatory design of a prototype for a self-driving vehicle human-machine interface (HMI) for a graduate-level course on inclusive design and accessible technology. We reflect on the process of working alongside a co-designer, a person with a visual disability, to identify user needs, define design ideas, and produce a low-fidelity prototype for the HMI. This paper may benefit researchers interested in using a similar approach for designing accessible autonomous vehicle technology.
No abstract available
Driving simulator validation is an important and ongoing process. Advances in in-vehicle human machine interfaces (HMI) mean there is a continuing need to reevaluate the validity of use cases of driving simulators relative to real world driving. Along with this, our tools for evaluating driver demand are evolving, and these approaches and measures must also be considered in evaluating the validity of a driving simulator for particular purposes. We compare driver glance behavior during HMI interactions with a production level multi-modal infotainment system on-road and in a driving simulator. In glance behavior analysis using traditional glance metrics, as well as a contemporary modified AttenD measure, we see evidence for strong relative validity and instances of absolute validity of the simulator compared to on-road driving.
Human Machine Interfaces (HMIs) enable the communication between humans and machines. In the automotive domain, all in-vehicle systems used to be independent. Today they are more and more interconnected and interdependent. However, they still don't act in unison to help drivers achieve their individual goals. More specifically, even though, some current HMIs provide a certain degree of personalization, they don't adapt dynamically to the situation and don't learn driver-specific nuances in order to improve the driver's user experience.
No abstract available
Human–Machine Interfaces (HMIs) in passenger cars have become more complex over the years, with touch screens replacing physical buttons and with layered menu-structures. This can lead to distractions. The purpose of this study is to investigate how often vehicle controls are used while driving and which underlying factors contribute to usage. Thirty drivers were observed during driving a familiar route twice, in their own car and in an unfamiliar car. In a 2 × 1 within-subject design, the experimenter drove along with each participant and used a predefined checklist to record how often participants interacted with specific functions of their vehicle while driving. The results showed that, in the familiar car, direction indicators are the most frequently used controls, followed by adjusting radio volume, moving the sun visor, adjusting temperature and changing wiper speed. Factors that influenced task frequencies included car familiarity, gender, age and weather conditions. The type of car also appears to impact task frequency. Participants interacted less with the unfamiliar car, compared to their own car, which may indicate drivers are regulating their mental load. These results are relevant for vehicle HMI designers to understand which functions should be easily and swiftly available while driving to reduce distraction by the HMI design.
No abstract available
The effectiveness of the human-machine interface (HMI) in a driving automation system during takeover situations is based, in part, on its design. Past research has indicated that the modality, specificity, and timing of the HMI have an impact on driver behavior. The objective of this study was to examine the effectiveness of two HMIs, which vary by modality, specificity, and timing, on drivers' takeover time, performance, and eye glance behavior. Drivers' behavior was examined in a driving simulator study with different levels of automation and varying traffic conditions, while drivers completed a non-driving related task. Results indicated that HMI type had a statistically significant effect on velocity and off-road eye glances, such that those who were exposed to an HMI that gave multimodal warnings with greater specificity exhibited better performance. There were no effects of HMI on acceleration, lane position, or other eye glance metrics (e.g., on-road glance duration). Future work should disentangle HMI design further to determine exactly which aspects of design drive safety-critical behavior.
To study the influence of in-vehicle human-machine interface icons on drivers in different driving environments, and thereby improve drivers' information-processing ability and comfort when interacting with the interface, this study analyzed combinations of icon foreground/background color, icon border shape, and icon area ratio. Data were obtained through eye-tracking experiments in virtual-reality driving environments with and without tunnels. The analysis showed that a high-contrast bluish-purple/sky-blue color combination with a high icon area ratio is well suited to the in-vehicle human-machine interface, which can help car manufacturers design display modes for different driving environments.
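One common way to quantify the "high contrast" pairings such studies favor is the WCAG relative-luminance contrast ratio; the choice of this metric is an assumption here, and the sRGB values standing in for bluish-purple and sky blue are approximations, not the study's exact colors.

```python
# WCAG 2.x contrast ratio between two sRGB colors, as a proxy for the
# icon/background contrast discussed above. Color values are approximations.
def relative_luminance(rgb):
    def channel(c):
        c /= 255.0
        # Piecewise sRGB-to-linear transfer function from the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of lighter to darker luminance, offset by 0.05 ambient flare.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

bluish_purple = (75, 0, 130)   # approximate foreground
sky_blue = (135, 206, 235)     # approximate background
print(round(contrast_ratio(bluish_purple, sky_blue), 2))
```

For reference, black on white gives the maximum ratio of 21:1, so a designer can rank candidate icon/background pairs on the same scale the study's "high contrast" finding implies.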
Among pedestrian fatalities involving vehicles at low speeds, the percentage of pedestrian collisions was highest for right turns, yet the mechanism of these traffic accidents has not been clarified. In this study, we investigate the behavioral characteristics of drivers when a vehicle makes a right turn in five situations using a driving simulator. We conducted an experiment using a driver assistance system that alerted drivers when the system detected pedestrians at the intersection. A human–machine interface (HMI) was first displayed when the subject vehicle (ego vehicle) stopped in front of the intersection due to a red light. The display was then turned off when the traffic light changed to green, and the ego vehicle started moving. It was displayed again when the ego vehicle entered the intersection. We found that the HMI display was effective in increasing the percentage of the driver's gazing time at pedestrians and in ensuring safety by having the vehicle stop before moving forward. Furthermore, we found that the HMI's effectiveness was most significant in the situation where three preceding vehicles made a right turn.
This study proposes a Human Machine Interface (HMI) system with adaptive visual stimuli to facilitate teleoperation of industrial vehicles such as forklifts. The proposed system estimates the context/work state during teleoperation and presents the optimal visual stimuli on the HMI display. Such adaptability is supported by behavioral models developed from behavioral data of conventional/manned forklift operation. The proposed system consists of two models, i.e., gaze attention and work state transition models, which are defined by the gaze fixations and operation patterns of operators, respectively. In short, the proposed system estimates and shows the optimal visual stimuli on the HMI display based on the temporal operation pattern. The usability of the teleoperation system is evaluated by comparing the perceived workload elicited by different types of HMI. The results suggest the adaptive attention-based HMI system outperforms the non-adaptive HMI, with perceived workload consistently lower as reported by different categories of forklift operators.
This study presents the integration of Embedded Large Language Models (LLMs), such as ChatGPT, with a hardware-based system to create an enhanced Human-Machine Interface (HMI) for autonomous vehicles. The system leverages various components, including a ChatGPT voice processor, ATMega328 controller, microphone, audio codec, speaker, and digital amplifier, to enable intuitive voice interactions between passengers and the autonomous vehicle. This setup allows real-time communication for navigation assistance, vehicle diagnostics, and emergency handling. Key hardware components include GPS for location tracking, accident/mechanic switches for manual alerts, a fire sensor for safety monitoring, and a relay with a driver module for controlling vehicle operations. The system is further connected to external networks through a GSM modem, facilitating emergency alerts and remote monitoring via IoT. The ATMega328 controller acts as the central coordinator, interfacing between these sensors and the LLM, while the Arduino IDE is used to program and manage the control logic. The inclusion of IoT capabilities enables real-time data transmission and system monitoring from remote locations, enhancing safety and efficiency. By embedding ChatGPT, the system transforms traditional HMIs, offering a more responsive, voice-based interface that provides situational awareness, personalized responses, and seamless operation, improving passenger experience and trust in autonomous vehicle technologies.
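Such a system needs a dispatch layer between the speech/LLM front end and the vehicle functions (navigation, diagnostics, emergency handling). The sketch below is a hypothetical keyword router written in Python for illustration only; the paper's control logic actually runs on an ATMega328, and the handler names and routing rules are assumptions.

```python
# Hypothetical intent-dispatch layer for a voice-based vehicle HMI.
# Handler names, keywords, and return strings are illustrative assumptions.
def handle_navigation(utterance):
    return "routing request sent to GPS module"

def handle_diagnostics(utterance):
    return "vehicle diagnostics report requested"

def handle_emergency(utterance):
    return "emergency alert sent via GSM modem"

# Keyword -> handler table; checked in order, first match wins.
ROUTES = {
    "navigate": handle_navigation,
    "diagnos": handle_diagnostics,
    "emergency": handle_emergency,
}

def dispatch(utterance):
    text = utterance.lower()
    for keyword, handler in ROUTES.items():
        if keyword in text:
            return handler(utterance)
    # Anything unrecognized falls through to open-ended LLM dialogue.
    return "forwarded to LLM for open-ended dialogue"

print(dispatch("Navigate to the nearest charging station"))
print(dispatch("Tell me a story"))
```

The fallback branch reflects the hybrid design the abstract describes: safety-relevant commands map to deterministic hardware actions, while conversational requests go to the LLM.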
No abstract available
The electrification of vehicles is without a doubt one of the milestones of today’s automotive technology. Even though industry actors perceive it as a future standard, acceptance, and adoption of this kind of vehicles by the end user remain a huge challenge. One of the main issues is the range anxiety related to the electric vehicle’s remaining battery level. In the scope of the H2020 ADAS&ME project, we designed and developed an intelligent Human Machine Interface (HMI) to ease acceptance of Electric Vehicle (EV) technology. This HMI is mounted on a fake autonomous vehicle piloted by a hidden joystick (called Wizard of Oz (WoZ) driving). We examined 22 inexperienced EV drivers during a one-hour driving task tailored to generate range anxiety. According to our protocol, once the remaining battery level started to become critical after manual driving, the HMI proposed accurate coping techniques to inform the drivers how to reduce the power consumption of the vehicle. In the following steps of the protocol, the vehicle was totally out of battery, and the drivers had to experience an emergency stop. The first result of this paper was that an intelligent HMI could reduce the range anxiety of the driver by proposing adapted coping strategies (i.e., transmitting how to save energy when the vehicle approaches a traffic light). The second result was that such an HMI and automated driving to a safe spot could reduce the stress of the driver when an emergency stop is necessary.
Cooperative driving in a Connected Vehicle (CV) environment has received increasing attention over the years due to its ability to enhance driving safety and efficiency. For a cooperative driving task that requires speed adaptation, the recommended speed is displayed through a human-machine interface (HMI). The design of the HMI is expected to affect drivers' compliance with and understanding of the recommended speed and thus impact driving performance. However, the effects of HMI design on cooperative driving performance are yet to be explored. In this study, three HMIs were designed: the Δv HMI (baseline; displays the speed difference between the optimal and current driving speed), the Δt HMI (displays the time difference between the optimal and estimated arrival times), and the Δv-graphic HMI (displays the speed difference using a variable graphic form). The HMI designs follow the skills, rules, and knowledge (SRK) taxonomy. To test the effects of the HMI designs on cooperative driving performance, we developed a multi-driver driving co-simulation platform, which can simulate a cooperative driving environment and thus verify the HMI designs. A driving simulator experiment shows that the Δt HMI deteriorated speed-adaptation accuracy, while the Δv-graphic HMI improved driving performance. The questionnaire data revealed that the participants preferred the Δv-graphic HMI, as it was highly rated in terms of system usability and user experience.
The findings of this study provide insights for the HMI design for cooperative driving applications.
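The quantities the first two HMIs display can be sketched directly; the constant-speed kinematics here are a simplifying assumption for illustration, not the paper's co-simulation model.

```python
# Minimal sketch of the quantities a speed-adaptation HMI can display:
# the speed difference (Δv) and the time difference (Δt) relative to an
# optimal approach toward a coordination point. Constant speeds assumed.
def delta_v(optimal_speed, current_speed):
    """Speed difference shown by a Δv-style HMI (m/s)."""
    return optimal_speed - current_speed

def delta_t(distance, optimal_speed, current_speed):
    """Time difference shown by a Δt-style HMI (s): estimated minus optimal arrival."""
    return distance / current_speed - distance / optimal_speed

# A driver at 12 m/s, 180 m from the point, with an optimal speed of 15 m/s:
print(delta_v(15.0, 12.0))         # 3.0 -> "speed up by 3 m/s"
print(delta_t(180.0, 15.0, 12.0))  # 3.0 -> arriving 3 s later than optimal
```

Note that Δt compounds speed with remaining distance, which may help explain why a time-based display demands an extra mental inversion from the driver compared with a direct speed cue.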
This study investigated the effect of display response delays, i.e., the time lag between operation input and display output, on human–machine interface (HMI) device operation, emphasizing commander-type devices, e.g., those used for in-vehicle infotainment. While operating the HMI device, simultaneous processing of visual and tactile information, e.g., operating the device while gazing at the display, is required. This investigation was conducted to clarify how varying response delays in visual feedback after tactile feedback with device operation affect usability in terms of various aspects, e.g., task efficiency, subjective workload, and the sense of agency (SoA). This study employed a targeting task, in which the participants engaged in HMI device operation under a delay time ranging from 0–120 ms. The subjective workload was evaluated using the NASA Task Load Index (NASA-TLX) scale, and the SoA was assessed quantitatively through Near-infrared Spectroscopy (NIRS) measurements of cerebral hemodynamics, which provide insights into the user's psychological and physiological responses to delay. The findings revealed a nuanced relationship between response delays and usability during HMI device operation. We found that slight delays (40–80 ms) do not necessarily increase user workload or reduce task efficiency. Instead, slight response delays can enhance the user's SoA, which suggests that there is an optimal delay window that balances immediate feedback with the cognitive and sensory processing times inherent to human perception. These findings suggest that incorporating slight delays within the software framework of HMI system design effectively enhances the SoA, thereby enabling more intuitive user interactions and improving usability.
This study explores the conceptual design of virtual avatars in autonomous driving and their trigger mechanisms in the context of the Intelligent Human-Machine Interface (HMI). Through the intelligent system's perception of the situation, virtual avatars can be dynamically triggered based on vehicle status, environmental information, and driver behavior, enabling precise information delivery and real-time feedback. The research indicates that users have high expectations for virtual avatars in terms of safety notifications, emotional expression, and personalized design. This study provides a reference for the design of virtual avatars in future autonomous driving systems and proposes prospects for further adaptive optimization.
Driving across Markets: An Analysis of a Human-Machine Interface in Different International Contexts
The design of automotive human–machine interfaces (HMIs) for global consumers needs to cater to a broad spectrum of drivers. This paper comprises benchmark studies and explores how users from international markets—Germany, China, and the United States—engage with the same automotive HMI. In real driving scenarios, N = 301 participants (premium vehicle owners) completed several tasks using different interaction modalities. The multi-method approach included both self-report measures to assess preference and satisfaction through well-established questionnaires and observational measures, namely experimenter ratings, to capture interaction performance. We observed a trend towards lower preference ratings in the Chinese sample. Further, interaction performance differed across the user groups, with self-reported preference not consistently aligning with observed performance. This dissociation accentuates the importance of integrating both measures in user studies. By employing benchmark data, we provide insights into varied market-based perspectives on automotive HMIs. The findings highlight the necessity for a nuanced approach to HMI design that considers diverse user preferences and interaction patterns.
Amidst rapid motorization, the surge in serious traffic accidents has raised concerns about the significant contribution of fatigued driving to road safety. However, current in-vehicle interfaces for fatigue-driving reminders are relatively simplistic and of limited effectiveness. This study aims to optimize the functionality of traditional in-vehicle HMIs by exploring the key factors of human-computer interaction (HCI) and developing targeted user interfaces to effectively alert drivers and reduce driver fatigue. A quantitative analysis based on previous experimental data is conducted to model the correlation between interface design factors (such as simplicity and feedback clarity) and physical fatigue parameters. An integrated user interface with fatigue alerts, rest area navigation, driver assistance, air conditioning settings, and voice control modules is proposed. Compared to the traditional interface, the improved user interface is evaluated in simulated driving conditions using an A/B experiment. The new user interface is expected to demonstrate improved effectiveness in relieving driver fatigue by providing clear visual, audio and haptic feedback. This research contributes a structured methodology for applying HCI principles to optimize in-vehicle interface design for mitigating driver fatigue, providing a framework to inform future interface development and enhance road safety.
The advancement of intelligent vehicles is leading to a growing prevalence and importance of automotive interactive interfaces, attracting considerable research focus. With the global trend towards aging populations, the number of elderly drivers is on the rise, yet research on the search performance of automotive interactive interfaces for elderly users has yet to be carried out. This research therefore adopts eye-tracking technology to explore the effects of varying icon colors on the preferences and perceptions of elderly drivers within car-human interfaces. This work aims to improve the driving experience by enhancing drivers' information processing capabilities and interaction comfort with in-car interfaces. In this study, we examined six distinct foreground colors against two background colors in icon designs, conducting eye-tracking experiments under standard indoor lighting. The analysis shows that elderly drivers search fastest for orange icons and slowest for yellow icons. Additionally, the variation in search times for different icon colors is more pronounced on a white background. These conclusions hold significant implications for future automotive interface designers, who can leverage these results to optimize the background and icon designs of interactive interfaces, thereby enhancing drivers' safety and driving experience and contributing to the transportation industry.
The impact of drowsiness on in-vehicle human-machine interaction with head-up and head-down displays
No abstract available
No abstract available
No abstract available
As the automotive industry evolves towards intelligentization, internet connectivity and electrification, the intelligent car cockpit has become the starting point for major manufacturers to compete. However, current automotive cockpit design in China is based mainly on unimodal voice interaction, which frequently suffers from slow interaction, poor experience and other problems. Based on the main differences between conventional cockpits and intelligent cockpits, this paper proposes that human-computer interaction is the core technology of intelligent cockpits. It analyses the current development status, theoretical foundation and research significance, discusses the application potential of multimodal human-computer interaction technology in intelligent cockpits, and builds an integrated 'human-vehicle-road' interaction model by combining multimodal human-computer interaction technologies. Through this theoretical model, the specific applications of natural interaction modes such as voice, touch, gesture and smell in intelligent cockpit interaction are described, and the advantages of the synergistic development of humans, vehicles and roads via multimodal human-computer interaction technology are demonstrated. Corresponding countermeasures to existing problems are also put forward. The paper concludes that appropriate, well-rounded human-computer interaction technology can optimise the interaction mode of an intelligent cockpit, effectively improving its safety and the comfort of the user's driving, and aims to promote the development of multimodal human-computer interaction technology in the field of intelligent cockpits.
This study provides a theoretical reference and methodological guidance for the future enhancement of smart cockpit design.
The deep integration of voice interaction in smart cockpits brings a triple security threat: end-to-cloud data transmission hijacking, illegal access to local storage, and malicious calls to service interfaces. At the same time, the system must still meet high real-time and multimodal interaction requirements. This study proposes a scenario-driven dynamic security protection framework (SDDSF): first, a multi-dimensional scenario perception model is constructed to identify service scenarios; then, a layered dynamic encryption engine is designed; at the same time, a voice-stream fingerprint watermarking technology is developed to embed time-space stamp hashes for traceability and anti-replay; finally, a security policy linkage mechanism is established to coordinate keys and policies in the cockpit domain. Experiments show that SDDSF achieves an end-to-end transmission delay of 175.3 ms with a data volume of 500 KB and a replay attack interception rate of 96.2%. Ablation experiments further demonstrate that removing the voice watermark module reduces the interception rate to 80.8% (a decrease of 15.4 pp). These results demonstrate that SDDSF combines excellent real-time performance and module collaboration efficiency while ensuring security, providing a scalable and practical security solution for voice-driven intelligent cockpit systems.
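The abstract above gives no implementation details for the voice-stream fingerprint watermark. As a rough illustration of the time-space stamp hash idea (traceability plus anti-replay), a toy sketch might look like the following; the function names, the HMAC construction, and the 5-second replay window are all assumptions, not the paper's actual design:

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"cockpit-domain-key"   # hypothetical key managed by the cockpit domain
REPLAY_WINDOW_S = 5.0                # reject chunks older than this (illustrative value)

def embed_watermark(audio_chunk: bytes, gps, ts=None):
    """Attach a time-space stamp hash to one voice-stream chunk."""
    ts = time.time() if ts is None else ts
    stamp = json.dumps({"ts": ts, "gps": gps}).encode()
    tag = hmac.new(SECRET_KEY, stamp + audio_chunk, hashlib.sha256).hexdigest()
    return {"chunk": audio_chunk, "ts": ts, "gps": gps, "tag": tag}

def verify_watermark(msg, now=None):
    """Check integrity and freshness; tampered or stale (replayed) chunks fail."""
    now = time.time() if now is None else now
    stamp = json.dumps({"ts": msg["ts"], "gps": msg["gps"]}).encode()
    expected = hmac.new(SECRET_KEY, stamp + msg["chunk"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return False                              # hash mismatch: forged or altered
    return (now - msg["ts"]) <= REPLAY_WINDOW_S   # too old: likely a replay
```

A receiver would call `verify_watermark` on each incoming chunk and drop anything that fails; the embedded timestamp also supports after-the-fact traceability.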
The automotive smart cockpit is an intelligent and connected in-vehicle consumer electronics product. It can provide a safe, efficient, comfortable, and enjoyable human-machine interaction experience. Emotion recognition technology can help the smart cockpit better understand the driver's needs and state, improve the driving experience, and enhance safety. Currently, driver emotion recognition faces challenges such as low accuracy and high latency. In this paper, we propose a multimodal driver emotion recognition model. To the best of our knowledge, this is the first work to improve the accuracy of driver emotion recognition by using facial video and driving behavior (including brake pedal force and vehicle Y-axis and Z-axis position) as inputs and employing a multi-task training approach. For verification, the proposed scheme is compared with mainstream state-of-the-art methods on the publicly available multimodal driver emotion dataset PPB-Emo.
Nowadays, every car maker is thinking about the future of mobility. Electric, autonomous and shared vehicles are among the most promising opportunities. The lack of an authority figure in autonomous and shared vehicles raises several issues, one of the main ones being passenger safety. To ensure it, new systems able to understand interactions and possible conflicts between passengers have to be designed. They should be able to predict critical situations in the car cockpit and alert remote controllers to act accordingly. In order to better understand the features of these insecure situations, we recorded an audio-video dataset in a real vehicle context. Twenty-two participants playing three different scenarios ("curious", "argued refusal" and "not argued refusal") of interactions between a driver and a passenger were recorded. We propose a deep learning model to identify conflict situations in a car cockpit. Our approach achieves a balanced accuracy of 81%. Practically, we highlight that combining multimodality, namely video, audio and text, as well as temporality, is the key to performing such accurate predictions in scenario recognition.
No abstract available
As the intelligent cockpit evolves towards a human-machine emotional interaction center, driver emotion recognition has become a key task for enhancing active safety and interaction experience. This paper systematically studies the applicability and optimization paths of multimodal fusion emotion recognition methods based on deep learning in the intelligent cockpit environment. The findings show that visual methods (such as YOLO and MobileNetV3) have the advantages of non-intrusiveness and high real-time performance (response time < 40 ms), but are susceptible to changes in lighting and facial occlusion and recognize high-risk emotions (such as anger) insufficiently; physiological signals (electroencephalogram, electrooculogram) are strongly objective, but involve high equipment costs and poor wearing comfort, with generalization and robustness limited by individual differences; the speech method (combined with a psychoacoustic model) offers natural interaction and resistance to expression disguise, but is greatly affected by in-vehicle noise and shows decreased recognition stability. To address these problems, this paper proposes improvement strategies such as lightweight network structures, sample enhancement for high-risk emotions, and hardware-algorithm collaborative anti-interference, which significantly improve the model's adaptability and real-time performance in extreme environments. The research shows that a single modality can hardly meet the complex requirements of the intelligent cockpit; in the future, multimodal deep fusion should be adopted to achieve the coordinated optimization of accuracy, robustness, and real-time performance while ensuring user experience, providing key support for building an integrated perception-understanding-intervention emotion computing framework for the intelligent cockpit.
With the evolution of intelligent cockpit technology and human-vehicle interaction systems, the ability to recognize and regulate emotions in vehicles has increasingly become a key research direction in intelligent driving. Existing studies mostly focus on perceiving and intervening in the emotional state of the main driver. However, in actual driving scenarios, the co-driver, as an important interactive subject, has a significant interrelationship with the emotional state of the main driver, which may have a profound impact on driving behavior and safety. In response to this relatively weak research area, this paper reviews the latest progress in recognizing the emotional states of the main driver and co-driver, and then focuses on the transmission mechanism and behavioral impact path of the emotional linkage between them. Finally, based on an analysis of the challenges and gaps in current research, future research trends in linkage modeling, multimodal fusion, and human-factor-adaptive interaction are discussed, aiming to provide a theoretical basis and practical reference for building a more intelligent and collaborative emotion-aware human-vehicle interaction system.
No abstract available
Humans are multimodal beings that perceive their environment with all their senses, which allows them to create situational awareness (van Laack, 2014). For driver-vehicle interaction the idea of multimodality has been neglected in the past, and displays became the most common information output in a cockpit. To handle the significant increase in traffic complexity and information availability inside the cockpit, a multimodal combination of immersive audio and visual representation can lead to a better user experience and a more intuitive human-machine interaction (HMI). This paper introduces object-based audio, an innovative technology to enable immersive audio HMI. The presented research suggests that object-based audio can enhance the vehicle HMI by offering situational context and directional elements to the driver.
Recent advancements in large language models (LLMs) and multimodal speech-text models have laid the groundwork for seamless voice interactions, enabling real-time, natural, and human-like conversations. Previous models for voice interactions are categorized as native and aligned. Native models integrate speech and text processing in one framework but struggle with issues like differing sequence lengths and insufficient pre-training. Aligned models maintain text LLM capabilities but are often limited by small datasets and a narrow focus on speech tasks. In this work, we introduce MinMo, a multimodal large language model with approximately 8B parameters for seamless voice interaction. We address the main limitations of prior aligned multimodal models. We train MinMo through multiple stages of speech-to-text alignment, text-to-speech alignment, speech-to-speech alignment, and duplex interaction alignment, on 1.4 million hours of diverse speech data and a broad range of speech tasks. After the multi-stage training, MinMo achieves state-of-the-art performance across various benchmarks for voice comprehension and generation while maintaining the capabilities of text LLMs, and also facilitates full-duplex conversation, that is, simultaneous two-way communication between the user and the system. Moreover, we propose a novel and simple voice decoder that outperforms prior models in voice generation. The enhanced instruction-following capabilities of MinMo support controlling speech generation based on user instructions, with various nuances including emotions, dialects, and speaking rates, and mimicking specific voices. For MinMo, the speech-to-text latency is approximately 100 ms; full-duplex latency is approximately 600 ms in theory and 800 ms in practice. The MinMo project web page is https://funaudiollm.github.io/minmo, and the code and models will be released soon.
No abstract available
Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences between input elements, the irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for the PFS.
The efficacy of interface design for manned submersibles operating at full ocean depth extends beyond the subjective impressions of the designers; it is imperative that the layout is user-friendly to address the physiological and psychological needs of submariners engaged in prolonged activities within confined spaces. This directly influences their performance efficiency and the safety of deep-sea exploration tasks. In this study, a Philips BDM4037UW 40-inch curved monitor was used to replicate the cockpit's display and control interface, segmented into seven zones. The E-prime software facilitated the presentation of visual stimuli. Subjects situated at the mock-up interface were presented with visual stimuli and required to respond by keypress. Each zone randomly displayed a single-digit number ranging from 0-9, prompting participants to discern whether the number was odd or even by pressing the "F" and "J" keys accordingly. The display utilized three background color groups—white, black, and gray, and five foreground color groups for the numbers—white, black, red, green, and blue. Behavioral metrics, such as accuracy and reaction time, were logged via the software's backend. Eye-tracking data were also collected, encompassing fixation details (visual intake count, visual intake frequency, visual intake duration total and visual intake duration average), saccade details (saccade count, saccade frequency, saccade duration average and saccade velocity average), and blink details (blink count, blink frequency, blink duration total and blink duration average), serving as evaluative indicators of the interface's human-computer interaction. Findings suggest that visual information processing is more efficient with specific color pairings: a red foreground on white or gray backgrounds and a green foreground on a black background. 
For optimal usability, it is recommended that key, important, and secondary information be strategically positioned within different areas of the display and control interface for deep-sea operations.
Development of cockpit display systems for automobiles tends towards large screens and multiple screens. More and more functions are being incorporated into one interface, which increases the complexity of human-machine interaction. Designing a wrist support device for the driver can improve both interaction efficiency and interaction quality. This paper studies the performance of in-vehicle interaction with and without wrist support. Measurements of timing and quality were adopted to evaluate the interaction process. Although it was a static test, the results still indicated improvements in interaction performance compared to the condition without wrist support. Wrist support improves the driver's control accuracy of fingertip movement, reflected by faster finger velocity and shorter task completion time. In addition, wrist support reduces muscle load, resulting in better user experience and lower musculoskeletal health risk. A dramatic change to the cockpit is not necessary for manufacturers: optimisation of the shift lever or armrest box could provide wrist support for the driver. These results could serve as a reference for automobile manufacturers.
No abstract available
No abstract available
No abstract available
No abstract available
In the rapidly evolving landscape of automotive infotainment, providing a robust, modular, and easily extensible architecture is paramount. This article presents a plugin manager approach for multi-brand, multi-screen navigation, aimed at automotive software built on top of Android and its Jetpack (including Compose) toolchain. As automotive OEMs increasingly demand brand-specific user experiences, developers often struggle with proliferating “if-else” conditionals, duplicated code, and tangled navigation logic. Traditional solutions, such as static route-based frameworks or theming engines, tend to buckle under the complexity of dynamic brand overrides. Meanwhile, adopting monolithic plugin architectures like OSGi or Eclipse RCP can be excessive and poorly tailored to Android’s modern ecosystem. To address these challenges, we propose a centralized plugin manager that orchestrates brand-specific screens via discrete plugin modules. Each plugin encapsulates the unique UI and navigation flow required by a given brand, whether it’s Volkswagen, Audi, or newer entrants to the market. At runtime, the plugin manager intercepts navigation requests, identifies the appropriate brand, and dynamically dispatches the user to the correct composable screen. This architecture not only curtails code duplication but also simplifies the on-ramp for new brand introductions: engineers simply drop in new plugin classes, optionally annotated for automated registration using Kotlin Symbol Processing, without editing extensive branching logic. Our approach draws inspiration from well-known software patterns: the Factory Pattern for the creation and retrieval of brand-specific plugin instances, the Strategy Pattern for encapsulating brand-driven behaviors under a uniform BasePlugin interface, and annotation-driven patterns (e.g., KSP) for compile-time discovery and streamlined registration of these plugins.
We also compare the plugin manager solution to alternative navigation techniques: multi-module Gradle projects that manually swap resources per brand, reflection-based override approaches prone to runtime overhead and poor type safety, and pure theming solutions that lack the flexibility to alter entire UI flows. The plugin manager approach offers a cleaner, more scalable middle ground, particularly relevant to the 90% of automotive stacks running on Android, where Jetpack Compose and Kotlin are increasingly becoming the de facto standards for creating intuitive, high-performance in-vehicle experiences. In short, this article offers actionable guidance for software architects and developers wrestling with the demands of multi-brand automotive infotainment. By marrying proven design patterns with Android’s latest technologies, the plugin manager framework facilitates rapid expansion, reduces maintenance overhead, and empowers OEMs to elevate brand identity without sacrificing software maintainability. Through prototypes and real-world scenarios, we illustrate how this architecture effectively integrates into large-scale automotive programs, aligning with broader trends in modular software design and responding to the complexities of an ever-more diversified mobility marketplace. Keywords—automotive OEMs, OSGi, Eclipse RCP, KSP, annotation-driven patterns, Jetpack Compose, Factory and Strategy patterns, reflection override, plugin manager.
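The abstract above describes a registry of brand plugins with factory-style dispatch; the paper itself targets Kotlin and Jetpack Compose. As a language-neutral illustration of the same pattern, a minimal Python sketch might look like this, with a decorator standing in for KSP-style annotation-driven registration; the brand names, class names, and `screen_for` API are invented for illustration:

```python
# Brand-plugin registry: each plugin encapsulates a brand's screens, and the
# manager dispatches navigation requests to the registered plugin.
_REGISTRY = {}

def plugin(brand):
    """Decorator standing in for annotation-driven (KSP-style) registration."""
    def register(cls):
        _REGISTRY[brand] = cls
        return cls
    return register

class BasePlugin:
    """Uniform interface (Strategy Pattern) that every brand plugin implements."""
    def screen_for(self, route):
        raise NotImplementedError

@plugin("audi")
class AudiPlugin(BasePlugin):
    def screen_for(self, route):
        return f"AudiScreen({route})"

@plugin("vw")
class VwPlugin(BasePlugin):
    def screen_for(self, route):
        return f"VwScreen({route})"

class PluginManager:
    """Intercepts a navigation request and dispatches to the brand's plugin."""
    def navigate(self, brand, route):
        cls = _REGISTRY.get(brand)
        if cls is None:
            raise KeyError(f"no plugin registered for brand {brand!r}")
        return cls().screen_for(route)   # factory-style instantiation
```

Adding a brand then means dropping in one decorated class, with no changes to the manager or any branching logic, which is the core maintainability claim of the approach.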
No abstract available
We describe an ultra-high-sensitivity force sensor on a flat display that can detect and differentiate between feather touch and press touch or tapping. The proposed unique sensor pattern design and pressure-sensitive material enable ultra-high force sensitivity and multi-point force detection (10 points). Our force touch technology with a button-shaped bump cover enables blind input operation and usability as good as mechanical buttons, thus reducing input errors and yielding more intuitive and safer driving. In addition, the proposed novel algorithm enables an operating temperature range suitable for automotive applications.
This paper analyzes the requirements for improving the efficiency of human-machine interaction in civil aircraft cockpits, identifies the disadvantages in interaction efficiency and cognition in cockpits that have adopted touch screens, and analyzes the interactive characteristics of touch screens in operation. Based on the concept of safe, efficient and comfortable control, the paper expounds the development demands and trends of touch control in future civil aircraft cockpits. Finally, for the purpose of ensuring control safety and improving control efficiency, an integrated design of touch control and information display based on the synoptic page is proposed, so as to realize efficient cognition and convenient control, improve touch usability, and reduce workload and human error for pilots.
No abstract available
No abstract available
No abstract available
This article presents a multi-touch readout IC embedding finger-resistance extraction (FRE) on a capacitive touch screen panel (TSP). The proposed FRE mode aims to tell users apart while sensing touch input by exploiting a finger's unique resistance, which arises from differing bio-electrical properties. The resonance-driven FRE technique and a clamped zoom-in integrator are exploited to obtain a wide dynamic range (DR). A switched-capacitor current-controlled-oscillator-based 14-bit analog-to-digital converter (ADC) was also designed, which has the benefits of low noise, compact size, and highly flexible resolution. The prototype chip, fabricated in 0.18-µm CMOS, achieved a measured signal-to-noise ratio (SNR) of 37.5 dB and DR of 50.7 dB in the touch sensing and FRE modes, respectively, on a real 6.7-in capacitive TSP. By applying support vector machine learning to the FRE data, five different users' fingers were successfully classified with 97.7% accuracy after 500 learning cycles, demonstrating the feasibility of reliable user differentiation on multi-user collaborative touch interfaces.
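The paper classifies users from FRE feature data with a support vector machine. To keep the sketch dependency-free, the toy below substitutes a nearest-centroid classifier for the SVM stage (a deliberate simplification, named plainly); the user names and feature values are invented:

```python
# Toy stand-in for the paper's SVM stage: classify users from
# finger-resistance (FRE) feature vectors with a nearest-centroid rule.

def fit_centroids(samples):
    """samples: {user: [feature_vectors]} -> {user: mean feature vector}."""
    centroids = {}
    for user, vecs in samples.items():
        dim = len(vecs[0])
        centroids[user] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return centroids

def classify(centroids, vec):
    """Return the user whose centroid is closest (squared Euclidean distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda u: dist2(centroids[u]))
```

In a real system one would train on many FRE samples per user and swap in a proper SVM; the flow (fit on enrollment data, then classify each new touch) stays the same.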
A recent trend in the global mobile/IoT industry is the emergence of next-generation smart devices with various screens, thus mobile/IoT market leaders are highly focused on building a new multi-device computing ecosystem based on such new smart devices. Market leaders are not only simply varying their screen sizes, but also competitively launching new devices equipped with innovative screens like foldable and dual-screen phones. However, the current mobile computing ecosystem is restricted by the single device paradigm that allows a user to interact with only one screen tethered to a single device, limiting the potential that the emerging multi-device computing trend provides.
No abstract available
No abstract available
The Deep Orange program immerses automotive engineering students into the world of an OEM as part of their graduate education. While developing the program’s seventh vehicle concept, students explored new human machine interface concepts with the goal of having a clean, minimal interior design. One outcome of their ideation process is a concept for a holographic companion that can act as a concierge for all functions of the vehicle. After creating a prototype of the holographic display using existing technologies and developing a user interface controlled by hand gestures, a usability study was performed with older adults. The results suggest the system was not intuitive. Participants demonstrated better performance with tasks using discrete hand motions in comparison to those that required continuous motions. The data were helpful to understand the challenges of untrained users interacting with a new HMI system.
No abstract available
The paper explores the use of finger gestures as a means of human-machine interaction (HMI), with an emphasis on the ergonomics and usability of such systems. A parametric methodology was developed to create a gesture lexicon based on six critical criteria: comfort, intuitiveness, recognition accuracy, fatigue, ease of learning, and memorability. The process includes modeling the relationship between mental intention and the visual representation of the gesture, as well as employing HMM and DTW algorithms for real-time recognition. Initial experimental results showed a 12% improvement in recognition accuracy compared to conventional methods, a reduction in subjective user fatigue of up to 30% based on the Borg scale, high memorability rates (>85%) after a 7-day non-use period, and intuitiveness above 80% on first attempts, indicating the naturalness of gesture use. The findings demonstrate that finger gestures, when designed according to ergonomic principles, can serve as an efficient and natural interaction medium in applications such as robotics, augmented reality, and accessible systems. This work lays the foundation for the development of intelligent interfaces with optimized user experience and technological maturity.
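Of the two recognition algorithms the abstract names, DTW is compact enough to sketch. The following is a minimal dynamic time warping distance between two 1-D gesture traces plus a nearest-template matcher; the template names and traces are invented, and a real system would use multi-dimensional features and combine this with the HMM stage:

```python
# Minimal DTW distance between two 1-D sequences, as used for matching a
# live gesture trace against stored templates.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample of a
                                 cost[i][j - 1],      # skip a sample of b
                                 cost[i - 1][j - 1])  # match both samples
    return cost[n][m]

def recognize(templates, trace):
    """Return the name of the template with the smallest DTW distance."""
    return min(templates, key=lambda name: dtw(templates[name], trace))
```

Because DTW warps the time axis, a gesture performed slower or faster than its template (e.g. a repeated sample in the trace) still matches with low cost, which is why it suits user-paced gesture input.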
Automation transparency offers a promising way for users to understand the uncertainty of automated driving systems (ADS) and to calibrate their trust in them. However, not all levels of information may be necessary to achieve transparency. In this study, we conceptualized the transparency of automotive human–machine interfaces (HMIs) in three levels, using driving scenarios comprising two degrees of urgency to evaluate drivers’ trust and reliance on a highly automated driving system. The dependent measures included non-driving related task (NDRT) performance and visual attention, before and after viewing the interface, along with the drivers’ takeover performance, subjective trust, and workload. The results of the simulated experiment indicated that participants interacting with an SAT level 1 + 3 (system’s action and projection) and level 1 + 2 + 3 (system’s action, reasoning, and projection) HMI trusted and relied on the ADS more than did those using the baseline SAT level 1 (system’s action) HMI. The low-urgency scenario was associated with higher trust and reliance, and the drivers’ visual attention and NDRT performance improved after viewing the HMI, though not statistically significantly. The findings verified the positive role of the SAT model in human trust in the ADS, especially with regard to projection information in time-sensitive situations, and these results have implications for the design of automotive HMIs based on the SAT model to facilitate the human–ADS relationship.
User-centered design (UCD) methods for human-machine interfaces (HMIs) have been key to developing safe and user-friendly interaction for years. Especially in safety-critical domains like transportation, humans need clear instructions and feedback loops to safely interact with the vehicle. With the shift towards more automation on the streets, human-machine interaction needs to be predictable to ensure safe road interaction. Understanding human behavior and prior user needs in crucial situations can be significant in a multitude of complex interactions for in-vehicle passengers, pedestrians and other traffic participants. While research has mostly focused on addressing user behavior and user needs, the inclusion of users has often been limited to study participants providing behavioral inputs or interviewees prompted for opinions. Although users do not have the knowledge and experience of professional designers and experts to create a product for others alone, unbiased insights into the future target groups’ mental models are a valuable and necessary asset. Hence, with stronger user participation and appropriate tools for users to design prototypes, the design process may more deeply involve all types of stakeholders, providing insights into their mental models to understand user needs and expectations. To extend current UCD practices in the development of automotive HMIs, our work introduces a user-interactive approach, based on the principles of participatory design (PD), to enable users to actively create and work within the design process. A within-subjects study was conducted in which users evaluated their trust during an interaction with an AV and subsequently configured the corresponding HMI. The scenario focuses on the interaction between a pedestrian (the user’s point of view) deciding to cross paths with an automated vehicle (AV, SAE L4). The AV would show its intention via a 360° light band HMI on its roof.
The interactive simulation offered users hands-on options to iteratively experience, evaluate and improve HMI elements within changeable environmental settings (i.e., weather, daytime) until they were satisfied with the result. Participation was enabled by an interface using common visual user interface elements, i.e. sliders and buttons, giving users a range of options for real-time HMI configuration. A first prototype of this interactive simulation was tested for the safety-critical use case in a usability study (N=29). Results from questionnaires and interviews show high acceptance of the interactive simulation among participants: overall usability was rated high (System Usability Scale), frustration low (NASA-TLX raw), and user experience above average (User Experience Questionnaire). Follow-up feedback interviews gave valuable insights into improving the simulation user interface, offering different design opportunities within the simulation, and widening the parameter space. The short design-session time shows the limits of customizability options within this study; the optimal range for longer evaluation and design sessions needs further investigation. Based on the study results, further requirements for PD simulation environments to assess limits of parameter spaces in virtual environments are derived.
No abstract available
No abstract available
In response to the airborne mission system's human-machine interface (HMI) adopting a touch interface for the first time, an ergonomics experiment was carried out based on multi-dimensional, multi-index joint ergonomics evaluation. The purpose of this paper was to overcome shortcomings in the design of the airborne mission system's HMI with a new interface design method in line with current software and hardware development trends. Evaluation methods and indicators including performance, subjective evaluation, and eye tracking were used comprehensively, realizing an evaluation of the operator's workload under different interfaces and tasks. The work also provides a reference for the theory and engineering application of aircraft HMI optimization.
Automated driving research as a key topic in the automotive industry is currently undergoing change. Research is shifting from unexpected and time-critical take-over situations to human machine interface (HMI) design for predictable transitions. Furthermore, new applications like automated city driving are getting more attention and the ability to engage in non-driving related activities (NDRA) starting from SAE Level 3 automation poses new questions to HMI design. Moreover, future introduction scenarios and automated capabilities are still unclear. Thus, we designed, executed, and assessed a driving simulator study focusing on the effect of different transition frequencies and a predictive HMI while freely engaging in naturalistic NDRA. In the study with 33 participants, we found transition frequency to have effects on workload and acceptance, as well as a small impact on the usability evaluation of the system. Trust, however, was not affected. The predictive HMI was used and accepted, as can be seen by eye-tracking data and the post-study questionnaire, but could not mitigate the above-mentioned negative effects induced by transition frequency. Most attractive activities were window gazing, chatting, phone use, and reading magazines. Descriptively, window gazing and chatting gained attractiveness when interrupted more often, while reading magazines and playing games were negatively affected by transition rate.
Touchscreen Human-Machine Interfaces (HMIs) are a well-established and popular choice to provide the primary control interface between driver and vehicle, yet inherently demand some visual attention. Employing a secondary device with the touchscreen may reduce the demand but there is some debate about which device is most suitable, with current manufacturers favouring different solutions and applying these internationally. We present an empirical driving simulator study, conducted in the UK and China, in which 48 participants undertook typical in-vehicle tasks utilising either a touchscreen, rotary-controller, steering-wheel-controls or touchpad. In both the UK and China, the touchscreen was the most preferred/least demanding to use, and the touchpad least preferred/most demanding, whereas the rotary-controller was generally favoured by UK drivers and steering-wheel-controls were more popular in China. Chinese drivers were more excited by the novelty of the technology, and spent more time attending to the devices while driving, leading to an increase in off-road glance time and a corresponding detriment to vehicle control. Even so, Chinese drivers rated devices as easier-to-use while driving, and felt that they interfered less with their driving performance, compared to their UK counterparts. Results suggest that the most effective solution (to maximise performance/acceptance, while minimising visual demand) is to maintain the touchscreen as the primary control interface (e.g. for top-level tasks), and supplement this with a secondary device that is only enabled for certain actions; moreover, different devices may be employed in different cultural markets. Further work is required to explore these recommendations in greater depth (e.g. during extended or real-world testing), and to validate the findings and approach in other cultural contexts.
No abstract available
With the development of autonomous technology, the research into multimodal human-machine interaction (HMI) for autonomous vehicles (AVs) has attracted extensive attention, especially in automotive wellness. To support the design of HMIs for automotive wellness in AVs, this paper proposes a multimodal design framework. First, three elements of the framework were envisioned based on the typical composition of an interactive system. Second, a five-step process for utilizing the proposed framework was suggested. Third, the framework was applied in a design education course for exemplification. Finally, the AttrakDiff questionnaire was used to evaluate these interactive prototypes with 20 participants who had an affinity for HMI design. The questionnaire responses showed that the overall impression was positive and this framework can help design students to effectively identify research gaps and expand design concepts in a systematic way. The proposed framework offers a design approach for the development of multimodal HMIs for autonomous wellness in AVs.
No abstract available
No abstract available
No abstract available
Culture impacts the perception of a product, and that perception in turn affects its evaluation. Research has already revealed that usability ratings for the same product differ between countries. Therefore, the underlying reasons for how culture and usability may be mapped should be further investigated. Gaining a deeper understanding of this connection might lead to a better comprehension of why the same products are rated differently in different markets. The aim of this contribution is to map the six cultural dimensions by Hofstede to the seven usability criteria defined by ISO 9241-210. Based on theoretical considerations, implications for culturally adapted HMIs are derived and an experimental approach for future research is presented.
No abstract available
Predictive touch is an emerging HMI technology that can significantly improve the usability and performance of in-vehicle displays [1-4]. It relies on predicting, early in the pointing gesture, the interface item the driver or passenger intends to select on the display and simplifies the selection task. The user need not touch the display as the system can autonomously auto-select the predicted interface component. This video shows a prototype of a predictive touch system operating in real-time, in a laboratory and vehicle environment. It also depicts the prediction results as calculated by the system whilst pointing in a moving car.
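The abstract above does not disclose the prediction algorithm used by the predictive touch prototype. As a minimal illustrative sketch only, intent inference over a partial pointing gesture can be approximated by linearly extrapolating the most recent motion segment and selecting the nearest on-screen target; the function names and the extrapolation scheme here are assumptions, not the authors' method.

```python
# Hypothetical sketch of intent prediction for mid-air pointing:
# extrapolate the last observed motion segment one step ahead and
# pick the interface target closest to the extrapolated fingertip.
# This is a stand-in for the (unpublished) predictive-touch model.

def predict_target(trajectory, targets):
    """trajectory: list of (x, y) fingertip samples, oldest first.
    targets: list of (x, y) centers of selectable interface items.
    Returns the target nearest to the linearly extrapolated point."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    # Constant-velocity extrapolation of the final segment.
    ex, ey = x1 + (x1 - x0), y1 + (y1 - y0)
    return min(targets, key=lambda t: (t[0] - ex) ** 2 + (t[1] - ey) ** 2)
```

A production system would replace the constant-velocity step with a probabilistic motion model updated in real time, which is what enables early auto-selection before the display is touched.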
No abstract available
No abstract available
In-car interfaces are the primary medium for communication between the occupants and increasingly agentic vehicle systems. Although many universities teach automotive user experience and design courses, there is no consensus on what topics to cover. Some schools may choose to focus on the interior design of the cabin, including, but not limited to, physical controls and ergonomics, while other schools may focus only on the usability of what is shown to the driver and passengers. Participants in our workshop will discuss various topics for teaching Automotive UX and UI at both undergraduate and graduate levels, participating in interactive activities such as panels, breakout discussions, and syllabus design. Participants will then combine their findings into a course outline based on themes (e.g., UI, Human Factors). This workshop is expected to achieve general consensus on an Automotive UX curriculum drawing from diverse stakeholders, including academia, industry, and government.
No abstract available
Data-driven design is believed to be empowered by machine learning (ML) with advanced pattern classification and prediction. However, research on how ML can be used to support automotive human-machine interface (HMI) design is lacking. We present a case study of truck HMI design to understand current data use and expectations of ML in the design process. Findings show decentralized data practices, the role of expertise in decision-making, and the envisioned reactive use of ML, where we underscore the implications for advancing human-ML collaboration in designing future truck HMI systems.
In order to enhance driving safety and identify potential hazards, next-generation intelligent vehicles will need to understand human drivers' intentions and predict their potential maneuvers correctly. In a lane-change scenario, a driver's head rotation measured by the in-cabin driver monitoring camera can serve as a reliable indicator to predict his or her intention. However, using a general model to predict each driver's maneuver is not accurate, while directly sharing the personalized monitoring data with other intelligent vehicles raises privacy concerns. In this paper, we propose a clustering-based personalized federated learning framework (CPFL) to predict lane-change maneuvers based on driver monitoring data. Personalization is added on top of traditional federated learning (FL) through clustering, which separates and groups similar driving behaviors based on clustering parameters: head position threshold and average pre-lane-change preparation time. Long Short-Term Memory (LSTM) networks with different sequence lengths are deployed to predict lane changes in different clusters based on the lane-change preparation time. The CPFL framework is trained and tested using data collected from several human drivers under different driving scenarios through the Unity simulation platform. According to the results, CPFL's average training efficiency is 7.6 times higher than the classic FedAvg approach, and CPFL also offers better adaptability to different driving behaviors than FedAvg, with 4% higher accuracy, 0.2% fewer false positives, and 27.8% fewer false negatives.
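The abstract names two clustering parameters (head position threshold and average pre-lane-change preparation time) and cluster-specific LSTM sequence lengths, but not the exact rules. A minimal sketch of that pre-training grouping step, with all thresholds and values chosen purely for illustration:

```python
# Hypothetical sketch of CPFL's clustering step: drivers are grouped
# by two behavioral parameters before federated training, so each
# cluster can train an LSTM with its own input sequence length.
# Thresholds (15 deg, 2 s) and length values are assumptions.

def assign_cluster(avg_head_rotation_deg, prep_time_s,
                   head_threshold=15.0, time_threshold=2.0):
    """Place a driver into one of four behavior clusters based on
    average head rotation and pre-lane-change preparation time."""
    large_rotation = avg_head_rotation_deg >= head_threshold
    long_preparation = prep_time_s >= time_threshold
    return (large_rotation, long_preparation)

def sequence_length_for(cluster, base_len=10, extra=5):
    """Clusters with longer preparation times get longer LSTM
    input sequences, mirroring the paper's per-cluster design."""
    _, long_preparation = cluster
    return base_len + (extra if long_preparation else 0)

# Group three example drivers (rotation in degrees, prep time in s).
drivers = [(12.0, 1.5), (20.0, 3.0), (18.0, 1.0)]
clusters = [assign_cluster(h, t) for h, t in drivers]
```

Federated averaging would then run separately within each cluster, which is what yields the reported efficiency and accuracy gains over a single global FedAvg model.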
The application of human-computer interaction in intelligent vehicles has become quite mature, which has improved vehicle performance. However, the use of multi-modal interactions presents redundancy, and the resulting function stacking and conflict issues urgently need to be resolved. Therefore, the fusion strategy of multi-modal interaction in intelligent vehicles is particularly important. In order to further upgrade intelligent vehicles, this study analyzed the limitations of current interaction methods from the perspectives of vehicle environment-perception ability and cabin experience. Then, starting from augmented reality, the research integrates the interaction modes of various modalities through a car-mounted AR system, alleviating the limitations of single-modal interaction and the problems caused by rigid fusion of multi-modal interaction. This can significantly enhance the safety, convenience, and intelligence of vehicles. This paper aims to provide theoretical support and practical ideas for the multi-modal interaction fusion of intelligent vehicle systems, and to provide a reference for more efficient and user-friendly human-computer interaction design in the future.
In recent years, the development of intelligent cabins has been rapid, but testing and evaluation usually involve detecting a single component, lacking a quantitative evaluation method for the overall performance of intelligent cabins. To address this issue, this paper first studies the visual interaction elements inside the cabin and summarizes visual interaction testing methods; it then investigates auditory interaction testing methods and establishes a noise library to achieve reproducible experiments on common external environments; finally, it establishes a multi-dimensional comprehensive evaluation method for the intelligent cabin to achieve a comprehensive evaluation of its overall performance.
With the large-scale commercialization of 5G, the global industry has started exploring the next generation of mobile communication technology (6G). From the mobile Internet, to the IoT, and then to the smart connection of everything, 6G will transform from 5G's service objects of people and things to the intelligent networking of agents that support human–machine–object interaction. 6G networks should have the characteristics of ubiquitous intelligence and ubiquitous perception, which poses challenges for 6G network construction. Therefore, we propose a 6G Semantic Communication Scheme based on Intelligent Fabrics for transportation in-cabin scenarios (6GSCS-IF), which can provide senseless intelligent interaction in the transportation in-cabin environment through widely and flexibly deployed intelligent fabrics, demonstrating the superiority of intelligent fabrics in realizing human–machine–object intelligent sensory interaction. Then, we propose a Deep Learning-based Semantic Communication Model for Time-series data (DL-SCMT), and use deep learning for semantic sensing and information extraction to build an end-to-end semantic communication system. The experimental results show that the semantic communication services provided by this model can achieve better signal reconstruction and higher-order intelligent services compared with traditional communication methods.
This paper is intended to provide an overview of a literature survey conducted to explore the level of control technology in the space of Heating, Ventilation and Air Conditioning (HVAC). The survey aims to determine whether there is room for improvement in HVAC control and prediction. While the survey reviewed HVAC control in different environments such as buildings and transportation vehicles, its scope is aimed at HVAC control in vehicles. Efficiency has become critical as the automotive industry takes the path of electric vehicles, and many approaches to HVAC control have therefore been explored to increase energy efficiency while maintaining occupant comfort. A proposal to use both LDA and Kalman Decomposition to provide comfort and efficiency has not been found to be implemented and would be a good study to explore. Learnings and prediction will be further evaluated for opportunities for new innovations and implementation.
Autonomous vehicles (AVs), driven by state‐of‐the‐art deep learning and computer vision technologies, can revolutionize current mobility systems in modern transportation. Driverless AVs are slowly integrated into public transportation with significant advantages for the passengers and public transport operators. However, passenger safety and comfort are two of the main challenges that need to be addressed. This work presents a complete in‐cabin monitoring framework with a suite of services, employing deep learning algorithms using a variety of onboard sensors at the edge. This proposed framework offers various innovative services aimed at enhancing security, monitoring passenger presence, accommodating diverse needs, and personalizing the passengers' travel experience, while also reducing the workload of human safety officers. Experimental results demonstrate the framework's effectiveness in identifying abnormal events with a high accuracy, employing multiple datasets and custom in‐cabin scenarios. Additionally, the system effectively conducts automated passenger counting and facial identification, ensuring real‐time responsiveness under diverse operational conditions. Overall, the novelty of this work lies in the framework's multimodal approach, integrating visual and audio analysis, to achieve robust performance across various scenarios, significantly contributing to the advancement of autonomous driving technologies.
No abstract available
Conventional in-vehicle safety systems often neglect real-time emotional monitoring, prediction, and passenger influence, leading to reactive rather than proactive interventions. This paper presents an emotion-aware cooperative driving system that combines facial emotion recognition, time-series emotion forecasting, and personalized music-based regulation. The system detects both driver and front-seat passenger emotions through a Vision Transformer (ViT) model, while a time-series model anticipates the driver's upcoming emotional state. A prioritization algorithm ensures driver emotions hold precedence, with passenger states considered when the driver is stable. Based on this prioritized emotional context, the system regulates the in-cabin atmosphere using music recommendations drawn from the driver's preferred artists via the Spotify API. Experimental results show robust real-time emotion classification (86.4% validation accuracy), proactive forecasting (81.6% predictive accuracy), and improved driver acceptance due to personalization. The proposed framework advances intelligent transportation by shifting from static monitoring toward predictive, human-centered, and non-intrusive emotional regulation, thereby enhancing both safety and user comfort.
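The prioritization rule described above (driver first, passenger only when the driver is stable) can be sketched in a few lines. The emotion labels and the notion of a "stable" set are illustrative assumptions; the paper does not publish its exact label taxonomy.

```python
# Hypothetical sketch of the driver-precedence prioritization rule:
# the regulation target is the driver unless the driver's detected
# emotion is in a "stable" set, in which case the passenger's state
# is considered. Labels and the stable set are assumed, not quoted.

def select_regulation_target(driver_emotion, passenger_emotion,
                             stable_states=("neutral", "happy")):
    """Return (occupant, emotion) whose state should drive the
    in-cabin regulation (e.g., the music recommendation)."""
    if driver_emotion not in stable_states:
        return ("driver", driver_emotion)
    return ("passenger", passenger_emotion)
```

Downstream, the selected emotion would seed a playlist query against the driver's preferred artists; keeping the rule explicit and deterministic makes the system's interventions predictable to its occupants.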
At the Human-AI Interaction group at the University of British Columbia, we investigate how to support Human-AI collaboration via AI artifacts that can understand relevant properties of their users (e.g., states, skills, needs) and personalize the interaction accordingly in a manner that preserves transparency, user control, and trust. In this talk, I will illustrate examples of our research in AI-driven personalization spanning areas such as User-Adaptive Visualizations, Intelligent Tutoring Systems, and Personalized Explainable AI.
This paper explores optimising in-cabin communication and user experience in future vehicles through dynamic, expressive interior lighting. Traditionally, vehicle interior lighting has been static and functional, but recent advancements have shifted towards mood-enhancing lighting. With AI technologies becoming integral to vehicles, sophisticated and intelligent interaction experiences are essential. This study uses the Volkswagen ID series' dynamic lighting feature to examine dynamic mood lighting as a tool for intuitive, non-intrusive communication. We assess the effectiveness and limitations of various lighting sequences through quantitative and qualitative methods. Our findings reveal key insights into user preferences and challenges, providing a framework for designing adaptive interior lighting systems. The results highlight the importance of user-centred design in enhancing driving experiences and offer directions for future research in automotive human-computer interaction.
With the advancement of automotive intelligence, the acoustic environment within the cabin has become a critical dimension of user experience. This paper addresses the evaluation needs of intelligent cabin audio systems by proposing a comprehensive testing and assessment methodology based on user experience. Initially, the functional and performance metrics of in-vehicle intelligent audio systems are analyzed to define evaluation dimensions, including wake-up capability, interaction performance, distortion rate, sound quality, and active noise cancellation efficiency, with corresponding testing protocols designed accordingly. Subsequently, audio signals are collected under various cabin conditions (such as driver and passenger positions, urban and highway driving scenarios) to quantify system performance through both subjective and objective evaluation methods. Experimental results demonstrate that this approach effectively assesses the performance of intelligent audio systems across multiple metrics. By applying weighted calculations, an overall system score is derived, providing a foundation for optimizing the design of intelligent audio systems. This study bridges a gap in intelligent cabin audio testing and holds significant implications for enhancing in-vehicle acoustic quality and driving comfort.
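The abstract above derives an overall system score by weighted calculation over the evaluation dimensions (wake-up capability, interaction performance, distortion rate, sound quality, active noise cancellation). A minimal sketch of such an aggregation, where the dimension names and weights are assumptions for illustration:

```python
# Hypothetical sketch of the weighted overall-score calculation for
# an intelligent cabin audio system. Dimension names and weights are
# illustrative; the paper does not publish its weighting table.

def overall_score(metrics, weights):
    """Weighted average of per-dimension scores (each on 0-100).
    metrics and weights must cover the same set of dimensions."""
    if set(metrics) != set(weights):
        raise ValueError("metrics and weights must share dimensions")
    total_weight = sum(weights.values())
    return sum(metrics[d] * weights[d] for d in metrics) / total_weight

# Example: two dimensions weighted equally average to 85.0.
example = overall_score(
    {"wake_up": 90.0, "sound_quality": 80.0},
    {"wake_up": 1.0, "sound_quality": 1.0},
)  # → 85.0
```

Normalizing by the weight sum keeps the score on the same 0-100 scale as the inputs, so weights can be tuned (e.g., emphasizing noise cancellation for highway scenarios) without rescaling the output.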
The final grouping comprehensively covers the technology stack and methodology of intelligent cockpit HMI design: from the underlying software/hardware architecture, sensor technology, and automated testing tools; through mid-level multimodal fusion interaction, visual-aesthetic optimization, and quantitative UX evaluation models; up to top-level emotional resonance, personalized adaptive strategies, and human-machine co-driving safety in the autonomous-driving era. The research shows a deep evolution from a plain "human-machine interface" toward a "third living space, empathetic companion, and safety barrier."