Unity3D Roaming Games
Automated Scene Generation Based on AI and 3D Reconstruction
This group of papers explores frontier techniques in computer vision and graphics, including the use of 3D Gaussian Splatting (3DGS), diffusion models, and layout-guided algorithms to automatically generate high-quality, consistent indoor and outdoor 3D roaming scenes from text, images, or panoramas, aiming to improve modeling efficiency and realism.
- TiP4GEN: Text to Immersive Panorama 4D Scene Generation(Ke Xing, Hanwen Liang, Dejia Xu, Yuyang Yin, Konstantinos N. Plataniotis, Yao Zhao, Yunchao Wei, 2025, Proceedings of the 33rd ACM International Conference on Multimedia)
- Scene4U: Hierarchical Layered 3D Scene Reconstruction from Single Panoramic Image for Your Immerse Exploration(Zilong Huang, Jun He, Junyan Ye, Lihan Jiang, Weijia Li, Yiping Chen, Ting Han, 2025, 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR))
- ExScene: Free-View 3D Scene Reconstruction with Gaussian Splatting from a Single Image(Tianyi Gong, Boyan Li, Yifei Zhong, Fangxin Wang, 2025, 2025 IEEE International Conference on Multimedia and Expo (ICME))
- DreamCraft: Interactive 3D Scene Creation from Editable Panorama in Virtual Reality(Cheng-Chih Tsai, Tse-Yu Pan, 2025, Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Posters)
- Soundscape-to-panorama: spatialize auditory perception by linking acoustic environment to panorama(Yonggai Zhuang, Yanni Gui, Teng Fei, 2025, International Journal of Digital Earth)
- Pano2Scene: 3D Indoor Semantic Scene Reconstruction from a Single Indoor Panorama Image(Wei Zeng, Sezer Karaoglu, T. Gevers, 2020, Proceedings of the British Machine Vision Conference 2020)
- DreamCube: 3D Panorama Generation via Multi-plane Synchronization(Yukun Huang, Yanning Zhou, Jianan Wang, Kaiyi Huang, Xihui Liu, 2025, ArXiv)
- FastScene: Text-Driven Fast 3D Indoor Scene Generation via Panoramic Gaussian Splatting(Yikun Ma, Dandan Zhan, Zhi Jin, 2024, ArXiv)
- Look Beyond: Two-Stage Scene View Generation via Panorama and Video Diffusion(Xueyang Kang, Zhengkang Xiang, Zezheng Zhang, K. Khoshelham, 2025, Proceedings of the 33rd ACM International Conference on Multimedia)
- Automatic 3D Indoor Scene Modeling from Single Panorama(Yang Yang, Shi Jin, Ruiyang Liu, S. B. Kang, Jingyi Yu, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition)
- PSGS: Text-driven Panorama Sliding Scene Generation via Gaussian Splatting(Xin Zhang, Shen Chen, Jiale Zhou, Lei Li, 2026, ArXiv)
- LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation(Shuai Yang, Jing Tan, Mengchen Zhang, Tong Wu, Gordon Wetzstein, Ziwei Liu, Dahua Lin, 2024, Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers)
- SceneCraft: Layout-Guided 3D Scene Generation(Xiuyu Yang, Yunze Man, Jun-Kun Chen, Yu-Xiong Wang, 2024, ArXiv)
- DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion(Weicai Ye, Chenhao Ji, Zheng Chen, Junyao Gao, Xiaoshui Huang, Song-Hai Zhang, Wanli Ouyang, Tong He, Cairong Zhao, Guofeng Zhang, 2024, ArXiv)
- Indoor Scene Reconstruction: From Panorama Images to CAD Models(Chongyang Luo, Bochao Zou, Xiang-wen Lyu, Haiyong Xie, 2019, 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct))
- 3D Panorama Based Scene Construction System for Digital Culture Centers(Qi Liang, Hanxi Wang, 2021, 2021 International Conference on Culture-oriented Science & Technology (ICCST))
- Panorama scene analysis with conic projection(Y. Yagi, S. Kawato, 1990, IEEE International Workshop on Intelligent Robots and Systems, Towards a New Frontier of Applications)
Virtual Simulation Teaching and Industrial Vocational Training Systems
These papers focus on using Unity3D to build virtual laboratories or training platforms for specific disciplines (medicine, machinery, seafaring, agriculture, electric power, etc.), using high-fidelity roaming to address hands-on training that is costly, hazardous, or resource-constrained, spanning campus teaching through the simulation of professional industrial processes.
- Design and Implementation of Virtual Campus Roaming System Based on Unity3d(Yuxuan Li, Hua Luo, Yiren Zhou, 2022, Journal of Physics: Conference Series)
- Research on virtual simulation system of beef cattle segmentation based on Unity3D(He Zhu, Yichen Ren, Yanxia Xing, Xinhua Pan, Chenhao He, Dong Chen, Tao Ma, 2024, 2024 International Conference on Computers, Information Processing and Advanced Education (CIPAE))
- Research on the Application Design of Computer Virtual Reality Technology in Animation(Jiaoyan Liu, 2022, 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA))
- Unity3D FPS TB CARE Game for Tuberculosis Awareness among Digital Native Children(Fahyu Dwi Pratiwi, Cindy Turusta, 2023, Indonesian Journal of Innovation Studies)
- Design and exploration of virtual marine ship engine room system based on Unity3D platform(Qing Zhang, Neng Chang, Kai Shang, 2019, Journal of Intelligent & Fuzzy Systems)
- Design and Implementation of a Virtual Magnetic Separation Laboratory System Based on Unity3D(禹瑾, 鄢曙光, 吕彤, 2024, Software Engineering and Applications)
- Research on Key Technologies in the Development of Virtual Campus Roaming Systems(陈晴, 杨蕾, 王飞, 2017, Computer Science and Application)
- Application of virtual training software based on database and Unity3D in practical installation teaching training(Xue Gao, Yongxin Liu, Le Lv, 2024, Proceedings of the 2024 2nd International Conference on Information Education and Artificial Intelligence)
- Design and Practice of Virtual Simulation Experimental Teaching in Colleges and Universities Based on Unity3D(Daigen Huang, 2024, Proceedings of the 2024 2nd International Conference on Information Education and Artificial Intelligence)
- A Virtual Learning Platform for Biomedical Laboratory Scientists Using Unity3D(Peihua Han, Guoyuan Li, S. Sunilkumar, Yanran Cao, 2023, 2023 11th International Conference on Control, Mechatronics and Automation (ICCMA))
- Experiment Teaching Design of Engine Casting Virtual Simulation Based on Unity3D(Lei Ma, Zhaoxin Yan, Junjie Xiong, Yong Huang, Junfu Zhang, Lian Zhang, 2024, Computer Applications in Engineering Education)
- Construction of Robotic Virtual Laboratory System Based on Unity3D(Mu Lin, Lijun San, Yu-jiao Ding, 2020, IOP Conference Series: Materials Science and Engineering)
- Virtual Lathe Machining and Assembly Simulation Training System(刘秀, 2025, Modeling and Simulation)
- A Unity3D-Based Fire Drill System for University Laboratories(郑晓静, 唐富, 鄢展锋, 2019, Computer Science and Application)
- Unity3D Serious Game Engine for High Fidelity Virtual Reality Training of Remotely-Operated Vehicle Pilot(C. Chin, Nurshaqinah B. Kamsani, X. Zhong, Rongxin Cui, Chenguang Yang, 2018, 2018 10th International Conference on Modelling, Identification and Control (ICMIC))
- Virtual Ship Environment Creation Method(Zhang Rukai, 2019, Journal of Physics: Conference Series)
- VREd: A Virtual Reality-Based Classroom for Online Education Using Unity3D WebGL(Ratun Rahman, Md. Rafid Islam, 2023, ArXiv)
- Construction and Key Technologies of a Virtual Simulation Experiment System for Stamping Forming(李秀, 范淑媛, 廖敦明, 孙飞, 2017, Modeling and Simulation)
- Simulation Design and Research of Flame Generation in Virtual Fire Scene Based on Unity3D(Li Dong, 2024, Transactions on Computer Science and Intelligent Systems Research)
- Development and Application of Virtual Assembly Training System of Mine Drilling Rig Based on Unity3D(Guoping Chen, Guoping Chen, Haiwu Ma, G. Liu, Shijie Song, 2023, Proceedings of the 2023 5th International Conference on Internet of Things, Automation and Artificial Intelligence)
- Research on Unity3D-Based Simulation of Switching Operations in a 500 kV Substation(顾捷, 2018, Computer Science and Application)
- SimNav-XR: An Extended Reality Platform for Mobile Robot Simulation Using ROS2 and Unity3D(Prakash Aryan, Sujala D. Shetty, V. Kalaichelvi, R. Karthikeyan, 2026, Frontiers in Robotics and AI)
- The Path Exploration of University Ideological and Political Courses Based on the Concept of Metaverse(M. Wang, Shaopeng Yu, Xiang Li, 2023, 2023 9th International Conference on Virtual Reality (ICVR))
Roaming Interaction Mechanisms, Perceptual Optimization, and Cybersickness Research
This group addresses the user's operating experience in virtual environments, including comparisons of input devices (gestures, smartphones, conventional controllers), physical feedback (haptic vests), visual guidance to mitigate cybersickness, and optimization of underlying collision-detection algorithms, with the goal of enhancing immersion and interaction fluency.
- Virtual Exploration: Seated versus Standing(Noah Coomer, Joshua Ladd, B. Sanders, 2018, No journal)
- Unity3D-based Virtual Click Interaction Implementation(Qi Jing, S. Du, Yi Qiu, 2023, Proceedings of the 2023 15th International Conference on Computer Modeling and Simulation)
- Alternating Sword Controller in First-Person Action Game using Fuzzy Logic for Adaptive Enemy(Arby Azyumardi Azra, Ardiawan Bagus Harisa, 2024, Journal of Games, Game Art, and Gamification)
- First person movement control with palm normal and hand gesture interaction in virtual reality(Chaowanan Khundam, 2015, 2015 12th International Joint Conference on Computer Science and Software Engineering (JCSSE))
- Mid-Air Interaction vs Smartphone Control for First-Person Navigation on Large Displays: A Comparative Study(S. Vosinakis, 2018, No journal)
- Effects of Third-Person Locomotion Techniques on Sense of Embodiment in Virtual Reality(Johanna Ulrichs, Andrii Matviienko, Luis Quintero, 2024, Proceedings of the International Conference on Mobile and Ubiquitous Multimedia)
- Using Visual Guides to Reduce Virtual Reality Sickness in First-Person Shooter Games: Correlation Analysis(Kwang-Ho Seok, YeolHo Kim, Wookho Son, Yoon Sang Kim, 2020, JMIR Serious Games)
- Integrated Immersion Design in Unity3D Games: Validating Narrative, Environmental, and Interactive Cohesion(Zijia Chen, 2025, 2025 IEEE 3rd International Conference on Sensors, Electronics and Computer Engineering (ICSECE))
- Enhanced Player Interaction Using Motion Controllers for First-Person Shooting Games in Virtual Reality(P. Krompiec, Kyoungju Park, 2019, IEEE Access)
- Self-Transforming Controllers for Virtual Reality First Person Shooters(Andrey Krekhov, K. Emmerich, Philipp Bergmann, S. Cmentowski, J. Krüger, 2017, Proceedings of the Annual Symposium on Computer-Human Interaction in Play)
- A Fast Parallel Processing Algorithm for Triangle Collision Detection Based on AABB and Octree Space Slicing in Unity3D(Kunthroza Hor, Nak-Jun Sung, Jun Ma, Min-Hyung Choi, Min Hong, 2025, IEEE Access)
- Tangible Avatar : Enhancing Presence and Embodiment During Seated Virtual Experiences with a Prop-Based Controller(Justine Saint-Aubert, F. Argelaguet, Anatole Lécuyer, 2023, 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct))
- Haptic feedback in first person shooter video games(Ulrik Söderström, William Larsson, Max Lundqvist, Ole Norberg, Mattias Andersson, Thomas Mejtoft, 2022, Proceedings of the 33rd European Conference on Cognitive Ergonomics)
- On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations(S. Vosinakis, Anna Gardeli, 2019, Information)
- Player performance with different input devices in virtual reality first-person shooter games(Yasin Farmani, Robert J. Teather, 2017, Proceedings of the 5th Symposium on Spatial User Interaction)
- Performance of modern gaming input devices in first-person shooter target acquisition(A. Zaranek, Bryan Ramoul, Huaming Yu, Yiyu Yao, Robert J. Teather, 2014, CHI '14 Extended Abstracts on Human Factors in Computing Systems)
- An Immersive 3D Navigation System Using 3D Gaussian Splatting(Ming-Yi Chen, I-Cheng Chang, Jinwei Chen, Bing Yang, Cun-Fang Wun, 2024, 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan))
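Several entries in this group, notably the AABB/octree parallel-processing work, center on broad-phase collision detection. The core test those systems accelerate can be sketched as follows; this is a Python stand-in for Unity C# logic under illustrative assumptions, not code from any of the papers.

```python
# Broad-phase collision detection sketch: two axis-aligned bounding
# boxes (AABBs) intersect only if their intervals overlap on all three
# axes. Octree space slicing exists precisely to skip this pairwise
# test for objects that sit in distant cells.

def aabb_overlap(min_a, max_a, min_b, max_b):
    """min_*/max_* are (x, y, z) corner tuples of each box."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
               for i in range(3))

# Touching boxes count as overlapping; well-separated boxes do not.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (1, 0, 0), (2, 1, 1)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (5, 5, 5), (6, 6, 6)))  # False
```

Because the per-axis test is so cheap, the cost of collision detection is dominated by how many pairs must be tested, which is why the spatial-partitioning papers in this group focus on octree cell assignment rather than the overlap test itself.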
Digital Exhibition, Cultural Heritage Preservation, and Virtual Tourism
These papers focus on digitally reconstructing real physical environments and cultural heritage sites. Applications include virtual museums, web-based navigation of cultural heritage, geopark science outreach, and tourism experiences enhanced through environmental storytelling, emphasizing cultural transmission and convenient remote access.
- Three-Dimensional Documentation and Virtual Web Navigation System for the Indoor and Outdoor Exploration of a Complex Cultural Heritage Site(M. Aricò, G. Dardanelli, M. La Guardia, M. Lo Brutto, 2024, Electronics)
- Design and Implementation of a VR-Based Evolution Simulation System for the Fengjie Xiaozhai Tiankeng(王勇, 谭德军, 刘满乾, 朱钱洪, 杨柳清, 2019, Advances in Geosciences)
- An Augmented and Virtual Reality based Application for Enhanced Campus Exploration(Gururaj K. S., Annapoorna C. L., Shreya G. S., Shrutha S. A., Smruti Hegde, 2024, International Journal of Innovative Science and Research Technology (IJISRT))
- Design and Implementation of Virtual Museum of Inkstone Culture Based on Unity3D(Di Fan, Hongyun Liu, Yujie Chen, Ying Chen, Ruishu Guo, Nongliang Sun, 2024, Proceedings of the 2024 7th International Conference on Computer Information Science and Artificial Intelligence)
- Unity3D Design of an Inkstone Culture Virtual Museum(陈瑛, 刘洪云, 范迪, 陈玉杰, 郭瑞姝, 孙农亮, 2025, Computer Science and Application)
- A Unity-Based Intelligent Interactive Virtual Cultural Relics Exhibition Hall(夏方方, 郭润甲, 吕镇宇, 刘芳丽, 郭子俊, 2023, Computer Science and Application)
- Virtual Immersion in Underground Quarries: An Innovative Exploration of Historical and Cultural Heritage(Nicolas Bremard, M. Dubois, A. Gauthier, Florent Berthaut, 2025, 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW))
- Design and Implementation of a Biological Virtual Display System Based on 3DMax and Unity3D(Junyao Bai, Caojia Xia, Zijun Zhou, Yan Zhu, 2024, 2024 2nd International Conference on Mechatronics, IoT and Industrial Informatics (ICMIII))
- Enhancing Tourism Experiences Through Immersive Technologies: The Role of Virtual and Augmented Reality(Sanchit Vashisht, Bhanu Sharma, Srinivas Aluvala, 2024, 2024 9th International Conference on Communication and Electronics Systems (ICCES))
- Research on Virtual Scene Design Methods for Unity3D Games(Jiwen Zhang, 2024, Proceedings of the 3rd International Conference on Art Design and Digital Technology, ADDT 2024, May 24–26, 2024, Luoyang, China)
- The Design and Implementation of Interactive Games Based on Unity3D(Jizheng He, 2025, 2025 IEEE 3rd International Conference on Sensors, Electronics and Computer Engineering (ICSECE))
- From Picks to Pixels: An Exploration of Virtual Reality in Geoscience Education(Jacob Young, Matthew Wood, Nadia Pantidi, Dene Carroll, J. Crampton, Cliff Atkins, 2025, 2025 IEEE Conference Virtual Reality and 3D User Interfaces (VR))
Digital Twins, IoT Integration, and Intelligent Monitoring
This group demonstrates deep applications of Unity3D in professional engineering and intelligent systems, such as smart workshop layout, interior-design visualization, and sensor-driven IoT monitoring (e.g., agriculture and plant care), realizing virtual-real interaction and state monitoring.
- Exploration of the Integration of Virtual Simulation and Visualization Technology in Interactive Landscape Design(Ting Jiang, Chao Jiang, 2024, 2024 International Conference on Interactive Intelligent Systems and Techniques (IIST))
- Research on a Virtual Simulation System for an Ultra-Precision Spherical Plain Bearing Workshop(吴尽, 2025, Modeling and Simulation)
- Creating Immersive Digital Twins of Terrestrial Planetary Analogs With Multimodal Sensing and Game Engines for Virtual Exploration(Leonie Bensch, Cody Paige, D. D. Haddad, Fangzheng Liu, Nathan Perry, G. Olivier, J. Todd, J. Paradiso, 2025, IEEE Pervasive Computing)
- Design and Implementation of a VR-Based 3D Visual Intelligent Home Decoration System(詹梦军, 李雯昕, 罗嗣根, 李诗敏, 刘灵辉, 陈雨霏, 2021, Computer Science and Application)
- Design and Application of Modern Interior Design Style System Based on Unity3D(Xiaoyu Chu, Rui Xu, Guangjun Wang, 2024, Journal of Electronic Research and Application)
- Development and application of virtual display system for interior decoration design based on Unity3D(Huatao Zou, 2025, No journal)
- IoT + Unity3D Virtual Reality Remote Intelligent Monitoring System for Flower Care(陈麒宇, 郭仁春, 张高健, 2016, Computer Science and Application)
- IoT Intelligent Irrigation Control System(冯雨轩, 王圣玥, 杨丹丹, 郭仁春, 邢杰, 2017, Computer Science and Application)
- Digital Kitchen Remodeling: Editing and Relighting Intricate Indoor Scenes from a Single Panorama(Guanzhou Ji, A. Sawyer, Srinivasa G. Narasimhan, 2025, ArXiv)
- Unity3D-based conference room scene preparation and construction(Xiangyu Zhang, 2023, Applied and Computational Engineering)
First-Person Shooter (FPS) Game Development and AI Training Environments
These papers focus on the underlying implementation of FPS-style roaming games, including AI pathfinding, finite state machines, complex 3D arena test environments for reinforcement-learning agents (gaming AI), and camera-pose dataset generation.
- Development of a first person shooter game controller(Robinson Diaz, John Prieto, J. Pardo, Camilo Ariza-Zambrano, Alvaro J. Uribe-Quevedo, Enit Godoy, B. Perez-Gutierrez, 2015, 2015 IEEE Games Entertainment Media Conference (GEM))
- Research and Implementation of Key Technologies of Android Platform Games Based on Unity3D(Yixin Chen, Zhuoxuan Shen, Feng Xiao, 2024, 2024 IEEE 4th International Conference on Electronic Technology, Communication and Information (ICETCI))
- Camera Pose Generation Based on Unity3D(Hao Luo, Wenjie Luo, Wenzhu Yang, 2025, Information)
- Hierarchical controller learning in a First-Person Shooter(N. V. Hoorn, J. Togelius, J. Schmidhuber, 2009, 2009 IEEE Symposium on Computational Intelligence and Games)
- Design and Implementation of “Winning Luding Bridge” Immersion FPS Game Based on Unity3D Technology(Xiaofeng Qiu, Huaqun Liu, Ke Ren, Mingyu Zhang, Huimin Yan, Yang Lu, 2021, 2021 5th International Conference on Artificial Intelligence and Virtual Reality (AIVR))
- Developing a Single-Player Sci-Fi First-Person Shooter in Unity3D(Keang Lyu, 2023, Highlights in Science, Engineering and Technology)
- WILD-SCAV: Benchmarking FPS Gaming AI on Unity3D-based Environments(Xi Chen, Tianyuan Shi, Qing Zhao, Yuchen Sun, Yunfei Gao, Xiangjun Wang, 2022, ArXiv)
- A Roaming Game in 3D Reconstructed Campus(Yunrui Zhu, Xun Luo, Rui Gao, Y. Wang, Chu Shi, 2018, 2018 International Conference on Virtual Reality and Visualization (ICVRV))
- VR Virtual Driving Game: Reinventing the Traffic Rules Learning Panorama(Jiaxian Li, 2025, Proceedings of the 2nd International Conference on Data Science and Engineering)
Assistance for Special Populations, Health Promotion, and Narrative Innovation
This group explores the public-good and psychological value of roaming technology, including audio navigation for people with vision impairments, interaction interfaces for people with physical disabilities, stress relief through nature roaming, and narrative innovation in text adventures and time-manipulation mechanics.
- VStroll: An Audio-based Virtual Exploration to Encourage Walking among People with Vision Impairments(Gesu India, Mohit Jain, Pallav Karya, Nirmalendu Diwakar, Manohar Swaminathan, 2021, Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility)
- Designing a First Person Shooter Game for Quadriplegics(Atieh Taheri, Ziv Weissman, 2021, Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems)
- Evaluating Freeform Creativity versus Goal-Directed Exercises in VR: An Examination of Immersion Levels and Effectiveness(Yu-Min Fang, 2023, 2023 IEEE 6th International Conference on Knowledge Innovation and Invention (ICKII))
- Phone-based virtual exploration of green space increases positive affect in students with test anxiety: a pre-post experimental study with qualitative insights(Alison O’Meara, Tadgh Connery, Jason Chan, Cleidi Hearn, M. Cassarino, Annalisa Setti, 2024, Virtual Reality)
- Research on roaming and interaction in VR game based on Unity 3D(Juan Wu, 2020, 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL))
- Design and Development of a Text Adventure Game Based on Unity3D(刘宁晖, 宋瑾钰, 2022, Software Engineering and Applications)
- ChronoShore: Diegetic Temporal Exploration in a Simulated Virtual Coast Environment(Yuen C. Law, Lucca Troll, Daniel Zielasko, 2024, Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology)
- An In-Depth Exploration of the Effect of 2D/3D Views and Controller Types on First Person Shooter Games in Virtual Reality(D. Monteiro, Hai-Ning Liang, Jialin Wang, Hao Chen, N. Baghaei, 2020, 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR))
- SONIA: an immersive customizable virtual reality system for the education and exploration of brain networks(Owen Hellum, Christopher M Steele, Yiming Xiao, 2023, ArXiv)
- Research on the Metaverse Model of a University Based on Unity(Dongmei Luo, Tengfei Zhang, Shanshan He, Wen Sun, Xueyou Sun, 2024, Proceedings of the 2024 International Conference on Computer and Multimedia Technology)
The groupings above reveal the layered structure of Unity3D roaming-game research: at the technical level, AIGC-driven automated scene generation and low-level interaction algorithms (collision, perception) provide the foundation; at the application level, the work spans education and training, digital twins, cultural heritage preservation, and other diverse fields; and at the social-value level, research has extended to assisting special populations and psychological intervention. The overall trend is evolving from simple visual roaming toward multimodal perception, intelligent generation, and cross-industry integration.
A total of 101 related publications.
This project takes North China University of Science and Technology as its subject and develops a virtual campus roaming system with 3D campus display, online virtual roaming, and information management and service functions. The system presents the campus environment from every angle with strong interactivity and immersion, giving users a sense of presence as they roam the virtual campus, and can play an important role in promoting the university's image and its informatized management. Built on the Unity 3D game engine, the system applies 3D modeling techniques, C# scripting for interaction development, and computer networking technologies, with in-depth study of 3D scene modeling and optimization, character-scene interaction, and collision detection.
To address the obstacles that technological change poses to the inheritance of inkstone culture, this paper designs and implements an inkstone culture virtual museum. A Chinese-style museum scene is built in 3ds Max; a backpack system based on the MVC architecture presents the museum's inkstone collection; and ScriptableObject storage lists hold the collection's metadata and lightweight inkstone models. Using collision detection, human-computer interaction, and animation system design, the system implements in-museum scene roaming, interaction, data management, and cultural-creative game content. The interface is designed from the user's perspective, and gamified cultural-creative content is added to support the digital preservation and transmission of inkstone culture.
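The abstract above stores collection metadata in ScriptableObject lists so that exhibit data stays decoupled from the scene objects that display it. A rough Python analogue of that data-driven pattern is sketched below; the field names, ids, and asset paths are illustrative assumptions, not the paper's actual schema.

```python
# Data-driven exhibit registry, mimicking how a Unity ScriptableObject
# list separates inkstone metadata from scene objects. All field names
# and sample records are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExhibitRecord:
    item_id: str
    name: str
    dynasty: str
    mesh_path: str   # lightweight reference to the model, not the mesh itself

REGISTRY = {
    r.item_id: r for r in [
        ExhibitRecord("yan-001", "Duan inkstone", "Qing", "models/yan001.fbx"),
        ExhibitRecord("yan-002", "She inkstone", "Song", "models/yan002.fbx"),
    ]
}

def lookup(item_id):
    """The backpack UI (the 'view' in MVC) reads records by id."""
    return REGISTRY[item_id]

print(lookup("yan-002").dynasty)  # Song
```

Keeping only a path to each mesh in the record is what makes the storage "lightweight": the heavy model data is loaded on demand when an exhibit is actually viewed.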
Using virtual reality to reconstruct the evolution of the Xiaozhai Tiankeng in Fengjie, this work offers a new means of publicizing and studying the geological heritage and ecology of the national geopark, supporting both geological research and science popularization. Sound and photorealistic models are presented in an immersive, all-around display, and scene interaction lets viewers experience the spectacular geological relics of the Fengjie national geopark more directly and realistically.
Because home decoration at home and abroad still follows traditional practice and struggles to meet the needs of a new generation of home-decoration workflows, this work designs and develops a VR-based 3D visual intelligent home decoration system. Floor plans, furniture, and other products are modeled in 3ds Max, textured with maps designed in Photoshop, exported as FBX files, and imported into Unity, where the system's functions are implemented; finally, VR hardware is connected for testing and release. The system supports user-customized decoration: placing and moving furniture, changing materials, and roaming the scene, culminating in an immersive experience of the virtual interior through a VR headset. Free of time and space constraints, it shortens design and walkthrough cycles, benefiting consumers, developers, and designers alike.
To address laboratory equipment shortages and safety issues, this paper applies virtual reality technology: 3D models of the experimental equipment are built in 3ds Max and SolidWorks, and a magnetic separation laboratory system is developed on the Unity3D platform. By simulating the magnetic separation experiment comprehensively and from multiple angles, the system establishes a new teaching framework combining the Internet, on-site experiments, and virtual experiments. It realistically reproduces the laboratory's 3D scenes, instruments, and procedures, improves the realism of the experiment, overcomes limitations in experimental content, time, resources, and cost, and raises students' interest and learning efficiency. Students can operate the virtual magnetic separation system immersively, deepening their memory of the procedures and improving teaching outcomes.
To address the low efficiency, poor interactivity, lack of remote monitoring, and visually unappealing interfaces of traditional manual flower care, this work designs an IoT + Unity3D interactive intelligent virtual-reality remote monitoring system for flower care, built on the Arduino open hardware platform and the Unity3D mobile development platform. The system consists of a base-station server and a remote client. The server uses soil-moisture and DHT temperature-humidity sensors to measure soil moisture and indoor temperature and humidity; based on per-plant soil-moisture setpoints, and when the ambient temperature allows, the controller actuates a solenoid valve to regulate watering. The remote client provides virtual roaming, real-time information display, and remote control. Server and client exchange data through a wireless router. Tests show the system can remotely monitor plant growth through an interactive 3D virtual-reality scene and adjust watering automatically, making flower care more practical and engaging.
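The control rule the abstract describes, comparing measured soil moisture against a per-plant setpoint and watering only when temperature permits, can be sketched in a few lines. The thresholds and the minimum-temperature value below are illustrative assumptions, not the paper's actual parameters.

```python
# Threshold irrigation control: open the solenoid valve when the soil
# is drier than the plant's setpoint AND the ambient temperature allows
# watering. Values are made-up examples for illustration.

def valve_should_open(soil_moisture, setpoint, air_temp, min_temp=5.0):
    """Return True when the controller should actuate the valve."""
    return soil_moisture < setpoint and air_temp >= min_temp

print(valve_should_open(18.0, 30.0, 22.0))  # True: dry soil, warm room
print(valve_should_open(35.0, 30.0, 22.0))  # False: soil already moist
print(valve_should_open(18.0, 30.0, 2.0))   # False: too cold to water
```

In the described system this decision runs on the Arduino side, while the Unity3D client only visualizes sensor readings and forwards manual override commands.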
This project uses Unity3D, a multi-platform game development tool, to build a virtual simulation system for switching operations in a 500 kV substation with strong practical value. Using the Unity3D engine and 3ds Max, together with C# and JavaScript scripting, it implements the scene design and interactive roaming of a virtual 500 kV substation. The resulting simulation system is extensible and highly realistic and can model the electrical equipment layout of various substations. Practicing switching operations in the virtual substation satisfies the training requirements for substation switching.
In recent years, museums built on virtual reality technology have become a focus of museum development, government investment, and research worldwide. In the digital era, applying VR to modern museum construction has real practical significance. As an extension of the traditional museum, the virtual cultural relics exhibition hall changes how historical artifacts are displayed: it gives users a new way to access digital resources, a new form of exhibiting and experiencing cultural heritage, and new ideas for artifact conservation. Departing from conventional virtual-museum construction methods, this paper proposes a method for building an intelligently interactive virtual cultural relics exhibition hall, integrating an AI-driven virtual guide, saliency detection, and eye tracking to strengthen user interaction and enrich the visiting experience, while providing a new means of displaying and protecting national cultural heritage and actively advancing the digitization of cultural resources.
With the development of intelligent manufacturing, virtual simulation offers an efficient visual approach to workshop layout optimization and production scheduling. Taking an ultra-precision spherical plain bearing workshop as the subject, this paper develops a Unity3D-based virtual simulation system integrating scheduling and layout optimization, combining a genetic algorithm with the standard NSGA-II and an improved variant to achieve dynamic scheduling and real-time layout optimization from actual workshop data. The system uses a three-layer "modeling, algorithm, interaction" architecture: the modeling layer builds detailed equipment models in SolidWorks and lightens them in 3ds Max; the algorithm layer integrates an SLP-GA hybrid layout model and NSGA-II-style optimizers for multi-objective optimization of layout and scheduling; the interaction layer implements functional modules with UGUI and C# scripts, supporting layout reconfiguration, production-process simulation, and scene roaming. A workshop case study shows the system can visually validate scheduling plans and display layout-optimization results in 3D, markedly improving logistics efficiency and production coordination. The results offer a technical path for the intelligent upgrading of discrete manufacturing workshops, with both theoretical and practical value.
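At the heart of NSGA-II, as used in the workshop system above, is ranking candidate layouts or schedules into Pareto fronts. A minimal sketch of that dominance test and the first front follows; minimization on all objectives is assumed, and the objective vectors are made-up examples rather than data from the paper.

```python
# Pareto dominance and the first non-dominated front, the core ranking
# step of NSGA-II. Objectives are minimized; sample points are invented.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (minimization on all objectives)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset (front 1 of NSGA-II's sorting)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Objectives: (material-handling cost, makespan) for candidate layouts.
layouts = [(4, 9), (5, 5), (7, 3), (6, 6), (8, 8)]
print(pareto_front(layouts))  # [(4, 9), (5, 5), (7, 3)]
```

The full algorithm repeats this sorting over successive fronts and adds crowding-distance selection; this sketch shows only the dominance logic that distinguishes NSGA-II from a single-objective genetic algorithm.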
In traditional propeller open-water and cavitation test teaching, various constraints mean students can only observe specific test content at fixed times and places with fixed equipment, severely limiting the cultivation of hands-on and engineering-innovation skills. Virtual simulation overcomes these shortcomings and makes an intelligent teaching system unconstrained by equipment, time, or space feasible. Based on the Unity 3D engine and 3ds Max modeling, with a cavitation tunnel as the prototype, this work uses Visual Studio Code and the C# scripting language to build a virtual simulation system for propeller open-water and cavitation tests, implementing first-person free roaming, hands-on equipment assembly, close-up inspection of the apparatus, and test data analysis. The ship-propulsion test system helps students better understand the principles of both tests, the equipment's functions, and the application value of the results, exercising their practical and analytical abilities while substantially saving teaching resources and improving teaching quality and learning efficiency, with strong prospects in maritime education.
This study develops a custom dynamic octree mesh subdivision algorithm to simulate material removal in arbitrary directions. Targeting the high operational risk, limited equipment, and limited teaching effectiveness of mechanical engineering training in universities, and taking the widely used CA6140 horizontal lathe as the subject, a high-fidelity virtual machining and assembly training system is built on Unity3D on top of this algorithm, covering lathe machining simulation, motion logic and assembly simulation, and UI interaction as core modules. Tests show the system accurately reproduces the turning process and supports scene roaming and disassembly/assembly training, effectively improving the safety and immersion of practical training. The results give universities a low-cost, repeatable virtual training solution of practical significance for cultivating high-caliber intelligent-manufacturing talent.
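The idea behind octree-based material removal, a filled volume recursively split into octants with only cells touched by the tool refined or deleted, can be sketched as follows. This is a simplified illustrative model with a spherical cut region and a fixed depth limit, not the paper's algorithm.

```python
# Dynamic octree material-removal sketch: keep coarse cells the cut
# never touches, delete cells fully inside the cut, and subdivide
# boundary cells down to a depth limit. Geometry is illustrative.

def sphere_box_intersect(c, r, center, half):
    """Does the sphere (c, r) touch the axis-aligned cube (center, half)?"""
    d2 = 0.0
    for i in range(3):
        lo, hi = center[i] - half, center[i] + half
        v = min(max(c[i], lo), hi)        # closest point of box to c
        d2 += (c[i] - v) ** 2
    return d2 <= r * r

def box_inside_sphere(c, r, center, half):
    """Is the cube entirely inside the sphere? (check farthest corner)"""
    d2 = sum((abs(center[i] - c[i]) + half) ** 2 for i in range(3))
    return d2 <= r * r

def remove(center, half, c, r, depth=3):
    """Surviving leaf cells (center, half) after carving out sphere (c, r)."""
    if not sphere_box_intersect(c, r, center, half):
        return [(center, half)]           # untouched: keep coarse cell whole
    if box_inside_sphere(c, r, center, half):
        return []                         # fully cut away
    if depth == 0:
        return []                         # approximate boundary cells as removed
    h = half / 2                          # boundary cell: subdivide into octants
    kids = [(center[0] + dx * h, center[1] + dy * h, center[2] + dz * h)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
    return [leaf for k in kids for leaf in remove(k, h, c, r, depth - 1)]

# A cut far from the unit cube leaves it as one coarse cell.
print(remove((0.0, 0.0, 0.0), 1.0, (10.0, 0.0, 0.0), 0.5))
# A cut grazing one face produces many fine cells near the boundary.
print(len(remove((0.0, 0.0, 0.0), 1.0, (1.0, 0.0, 0.0), 0.5)))
```

The "dynamic" aspect in the paper, refining only along the moving tool path each frame, follows the same keep/delete/subdivide decision applied incrementally rather than in one pass.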
Traditional orchard cultivation is inefficient and labor-intensive; combining IoT technology with traditional orcharding helps improve management efficiency. Using an STM32-series microcontroller and a 2.4 GHz wireless module together with the Unity3D mobile development platform, this work designs an IoT + Unity3D interactive intelligent virtual-reality remote monitoring and control system for orchard cultivation. The system has a lower layer and an upper layer. The lower layer uses soil-moisture and air temperature-humidity sensors to measure soil and ambient conditions; based on per-tree soil-moisture setpoints, the controller actuates a solenoid valve to regulate watering. The upper layer builds a 3D virtual scene with roaming, real-time monitoring, and information display. The two layers communicate over a defined protocol; intelligent irrigation is configured from professional fruit-cultivation data, and remote manual control lets managers check data and trigger watering at any time, reducing the difficulty of orchard care.
Unity, a game engine widely used by developers, provides powerful support for immersive, interactive player experiences. Based on the Unity engine, this paper designs and builds a text adventure game, "Sword" (《剑》). 3D scenes are modeled in 3ds Max, game assets are produced in Adobe Photoshop, and game logic is written in C# in Visual Studio. Ink-wash painting aesthetics are deftly woven into the game design, and varied interactions combined with 3D scene roaming and voice interaction give players an immersive experience.
Fire accidents in university laboratories have caused major economic and human losses in recent years. Traditional fire-safety education relies mostly on videos, lectures, and on-site drills, which rarely raise vigilance about laboratory accidents and consume substantial manpower and resources. This paper builds laboratory models in 3ds Max and designs accident scenarios in Unity3D to create a university laboratory fire-drill system. The system has two roles: student and firefighter. Students learn basic laboratory safety through interactive emergency-response actions triggered in the system; firefighters can inspect the system to understand the laboratory hazard clearly and identify the best rescue route. By simulating hazardous laboratory scenes in 3D, the system strengthens the user's sense of presence and improves safety-education outcomes.
To address the high cost, scarcity, and safety risks of mechanical engineering lab equipment in universities, this paper builds a highly realistic, strongly interactive virtual simulation experiment system for stamping forming based on Unity3D. Through research on key technologies, model construction and optimization, system roaming, and collision detection, a virtual stamping experiment system with demonstration, operation, and cognition modes is designed. Based on the principles of the sheet-blanking clearance experiment, the system simulates the full metal-sheet blanking process in detail, implementing process demonstration, die disassembly display, equipment familiarization, and hands-on operation, with good results in experimental teaching.
The metaverse model is a three-dimensional spatial model that uses virtual reality technology to enable interactive operation between users and a virtual environment. This article constructs a metaverse model of a university based on the Unity3D game engine and C4D, combining video-game and virtual reality technology to improve the model's realism. First, the campus environment of a university is comprehensively photographed and measured. Second, the campus scene, including buildings, roads, classrooms, and laboratories, is fully modeled through Unity's graphical user interface (GUI) tools. Finally, a roaming function is added in the Unity3D engine, the physics engine is used to simulate elastic collisions between rigid bodies, and collision accuracy is verified experimentally to achieve third-person campus roaming. The educational metaverse model not only provides innovative teaching methods but also serves as a platform for communication and collaboration between students and teachers.
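The campus model above verifies the physics engine's elastic rigid-body collisions. For the one-dimensional case, conservation of momentum and kinetic energy fixes the post-collision velocities, which makes a convenient accuracy check; this is a standard physics sketch under illustrative values, not code from the paper.

```python
# 1D elastic collision: from conservation of momentum and kinetic
# energy, the post-collision velocities are
#   v1' = ((m1 - m2) v1 + 2 m2 v2) / (m1 + m2)
#   v2' = ((m2 - m1) v2 + 2 m1 v1) / (m1 + m2)

def elastic_1d(m1, v1, m2, v2):
    """Return (v1', v2') after a perfectly elastic head-on collision."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Equal masses exchange velocities, a classic sanity check.
print(elastic_1d(1.0, 3.0, 1.0, -1.0))  # (-1.0, 3.0)
```

A simulation's collision accuracy can be verified the same way the experiment in the abstract suggests: compare engine output against these closed-form values and confirm momentum and energy are conserved within tolerance.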
Unity3D is a cross-platform game development engine; its built-in application programming interface (API), combined with Visual Studio development tools, supports efficient development of visual simulation systems. Educational games aim to meet players' psychological needs while converting the satisfaction players seek into motivation to learn the educational content, with learning outcomes serving as the channel through which those needs are met — a process that transforms psychological needs into learning motivation. Based on the EGL framework and the Unity3D engine, this paper develops a 3D animal-kingdom roaming learning game.
Based on a real ship's structure and equipment, a game engine is used to construct a three-dimensional visual scene. Models are built in MultiGen Creator and, after format conversion, driven by Unity 3D to complete the virtual ship platform. Control logic such as roaming and navigation logic allows users to roam and navigate the virtual scene. The system offers strong realism, a friendly interface, and interactivity, meeting the needs of information-based teaching. Using Unity3D's cross-platform features, PC and mobile versions are developed to realize a networked VR training platform.
The present study focuses on the development of the TB CARE game, aiming to address the limited knowledge about tuberculosis (TB) in the community, particularly among children. Considering the digital-native nature of Generation Z, games are adopted as an appropriate medium for dissemination. Employing the observation method, specifically black-box testing, the study evaluates the game's performance through tests and questionnaires. The findings indicate a significant success rate of approximately 93.5%, signifying the game's efficacy relative to the projected outcomes. Highlights: development of an engaging Unity3D FPS TB CARE game to address the TB knowledge gap among children; successful black-box testing, with approximately 93.5% game feasibility; promising potential for future advancement alongside evolving science and technology.
With the rapid development of Internet technology and the maturity of 5G, virtual reality is gradually entering public view, and its fields of application keep expanding. Virtual reality technology and head-mounted displays can maximize users' immersion and sense of authenticity. In this paper, an immersive virtual campus roaming system is realized by creating models in 3ds Max, building scenes in Unity 3D, writing human-computer interaction scripts in C#, and using an Action One headset, taking Nanchang Institute of Technology as an example.
With the rise of games as the "ninth art" and the diversified development of interactive entertainment forms, creating a deep immersive experience has become one of the core goals of game design. However, existing interactive games often face problems such as insufficient interaction depth, fragmented narratives, and poor immersion, which limit the player's experience. To address this challenge, this study takes the immersive interactive game "Identity: Unknown" developed with the Unity3D engine as the research object, focusing on the optimization effect of interaction design on the player's immersion. By constructing a narrative framework centered on "a protagonist with amnesia exploring a closed room", core modules such as environmental physical interaction, multi-step item triggering, dynamic audio-visual effects, and exploratory plot were realized. The study innovatively adopted environmental narrative techniques (such as connecting the plot through interactive music players, tapes, cameras, etc.), and designed a control experiment to verify the effectiveness of interaction design: inviting 20 players to participate in two sets of control tests (simplified interaction version vs complete interaction version), and conducting quantitative analysis of satisfaction ratings. The results show that the optimized interaction design (such as layered sound effects, dynamic lighting, multi-step item interaction) significantly improved immersion scores, narrative coherence, and operational smoothness, proving that refined interaction elements can effectively enhance the immersive experience of the game. This study provides a quantifiable optimization path for interactive game design.
Immersion is the cornerstone of compelling 3D game experiences. However, many productions lack a solid conceptual framework and tight integration among these elements, resulting in fragmented experiences that fail to fully absorb players. This paper investigates systematic design approaches to enhance immersion in 3D games through cohesive integration of environmental detail, interactive challenge, and narrative devices. In this study, a first-person 3D prototype developed in Unity3D—set aboard a vintage train with a time-loop mechanic as a design highlight—serves to validate these approaches. The development pipeline employs Unity3D for real-time rendering and physics, Blender for high-fidelity asset creation, and Generative AI for rapid concept-art prototyping. Spatial audio, dynamic soundscapes, and puzzle-driven progression converge to amplify sensory, challenge, and imaginative immersion. Preliminary usability testing (n = 30) reveals significant increases in presence and engagement metrics. The study provides practical guidelines and a reusable methodology for developers aiming to craft deeply immersive 3D interactive narratives in constrained settings, thereby informing future immersion-focused game design.
This paper studies the design and implementation of an immersive FPS game in Unity themed on the battle to seize Luding Bridge. Combining virtual reality technology with this historical event, the work innovatively pairs VR with "red culture," using immersion to let users experience its meaning firsthand. The 3ds Max modeling tool is used for scene reconstruction and Unity for interaction design, creating the effect of traveling through time and space. The battle of Luding Bridge is re-enacted so that users can shoot, lay planks, and perform other interactive actions from a first-person perspective, observing the scene visually and using body movements to deepen the impression of this revolutionary history, with technology in the service of culture.
This paper is based on an analysis of the public's needs for VR game experiences. Considering the application and development of virtual reality technology in the games field in recent years, we argue that a game based on Unity 3D can better meet these needs, letting users experience the game in their own context through virtual reality devices. The system is a first-person shooting game that realizes a series of interactive roaming functions: the user attacking enemies, switching weapons by key, enemies automatically pathfinding to locate and attack the user, picking up equipment, etc. The system offers strong operability and interaction, achieving both interactivity and entertainment in an adventure game. This paper mainly introduces how virtual reality technology is used to realize interactive virtual roaming in a VR game.
With the constant change of fashion trends, interior design styles are changing day by day. Based on Unity3D technology, this paper develops a system for modern interior-style design and application. Taking the residential interior as a case study, the interior style design is achieved through 3D modeling and texture rendering and then combined with the Unity3D engine to achieve scene roaming and interactive design. The system enables designers to express design concepts more intuitively and efficiently and also improves customer participation and satisfaction. Through the experience of designers and customers, the system is verified to have more practical value than traditional interior design solutions.
Addressing the abstract concepts and challenges in biology teaching that are difficult for students to grasp, we have developed a biological virtual display system using the latest 3D virtual display technology in conjunction with 3DMax and Unity3D. Biological models and venues are modelled using 3DMax software, while Unity3D is utilised to design various functional modules of the display system. This allows users to independently select biological models, view animations and textual displays, and freely roam within the exhibition hall, providing a simulated real-life 3D museum scene roaming experience. Finally, the display system is integrated into a webpage for easy user access and various interactive operations.
In order to give users an immersive experience in the game environment, a VR interactive game design method based on the Unity3D engine is proposed. Following the VR game design process and the interactive methods used in games, the design concept of VR interactive games is analyzed. Based on the data structure of the VR interactive game, a state-space search method is used for traversal; the gravity component of a multi-axis acceleration sensor is used to calculate the pitch angle of the VR device; and the user's pitch angle, acceleration, and gravity-sensing information are used to track head movement in real time, control the position of the VR device in the game engine, and optimize the rendering of the VR game's spatial structure, completing the Unity3D-based VR interactive game design. Experimental results show that user satisfaction with the proposed design is higher than with traditional VR interactive game designs, and that it fulfills the requirement of an immersive experience in a virtual reality game environment.
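The accelerometer-based pitch estimation described above can be sketched with the standard tilt formulas derived from the gravity vector (the axis conventions and function name here are illustrative assumptions, not taken from the paper):

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Estimate pitch and roll (radians) from a static accelerometer reading.

    When the device is at rest, the accelerometer measures only gravity;
    the direction of that vector relative to the device axes gives the tilt.
    This assumes x points right, y points up, and z points out of the screen;
    real devices differ, so the signs may need adjusting per platform.
    """
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

A real head-tracking loop would typically fuse these gravity-derived angles with gyroscope rates (e.g. via a complementary or Kalman filter) before driving the in-engine camera, since raw accelerometer tilt is noisy during movement.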
To address the situation in traditional teaching where some knowledge points of image compression coding are difficult to understand and learning efficiency is low, this system designs and implements a game-based teaching system built on virtual reality technology, covering both the basic theory of image compression coding and hands-on experiments with compression methods. With the help of the Unity3D platform's navigation, particle, and animation systems, it creates a virtual, visualized, scenario-based teaching environment for image compression coding that presents compression operations more intuitively and three-dimensionally than the traditional teaching mode and establishes a multi-dimensional, effective teaching scenario. Students can independently participate in the operation of compression methods and seek solutions to problems within the open possibilities of the virtual scene, which fully stimulates their creativity, improves learning efficiency, and achieves better learning outcomes.
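To give a concrete flavor of the kind of compression step such a teaching environment might visualize, here is a minimal run-length encoder/decoder, one of the simplest image compression coding schemes (an illustrative textbook example, not code from the described system):

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

Encoding `[255, 255, 255, 0, 0, 7]` yields `[(255, 3), (0, 2), (7, 1)]`, and decoding restores the original sequence, which is exactly the round trip a scenario-based lesson could animate step by step.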
This article delves into the design methods of game virtual scenes based on the Unity3D platform, focusing on scene construction, lighting systems, material applications, and advanced pathfinding algorithms. The article particularly emphasizes the core role of project structure and file management in efficient development, and deeply analyzes the optimization techniques of resource import, material innovation, and lighting design, especially the detailed adjustment of directional light sources, point light sources, and area light sources. In addition, the article also explores the synergy between physics engines and lighting effects, and systematically analyzes the advanced pathfinding algorithm based on Navigation Mesh, demonstrating its practical application and effectiveness in dynamic gaming environments. Through a series of experimental designs and verifications, the article demonstrates the application value of these comprehensive technologies in creating a realistic, efficient, and immersive gaming experience. Based on these research findings, this article provides practical guidance and inspiration for game designers to develop high-quality game scenes using Unity3D.
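Navigation-Mesh pathfinding of the kind discussed above boils down to an A*-style search over a graph of walkable regions; Unity searches a graph of convex polygons, but the same algorithm is easiest to illustrate on a grid (a conceptual sketch, not Unity's NavMesh API):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid, where 0 = walkable and 1 = blocked.

    NavMesh pathfinding runs the same search, but over polygon adjacency
    instead of grid cells. Returns the path as a list of (row, col) cells,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:          # walk parent links back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

In engine terms, the polygon graph plays the role of the grid, and the returned cell sequence corresponds to the corridor of polygons that a steering component then follows.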
In this paper, the key technologies of an Android-platform game based on Unity3D are studied and implemented. Users can control the characters and interact with the game's content on the Android platform. The core of the game consists of functional modules such as protagonist motion control, physical collision detection, an animation controller, game scene layout, camera control, game-character AI control, a finite state machine, and overall game control. The protagonist's motion-control module mainly uses Unity3D's physics engine, with state changes realized through C# scripts. Unity3D's animation components realize animation effects by playing asset frames one by one, with transitions between animations handled by the animation controller. Camera control adopts an orthographic camera and uses relevant plug-ins to implement following and stopping of the game view. Bloom, a screen post-processing effect of the URP (Universal Render Pipeline), is used to improve the game's visuals. Abstract-class inheritance and a finite state machine are used to control the AI of game characters.
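The finite-state-machine approach to character AI mentioned above can be sketched in a few lines; the state names and distance thresholds here are illustrative assumptions (the paper's version uses C# abstract classes inside Unity):

```python
class EnemyAI:
    """Minimal finite state machine for an enemy controller.

    States: patrol -> chase when the player enters sight range,
    chase -> attack when within attack range, and back again as the
    player moves away. Thresholds are made-up example values.
    """
    ATTACK_RANGE = 2.0
    SIGHT_RANGE = 10.0

    def __init__(self):
        self.state = "patrol"

    def update(self, distance_to_player):
        # One transition check per frame/tick, driven by a single observation.
        if self.state == "patrol":
            if distance_to_player <= self.SIGHT_RANGE:
                self.state = "chase"
        elif self.state == "chase":
            if distance_to_player <= self.ATTACK_RANGE:
                self.state = "attack"
            elif distance_to_player > self.SIGHT_RANGE:
                self.state = "patrol"
        elif self.state == "attack":
            if distance_to_player > self.ATTACK_RANGE:
                self.state = "chase"
        return self.state
```

In an engine, each state would additionally own enter/exit hooks and a per-frame behavior (play an animation, steer toward the player), which is where the abstract-class inheritance the paper mentions comes in.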
Introduction This paper presents SimNav-XR, an extended reality platform that integrates XR technologies with modern robotics frameworks to support mobile robot simulation and development. Methods By connecting ROS2's communication infrastructure with Unity3D's rendering and XR capabilities through the ROS-TCP-Connector package, SimNav-XR provides a practical bridge between robotics middleware and game engine environments for visualization and testing. The platform implements components for physics-based robot modeling, LiDAR and IMU sensor simulation, environmental interaction dynamics, and XR interfaces supporting both Virtual Reality (VR) and Mixed Reality (MR) modes. These capabilities create interactive environments where developers can visualize and control simulated robots through immersive interfaces using the Meta Quest 3 headset with controller-based input. Results Experimental evaluations using established platforms (Turtlebot3 and ROSbotXL) demonstrate the framework's capabilities across virtual testing scenarios, showing successful autonomous navigation with obstacle avoidance and simultaneous localization and mapping (SLAM). The VR mode provides fully immersive virtual environments for development and testing, while the MR mode uses passthrough cameras to overlay virtual robots onto real-world surfaces via plane detection. Discussion XR visualization techniques provide insights into robot sensor data and navigation behavior, supporting robotics development and education through accessible simulation environments.
This paper presents a virtual reality remotely operated vehicle (ROV) pilot simulator built with an open-source game engine to decrease development cost and time. The primary element in carrying out underwater missions in a hostile environment lies in the skills and experience of the ROV pilot, and training is essential to prevent damage to expensive field equipment during real operations. What distinguishes the proposed simulator from existing simulators on the market is the use of modern game-engine software to develop a "serious game" for ROV pilot trainees at much lower cost, with greater mobility and a shorter time-to-market. The results revealed that the proposed virtual simulator, built with the Unity3D game engine, can deliver high-fidelity virtual reality training for underwater operation.
Creating cross-reality applications of planetary analog environments supports scientific exploration and mission planning by offering a safe and cost-effective way to explore remote terrains. We present a pipeline that integrates physical and virtual data through 3D reconstruction, environmental sensing, and interactive real-time rendering in game engines. The approach was validated at two analog sites in Svalbard, Norway, and Lanzarote, Spain, using UAV photogrammetry, smartphone LiDAR, RGB imagery, and environmental and seismic sensors. In Svalbard, we reconstructed water-indicating terrain in Unity3D. In Lanzarote, we visualized a lava tube with integrated seismic and atmospheric data in Unreal Engine. The environments are explorable in both desktop and VR modes. By combining consumer hardware with multimodal sensing, we demonstrate a flexible method for generating immersive digital twins. We discuss low-cost tools for analog fieldwork, outline design considerations for integration and visualization, and provide recommendations for future cross-reality deployments in science and exploration contexts.
In order to enable students to experience the actual engine-room working environment, gain a full understanding of marine ship equipment, and have the same operating experience as in a real ship cabin so as to improve education and training, the rudder cabin of the Dalian ocean-going 10,000-ton very large oil tanker "Yuanshan Lake" is used as the virtual design object. First, the mathematical model of the rudder is set up after introducing the related concepts of the ship system. Then, the layout of the actual cabin is analyzed, and a complete three-dimensional model of the rudder cabin is established with 3ds Max. Finally, the model is imported into the Unity3D virtual reality engine, realizing interactive operation of the cabin equipment, and an extended user interface is designed to complete the generation and release of the final virtual scene program. This research and design has reference value for applying virtual reality technology to the various compartments of a whole ship and for broadening its application to other ship types.
Introducing a groundbreaking Augmented Reality (AR)/Virtual Reality (VR) campus tour experience developed on the Unity3D platform and optimized for deployment on Oculus Meta Quest headsets. This immersive application revolutionizes the traditional campus visit, allowing users to explore campus buildings and amenities from their smartphones. Through high-quality 360-degree films, interactive 3D models, and engaging components, prospective students, parents, and visitors gain an in-depth understanding of the campus environment. By learning to develop and utilize this technology, individuals open doors to diverse career opportunities in fields such as virtual reality development, augmented reality design, and immersive experience creation. Moreover, the application enhances decision-making for prospective students and strengthens their connection to the institution, setting a new standard for campus exploration experiences while also fostering innovation and creativity in educational technology.
This study examines the influence of immersive technologies, particularly Virtual Reality (VR) and Augmented Reality (AR), on the tourism industry, emphasizing recent developments and the obstacles encountered in their integration. VR and AR deliver transformative experiences through virtual tours, interactive guides, and augmented on-site information, thereby redefining how destinations are marketed, explored, and enjoyed. Current systems encounter obstacles such as elevated development expenses, technical intricacies in attaining realistic and engaging user experiences, and constraints in accessibility and scalability across many platforms. This study highlights cutting-edge technologies such as Unity3D, Vuforia, and Blender 3D, elucidating their functions in the creation of a VR and AR application aimed at enhancing tourism experiences. A framework is designed to address these problems, delivering scalable, user-friendly, and visually immersive applications that may establish new benchmarks for accessibility and environmental sustainability in tourism. The findings illustrate the potential of VR and AR to make travel experiences more engaging, accessible, and sustainable, highlighting the necessity of continuous study and innovation to fully realize their capabilities within the tourism sector.
A new framework of infrastructure standards for education informatization was constructed with four levels (digital base, application scenario, system specification, and goal guidance) and combined with the concept of the Metaverse in order to improve the quality of ideological and political courses in colleges and universities. As an example, a modern-history review simulation system was designed with the Unity3D game engine, using C# as the game-logic programming language, Node.js as the server programming language, and MySQL as the game database. The path to improving course quality was explored through six key features: new network, new platform, new resources, new campus, new application, and new security. Red education resources were introduced into the lessons, and their transformation was realized through digital technology. The experimental test results show that an excellent teaching effect was achieved.
Despite significant progress in computer vision technology, the development of realistic, real-time scene rendering via neural radiance fields remains hindered by high computational demands and time-intensive rendering processes. This limitation makes such systems impractical for applications requiring rapid feedback. The study proposes an immersive navigation system utilizing 3D Gaussian splatting to facilitate high-quality dynamic rendering. This system seamlessly integrates various technologies, including Unity3D, ChatGPT, Whisper, and voice generators, and is equipped with the HTC VIVE Pro, offering users an unparalleled exploration experience. With potential applications in museums, art galleries, shopping malls, and gaming, this system holds the prospect of unveiling new levels of realism and interactivity in virtual environments.
As virtual reality (VR) continues to evolve, the study of user interaction within VR environments becomes increasingly important. We therefore investigated two interaction modes, "Exploration and Browsing" and "Purpose and Guidance," using VR platforms to compare immersion levels and exercise effectiveness. We analyzed the results from 33 participants using Gravity Sketch 3D for freeform creativity and a Unity3D-generated program for goal-directed exercises. We assessed four immersion factors ('concentration,' 'enjoyment,' 'sense of time distortion,' and 'control'), self-recognized exercise intensity, and participants' preference between the two VR formats. Preliminary results indicated that Freeform Creativity scored higher overall in immersion levels and exercise effectiveness than Goal-Directed Exercises. The data also suggested that when participants used creative VR tools for exploration and browsing, stronger immersion was triggered, which in turn led to more exercise. However, providing participants with exercise goals did not guarantee a higher level of exercise.
Nature confers a host of benefits, including recovery from stress, replenishment of attentional resources, improved mood, and decreased negative thinking. Virtual nature, i.e. exposure to natural environments through technological means, has also proven efficacious in producing benefits, although more limitedly. Previous immersive virtual reality studies with university students have shown that one bout of virtual nature can reduce negative affect in students with high test anxiety and can reduce feelings of worry and panic after several weeks of daily exposure. The present study aimed at replicating the effect of one bout of virtual nature on affect and extending it to cognition in a sample of university students with different levels of test anxiety. An inexpensive goggle-plus-phone apparatus was utilized, and the bout of virtual nature was self-administered. Forty-eight university students took part in the study, randomized between viewing a 360-degree video of nature or of an urban environment. They completed the Positive and Negative Affect Schedule and the Cognitive Reflection Test before and after exposure to the virtual environments and responded to open-ended questions about their experience of the intervention. Results showed improvements in positive affect for students with higher anxiety in the nature condition; no other effects were found. Qualitative appraisal indicated that participants in the nature condition felt more relaxed and focused; however, technical issues were detrimental to the benefits. In conclusion, one bout of virtual nature could support students with higher test anxiety when confronted with examinations.
With the progress of science and technology and the improvement of people's living standards, the interior decoration design industry has entered a critical period of digital transformation and innovation, a change that is especially prominent in how designs are displayed. The traditional way of displaying interior decoration designs relies mostly on flat renderings, which can simply and conveniently show the expected design effect but struggle to meet the practical needs of repeated modification and complete experience. This study therefore focuses on the application of virtual reality technology in interior decoration design and proposes a development scheme for a virtual display system based on Unity3D, aiming to improve design quality and shorten the design cycle. Practice has proved that the virtual display system transforms traditional two-dimensional renderings into three-dimensional models and virtual simulation scenes, which not only gives consumers an immersive interactive experience but also provides new design tools and ideas for personalized, customized design and realizes data-driven design decisions.
In order to improve the efficiency of experimental teaching in colleges and universities and the comprehensive ability of students, this paper constructs a virtual simulation experimental teaching system based on Unity3D. It adopts modular architecture design, integrates scene construction, interaction logic, intelligent feedback and evaluation and other key technologies to complete the optimization of teaching content and process implementation. The effectiveness of the system in improving teaching quality and reducing resource consumption is verified through the case of “Mechanics of Materials Experiment”. The study shows that the system not only improves the efficiency of the experiment, but also significantly enhances the practical ability and innovative consciousness of students, which provides theoretical and practical support for the wide application of virtual simulation experiment in education.
Current infrastructure design, discouragement by parents, and lack of internal motivation act as barriers for people with visual impairments (PVIs) to perform physical activities at par with sighted individuals. This has triggered accessible exercise technologies to be an emerging area of research. However, most current solutions have either safety concerns and/or are expensive, hence limiting their mass adoption. In our work, we propose VStroll, a smartphone app to promote walking among PVIs, by enabling them to virtually explore real-world locations, while physically walking in the safety and comfort of their homes. Walking is a cheap, accessible, and a common physical activity for people with blindness. VStroll has several added features, such as places-of-interest (POI) announcement using spatial audio and voice input for route selection at every intersection, which helps the user to gain spatial awareness while walking. To understand the usability of VStroll, 16 participants used our app for five days, followed by a semi-structured interview. Overall, our participants took 253 trips, walked for 50.8 hours covering 121.6 kms. We uncovered novel insights, such as discovering new POIs and fitness-related updates acted as key motivators, route selection boosted their confidence in navigation, and spatial audio resulted in an immersive experience. We conclude the paper with key lessons learned to promote accessible exercise technologies.
In this work we present our system for teaching practical geology field skills through a combination of 360° video, photogrammetry, and virtual content. The system was evaluated with first- and second-year undergraduate geoscience students to determine if it was effective in teaching practical skills that could be transferred to the real world. Second-year students who had performed the task before saw a significant improvement in their abilities, however this improvement was absent in the first-year students, suggesting the tool may be more effective for revision rather than first-time learning. We discuss these findings and their implications for future virtual training tools, as well as the challenges in developing and deploying such systems in a university environment.
Underground quarries, as essential elements of cultural heritage, present unique challenges in terms of mediation, both from a safety and accessibility perspective. The use of virtual reality (VR) in this field allows for immersion in complex underground environments, thus providing an innovative solution to showcase them to the general public. This document presents a virtual reality installation dedicated to underground quarries, explaining its purpose, objectives, and possibilities.
The spread of new survey strategies for the documentation and 3D reconstruction of complex cultural heritage sites enables the implementation of virtual web navigation systems that are useful for their virtual fruition. In particular, remote indoor/outdoor exploration enhances our knowledge of cultural heritage sites, even when they are inaccessible or difficult to visit. However, the 3D data acquisition of complex sites for documentation remains a challenge, and the 3D virtual exploration of these datasets is often limited to proprietary software implementations. This work describes the 3D documentation and construction of an indoor/outdoor web visualization system for a complex cultural heritage site based on the open-source WebGL technology. The case study regards the complex of "Santa Maria della Grotta" in Marsala (Italy), which is composed of a church that is located mostly underground and is connected to a human-dug hypogea on the site of a Punic necropolis. The aim of the work was to obtain detailed 3D documentation of the indoor and outdoor spaces through the integration of mobile laser scanning and aerial photogrammetry surveys, and to develop a virtual web navigation system for the remote exploration of the site. The indoor/outdoor web navigation system provides users with a simple, web-browser-based 3D visualization, enabling the dissemination of knowledge of the monuments on the web through an economically sustainable solution based on open-source technologies.
Virtual reality is the way of the future. The use of virtual reality is expanding over time across all sectors, from the entertainment industry to the military and space. VREd is a similar concept where a virtual reality-based classroom is used for online education where the user will have better interaction and more control. Unity3D and WebGL software have been used for implementation. Students or learners accustomed to contemporary technologies may find the traditional educational system unappealing because of its flaws. Incorporating the latest technologies can increase the curiosity and learning abilities of students. The system architecture of VREd is similar to that of an actual classroom, allowing both students and teachers to access all of the course materials and interact with one another using only an internet connection. The environment and the background are also customizable. Therefore, all the users can comfortably use the system and feel at home. We can create an effective educational system that raises educational quality by utilizing virtual reality.
This work aims to improve the current situation in which the content and mode of assembly training for coal mine drilling rigs in mechanical workshops have a low level of visualization, leading to low training quality. Combining the modeling strengths of UG with the animation capabilities of 3ds Max, methods for 3D model reconstruction and texture mapping are discussed, and a model structure tree is built. The core scripts of the assembly system are written in C#, the control of assembly animation paths is studied, and the system's UI is designed. Based on the Unity3D virtual engine platform, a first-person training system for coal mine drilling rigs is developed and implemented. An HTC VIVE Pro headset is connected to the system for debugging, operation, and final release, enabling human-machine interaction. The results not only show the complete assembly process of the mechanical assembly workshop but also give trainees a novel and vivid teaching experience through friendly human-machine interaction, and offer a preliminary exploration of virtual reality technology for intelligent, remotely controlled coal mine drilling rigs. The work also provides a new methodological reference for the market promotion and application of coal mine drilling rigs.
To solve the problem of fragmented colliders that arises when interactive object models in a large Unity3D virtual scene are built in pieces rather than as a single mesh, this paper proposes a collider-combining method that merges the colliders of an object's parts, so that clicking any effective part of the object triggers its click event and displays the associated data. The method is applied in a virtual industrial park, where the colliders of the interactive models are successfully combined. The results show that the method enables effective interaction with the target object models.
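The core idea of combining per-piece colliders into one interactive volume can be illustrated by merging axis-aligned bounding boxes (a conceptual sketch; Unity colliders carry orientation, layers, and event hooks beyond the min/max pair shown here):

```python
def merge_aabbs(boxes):
    """Merge axis-aligned bounding boxes into one enclosing box.

    Each box is a (min_corner, max_corner) pair of 3-tuples. The merged
    box is the componentwise min of the minima and max of the maxima,
    which is the same idea as combining the colliders of a model's
    pieces into a single clickable volume.
    """
    mins = tuple(min(b[0][i] for b in boxes) for i in range(3))
    maxs = tuple(max(b[1][i] for b in boxes) for i in range(3))
    return (mins, maxs)

def contains_point(box, point):
    """Hit test: does the merged box contain the clicked point?"""
    (mins, maxs) = box
    return all(mins[i] <= point[i] <= maxs[i] for i in range(3))
```

With the pieces merged, a single hit test against the combined volume decides whether the object's click event fires, instead of ray-testing every fragment separately.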
With the rapid development of science and technology, it is difficult for traditional landscape design methods to fully show the interactivity and practical effect of design schemes. Therefore, by integrating data platform, optimizing interface design and improving rendering algorithm, this study realized the deep integration of virtual simulation and visualization technology in interactive landscape design. This fusion method not only improves the design efficiency and optimizes the design effect, but also significantly enhances the interactive experience of users. In the aspect of data integration and sharing, this paper constructs a unified data platform, realizes the data exchange between virtual simulation and visualization technology, and ensures the accuracy and consistency of data. In the aspect of interface integration, this paper designs a friendly human-computer interaction interface, which supports a variety of input devices and interaction methods and improves the user experience. In addition, an efficient rendering algorithm is proposed in this study, which is based on ray tracing technology and has been optimized to improve the fidelity of the scene and the real-time response ability of the system. This paper takes an interactive landscape design project of a city park as an example, and realizes the technical realization path of virtual simulation and visualization technology through tools such as Unity3D and Tableau. Constructing 3D model database, realizing dynamic interactive function and integrating multi-source data for visual display have successfully provided a novel and intuitive display method for urban park interactive landscape design projects. This will not only help designers better understand and optimize the design scheme, but also improve public participation and satisfaction.
This paper introduces ChronoShore, an immersive virtual reality (VR) experience designed to explore diegetic time manipulation mechanics within a semi-realistic coastal environment. Traditional 2D video scrubbing methods fall short in immersive settings, particularly for understanding time-bound processes such as simulations of geology or biology. ChronoShore addresses this by allowing users to interact with celestial bodies to dynamically control and experience the passage of time, currently showcasing different weather events and atmospheric phenomena.
In today's education landscape, the convergence of sustainable internationalization and digital proficiency is of heightened importance, particularly within sectors marked by extensive globalization, exemplified by biomedical laboratory science (BLS). The presence of foreign health and social workers in Norwegian hospitals accentuates the urgency of effective cross-cultural communication. The aftermath of the 2020 Coronavirus Pandemic further underscores the imperative of innovation, efficiency, and internationalization in medical laboratory science education. Addressing the challenges in current BLS education, this paper proposes a transformative approach that uses virtual or simulated environments to seamlessly bridge educational and workplace contexts. Through simulations, serious games, and virtual reality, educators can provide authentic, up-to-date learning experiences with heightened engagement. From physics to engineering, various disciplines already harness simulation-based learning for skill development. Within medical education, simulations advance diagnostic and technical competencies. This paper introduces a Unity3D-powered virtual biomedical lab, offering immersive technical process learning that augments understanding of workflows and equipment operations, promising to reshape biomedical laboratory science education profoundly.
Virtual environments are often explored standing up. The purpose of this work is to understand whether standing exploration has an advantage over seated exploration. We present an experiment that directly compares subjects' spatial awareness when locomoting with a joystick while physically standing versus sitting. In both conditions, virtual rotations matched the physical rotations of the subject, and the joystick was used only for translations through the virtual environment; in the seated condition, users sat in an armless swivel office chair. Our results indicated no difference between the sitting and standing conditions. Although a null result, this finding is interesting because it may compel more virtual environment developers to encourage their users to sit in a comfortable swivel chair. As an additional finding, we observed a significant difference between the performance of males versus females and gamers versus non-gamers.
While mastery of neuroanatomy is important for the investigation of the brain, there is an increasing interest in exploring the neural pathways to better understand the roles of neural circuitry in brain functions. To tackle the limitations of traditional 2D-display-based neuronavigation software in intuitively visualizing complex 3D anatomies, several virtual reality (VR) and augmented reality (AR) solutions have been proposed to facilitate neuroanatomical education. However, with the increasing knowledge on brain connectivity and the functioning of the sub-systems, there is still a lack of similar software solutions for the education and exploration of these topics, which demand more elaborate visualization and interaction strategies. To address this gap, we designed the immerSive custOmizable Neuro learnIng plAtform (SONIA), a novel, user-friendly VR software system with a multi-scale interaction paradigm that allowed flexible customization of learning materials. With both quantitative and qualitative evaluations through user studies, the proposed system was shown to have high usability, attractive visual design, and good educational value. As the first immersive system that integrated customizable design and detailed narratives of the brain sub-systems for the education of neuroanatomy and brain connectivity, SONIA showcased new potential directions and provided valuable insights regarding medical learning and exploration in VR.
First-person shooters (FPS) are currently among the most popular game genres, yet most are realistic and multiplayer-oriented, and no single-player-oriented FPS framework exists for Unity3D, leaving a gap in the sci-fi single-player FPS space in recent years. This paper therefore develops What Happened to Site-13?, a realistic, single-player sci-fi FPS built with Unity3D. The game contains a gamepad-oriented FPS controller, a storytelling module, an AI module, an in-game map editor, and scenes in a realistic sci-fi art style. Tests with players examined whether the game is immersive, the combat loop comfortable, and the visuals up to expectations; after testing and validation, the game was shown to achieve the target art style, visual experience, and combat loop, and to be immersive. Validated with a group of students, it can also serve as a framework for future FPS games.
Virtual Reality (VR) has become widespread in the gaming industry, but the high cost of VR devices puts them out of reach for many gamers. This research offers an alternative by creating a new first-person action game with a motion controller based on the gyroscope sensor of a smartphone. Adaptive difficulty is also necessary, since a game that is too easy or too difficult leads to boredom or frustration. This research applies a fuzzy method to adapt the enemy to the player: the fuzzy system models the player's ability from their performance in a level and adjusts the enemy in the next level. We produced a simple smartphone VR game whose fuzzy system automatically adapts the difficulty by using "Resistance" as the enemy's health. As a result, 90.5% of the 14 respondents reported that the difficulty they faced in each level was adjusted by the game. Although each respondent's experience differed, most could play the game intuitively without asking for help.
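The paper's actual membership functions and rule base are not given in the abstract; as a rough illustration of the kind of fuzzy difficulty controller it describes, the sketch below (all names, thresholds, and multipliers are hypothetical) maps a 0-1 player-performance score to the enemy's "Resistance" for the next level:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adapt_resistance(performance, base_resistance=100.0):
    """Map a 0-1 player-performance score to the enemy 'Resistance'
    (health) for the next level via a three-rule fuzzy system."""
    low  = tri(performance, -0.5, 0.0, 0.5)
    mid  = tri(performance,  0.0, 0.5, 1.0)
    high = tri(performance,  0.5, 1.0, 1.5)
    # Rules: weak player -> easier enemy, strong player -> tougher enemy.
    # Defuzzify by a weighted average of the rule-output multipliers.
    multipliers = {0.6: low, 1.0: mid, 1.5: high}
    num = sum(m * w for m, w in multipliers.items())
    den = sum(multipliers.values())
    return base_resistance * (num / den if den else 1.0)
```

For example, a mid-range score of 0.5 leaves the base resistance unchanged, while a perfect score raises it by the "high" rule's multiplier.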
The amount of interest in Virtual Reality (VR) research has significantly increased over the past few years, both in academia and industry. The release of commercial VR Head-Mounted Displays (HMDs) has been a major contributing factor. However, there is still much to be learned, especially how views and input techniques, as well as their interaction, affect the VR experience. There is little work done on First-Person Shooter (FPS) games in VR, and those few studies have focused on a single aspect of VR FPS. They either focused on the view, e.g., comparing VR to a typical 2D display, or on the controller types. To the best of our knowledge, there are no studies investigating variations of 2D/3D views in HMDs, controller types, and their interactions. As such, it is challenging to distinguish findings related to the controller type from those related to the view. If a study does not control for the input method and finds that 2D displays lead to higher performance than VR, we cannot generalize the results because of the confounding variables. To understand their interaction, we propose to analyze in more depth whether it is the view (2D vs. 3D) or the way it is controlled that gives the platforms their respective advantages. To study the effects of the 2D/3D views, we created a 2D visual technique, PlaneFrame, that was applied inside the VR headset. Our results show that the controller type can have a significant positive impact on performance, immersion, and simulator sickness when associated with a 2D view. They further our understanding of the interactions that controllers and views have and demonstrate that comparisons are highly dependent on how both factors go together. Further, through a series of three experiments, we developed a technique that can lead to substantial performance, a good level of immersion, and a minimized level of simulator sickness.
No abstract available
Immersion has become an important factor for video games. This study investigates the effect of haptic feedback on the player's perceived immersion in two different setups: one with haptic feedback in the game controller and one with feedback in a haptic vest. Both experiments consisted of a user test followed by a questionnaire. The results show tendencies of haptic feedback both increasing and inhibiting the ability to feel immersed by certain metrics, even though the statistical analysis shows no significant difference between the groups in any of the sub-scales. The results also show that most of the test subjects think that the vest and its haptic feedback deliver more immersion to the gaming experience. The conclusion from both experiments is that haptic feedback improves the user's feeling of immersion, specifically regarding the player's awareness of the surroundings in the game.
Video games are often about being able to do things that are not possible in real life, about experiencing great adventures and visiting new places. Yet, as prolific as gaming is, it is inaccessible to a significant number of people with neuromuscular diseases who are unable to play games with traditional input methods like game controllers or keyboard and mouse combinations. While primarily used for entertainment in the early days, gaming now provides the possibility of countering social isolation and connecting with others through multiplayer games, online gaming communities and game streaming. In our work, we explore how facial expression recognition can be harnessed to provide quadriplegic individuals a way to play games independently and without complex mouth controller devices. We demonstrate our input interface with the design of a first person shooter game.
No abstract available
User navigation in public installations displaying 3D content is mostly supported by mid-air interactions using motion sensors such as the Microsoft Kinect. Smartphones, on the other hand, have been used as external controllers of large-screen installations or game environments, and they may also be effective in supporting 3D navigation. This paper examines whether smartphone-based control is a reliable alternative to mid-air interaction for four-degrees-of-freedom (4-DOF) first-person navigation, and aims to discover suitable interaction techniques for a smartphone controller. For this purpose, we set up two studies: a comparative study between smartphone-based and Kinect-based navigation, and a gesture elicitation study to collect user preferences and intentions regarding 3D navigation methods using a smartphone. The results of the first study were encouraging, as users with smartphone input performed at least as well as with Kinect and most preferred it as a means of control, while the second study produced a number of noteworthy results regarding proposed user gestures and users' stance toward using a mobile phone for 3D navigation.
No abstract available
No abstract available
Background The virtual reality (VR) content market is rapidly growing due to an increased supply of VR devices such as head-mounted displays (HMDs), whereas VR sickness (reported to occur while experiencing VR) remains an unsolved problem. The most widely used method of reducing VR sickness is the use of a rest frame that stabilizes the user's viewpoint by providing fixed visual stimuli in VR content (including video). However, the earth-fixed grid and natural independent visual background that are widely used as rest frames cannot maintain VR fidelity, as they reduce the immersion and the presence of the user. A visual guide is a visual element (eg, a crosshair of first-person shooter [FPS]) that induces a user's gaze movement within the VR content while maintaining VR fidelity, whereas there are no studies on the correlation of visual guide with VR sickness. Objective This study aimed to analyze the correlation between VR sickness and crosshair, which is widely used as a visual guide in FPS games. Methods Eight experimental scenarios were designed and evaluated, including having the visual guide on/off, the game controller on/off, and varying the size and position of the visual guide to determine the effect of visual guide on VR sickness. Results The results showed that VR sickness significantly decreased when visual guide was applied in an FPS game. In addition, VR sickness was lower when the visual guide was adjusted to 30% of the aspect ratio and positioned in the head-tracking direction. Conclusions The experimental results of this study indicate that the visual guide can achieve VR sickness reduction while maintaining user presence and immersion in the virtual environment. In other words, the use of a visual guide is expected to solve the existing limitation of distributing various types of content due to VR sickness.
The main purpose of virtual reality (VR) is to enhance realism and the player experience. To do this, we focus on VR interaction design methods, analyze the existing interaction solutions including both accurate and rough interaction methods, and propose a new method for creating stable and realistic player interactions in a first-person shooter (FPS) game prototype. In this research, we design and modify the existing mapping methods between physical and virtual worlds, and create interfaces such that physical devices correspond to shooting tools in virtual reality. Moreover, we propose and design prototypes of universal interactions that can be implemented in a simple and straightforward way. Proposed interactions allow the player to perform actions similar to those of real shooting, using both hands such as firing, reloading, attaching and grabbing objects. In addition, we develop a gun template with haptic feedback, and a visual collision guide that can optionally be enabled. Then, we evaluate and compare our methods with the existing solutions. We then use these in a VR FPS game prototype and conduct a user study with participants, and the resulting user study proves that the proposed method is more stable, player-friendly and realistic.
Virtual Reality (VR) has enabled novel ways to study embodiment and understand how a virtual avatar may be treated as part of a person’s body. These studies mainly employ virtual bodies perceived from a first-person perspective, given that VR has a default egocentric view. Third-person perspective (3PP) within VR has positively influenced the navigation time and spatial orientation in large virtual worlds. However, the relationship between VR locomotion in 3PP and the sense of embodiment in the users remains unexplored. In this paper, we proposed three VR locomotion techniques in 3PP (controller joystick, head tilt, arm swing). We evaluated them in a user study (N=16) focusing on their influence on the sense of embodiment, perceived usability, VR sickness, and completion time. Our results showed that arm swing and head tilt facilitate higher embodiment than a controller joystick but lead to higher completion times and oculomotor sickness.
No abstract available
No abstract available
No abstract available
Recent advances in deep reinforcement learning (RL) have demonstrated complex decision-making capabilities in simulation environments such as Arcade Learning Environment, MuJoCo, and ViZDoom. However, they are hardly extensible to more complicated problems, mainly due to the lack of complexity and variations in the environments they are trained and tested on. Furthermore, they are not extensible to an open-world environment to facilitate long-term exploration research. To learn realistic task-solving capabilities, we need to develop an environment with greater diversity and complexity. We developed WILD-SCAV, a powerful and extensible environment based on a 3D open-world FPS (First-Person Shooter) game, to bridge the gap. It provides realistic 3D environments of variable complexity, various tasks, and multiple modes of interaction, where agents can learn to perceive 3D environments, navigate and plan, and compete and cooperate in a human-like manner. WILD-SCAV also supports different complexities, such as configurable maps with different terrains, building structures and distributions, and multi-agent settings with cooperative and competitive tasks. The experimental results on configurable complexity, multi-tasking, and multi-agent scenarios demonstrate the effectiveness of WILD-SCAV in benchmarking various RL algorithms, as well as its potential to give rise to intelligent agents with generalized task-solving abilities. The link to our open-sourced code can be found here https://github.com/inspirai/wilderness-scavenger.
We investigate using a prop to control human-like avatars in virtual environments while remaining seated. We believe that manipulating a tangible interface, capable of rendering physical sensations and reproducing the movements of an avatar, could lead to a greater virtual experience (presence) and strengthen the relationship between users and the avatar (embodiment) compared to other established controllers. We present a controller based on an instrumented artist doll that users can manipulate to move the avatar in virtual environments. We evaluated the influence of such a controller on the sense of presence and the sense of embodiment in 3 perspectives (third-person perspective on a screen, immersive third-person perspective, and immersive first-person perspective in a head-mounted display). We compared the controller with gamepad controllers to control the movements of an avatar in a kick-in-a-ball game as illustration. The results showed that the prop-based controller can increase the sense of presence and fun in all three perspectives. It also enhances the sense of embodiment in the immersive perspectives. It could therefore enhance the user experience in various simulations involving human-like avatars.
This paper introduces a virtual laboratory system based on virtual reality technology. Using the Unity3D engine and a robotic arm as the research object, a virtual simulation teaching system is developed, released on the PC, and applied to teaching and training on the mechanical arm's structure. The virtual laboratory supports first-person-perspective roaming, on-the-spot observation of the robot arm's structure, interactive assembly simulation, and intelligent automatic assembly simulation, addressing the cognitive-teaching and assembly-planning problems of complex mechanical structures. Applying virtual reality technology to engineering education improves the effectiveness and efficiency of teaching in mechanical structure cognition, structural assembly training, course design, and related activities.
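The first-person roaming described above boils down to translating the camera in its own local frame each frame; in Unity this is typically done in C# via `transform.Translate`, but the underlying math can be sketched language-agnostically (function and parameter names are illustrative):

```python
import math

def roam_step(pos, yaw_deg, forward, strafe, speed=2.0, dt=0.1):
    """One first-person roaming update: move in the camera's local
    frame (forward along the view direction on the ground plane,
    strafe to the right), scaled by speed and frame time."""
    yaw = math.radians(yaw_deg)
    fx, fz = math.sin(yaw), math.cos(yaw)   # view direction (ground plane)
    rx, rz = math.cos(yaw), -math.sin(yaw)  # right vector
    x, y, z = pos
    x += (forward * fx + strafe * rx) * speed * dt
    z += (forward * fz + strafe * rz) * speed * dt
    return (x, y, z)  # height y is left untouched (no jumping/stairs)
```

Facing yaw 0 the camera moves along +z; after a 90-degree turn the same forward input moves it along +x, which is exactly the behavior a Unity roaming controller reproduces per frame.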
In recent years, road traffic accidents have occurred frequently in China, and the traditional driving-safety education model, with its single form and poor training effect, can hardly meet actual demand. To cope with these challenges, this study builds an innovative driving-simulation safety education system using virtual reality (VR) technology, combined with the theory of human factors in traffic accidents. Based on the Unity3D engine, the system builds a virtual scene from real road data and uses SUMO for traffic-flow simulation, closely reproducing driving environment and behavior in VR. Through a collision detection mechanism, the system sets violation-triggering conditions, adopts a customized screen-space rendering method to simulate the driver's vision, and builds an image-based 3D panorama of the accident scene, cultivating safe driving habits from cognitive and perceptual perspectives. In testing, the system supported driving simulation and safety training under different road, weather, and traffic conditions, effectively stimulating users' enthusiasm for learning and enhancing their awareness of driving safety. This has high practical value and is consistent with the conclusion that immersive VR training is more effective at enhancing user participation and knowledge retention. As the technology continues to evolve, such systems are expected to play an even more critical role in traffic-rules learning.
3D immersive scene generation is a challenging yet critical task in computer vision and graphics. A desired virtual 3D scene should 1) exhibit omnidirectional view consistency, and 2) allow for large-range exploration in complex scene hierarchies. Existing methods either rely on successive scene expansion via inpainting or employ panorama representation to represent large FOV scene environments. However, the generated scene suffers from semantic drift during expansion and is unable to handle occlusion among scene hierarchies. To tackle these challenges, we introduce LayerPano3D, a novel framework for full-view, explorable panoramic 3D scene generation from a single text prompt. Our key insight is to decompose a reference 2D panorama into multiple layers at different depth levels, where each layer reveals the unseen space from the reference views via diffusion prior. LayerPano3D comprises multiple dedicated designs: 1) We introduce a new panorama dataset Upright360, comprising 9k high-quality and upright panorama images, and finetune the advanced Flux model on Upright360 for high-quality, upright and consistent panorama generation related tasks. 2) We pioneer the Layered 3D Panorama as underlying representation to manage complex scene hierarchies and lift it into 3D Gaussians to splat detailed 360-degree omnidirectional scenes with unconstrained viewing paths. Extensive experiments demonstrate that our framework generates state-of-the-art 3D panoramic scenes in both full view consistency and immersive exploratory experience. We believe that LayerPano3D holds promise for advancing 3D panoramic scene creation with numerous applications. For more examples, please visit our webpage: ys-imtech.github.io/projects/LayerPano3D/
With the rapid advancement and widespread adoption of VR/AR technologies, there is a growing demand for the creation of high-quality, immersive dynamic scenes. However, existing generation works predominantly concentrate on the creation of static scenes or narrow perspective-view dynamic scenes, falling short of delivering a truly 360-degree immersive experience from any viewpoint. In this paper, we introduce TiP4GEN, an advanced text-to-dynamic panorama scene generation framework that enables fine-grained content control and synthesizes motion-rich, geometry-consistent panoramic 4D scenes. TiP4GEN integrates panorama video generation and dynamic scene reconstruction to create 360-degree immersive virtual environments. For video generation, we introduce a Dual-branch Generation Model consisting of a panorama branch and a perspective branch, responsible for global and local view generation, respectively. A bidirectional cross-attention mechanism facilitates comprehensive information exchange between the branches. For scene reconstruction, we propose a Geometry-aligned Reconstruction Model based on 3D Gaussian Splatting. By aligning spatial-temporal point clouds using metric depth maps and initializing scene cameras with estimated poses, our method ensures geometric consistency and temporal coherence for the reconstructed scenes. Extensive experiments demonstrate the effectiveness of our proposed designs and the superiority of TiP4GEN in generating visually compelling and motion-coherent dynamic panoramic scenes.
Novel view synthesis (NVS) from a single image is highly ill-posed due to large unobserved regions, especially for views that deviate significantly from the input. While existing methods focus on consistency between the source and generated views, they often fail to maintain coherence and correct view alignment across long-range or looped trajectories. We propose a model that addresses this by decomposing single-view NVS into a 360-degree scene extrapolation followed by novel view interpolation. This design ensures long-term view and scene consistency by conditioning on keyframes extracted and warped from a generated panoramic representation. In the first stage, a panorama diffusion model learns the scene prior from the input perspective image. Perspective keyframes are then sampled and warped from the panorama and used as anchor frames in a pre-trained video diffusion model, which generates novel views through a proposed spatial noise diffusion process. Compared to the prior work, our method produces globally consistent novel views, even in loop-closure scenarios, while enabling flexible camera control. Experiments on diverse scene datasets demonstrate that our approach outperforms existing methods in generating coherent views along user-defined trajectories. Our implementation is available at https://github.com/YiGuYT/LookBeyond.
Creating interactive 3D scenes often requires technical expertise and significant time, limiting accessibility for non-experts. To address this, we present DreamCraft, a VR system enabling users to intuitively generate and edit interactive 3D environments from panoramas without professional skills. DreamCraft supports panorama generation, interactive object selection, panorama editing, and 3D reconstruction. By combining techniques like 3D Gaussian Splatting (3DGS), object segmentation, and 2D-to-3D conversion, it streamlines immersive scene creation. A user study confirmed its usability, ease of learning, and creative potential, positioning DreamCraft as a step toward accessible 3D content creation.
Since the start of the 21st century, with the continuous development of science and technology, the risk of fire has also risen, public fire-safety management requirements have increased accordingly, and problems in current fire-safety management make those requirements difficult to meet. The selection of firefighters has therefore become a management priority: firefighters with good psychological quality can stay calm in the face of fires, explosions, and other emergencies, and can make the correct choices to complete rescue and firefighting work. In this paper, a building in a selected location serves as the site of a simulated fire, and human-computer interaction is used to judge the psychological quality of firefighters. Models of the fire scene and people are first created with modeling software such as 3ds Max and Adobe Fuse CC, or sourced from the Unity3D Asset Store, and imported into Unity3D. Unity3D's built-in particle system is used to build the flame models, and collider components added to all models drive the collision detection mechanism and the mechanism of flame spread and combustion. The result is a scenario-based evaluation system for communication ability that combines traditional cognitive-behavioral measurement with advanced eye-movement, EEG, and NIR acquisition techniques.
Generating realistic 3D scenes from text is crucial for immersive applications like VR, AR, and gaming. While text-driven approaches promise efficiency, existing methods suffer from limited 3D-text data and inconsistent multi-view stitching, resulting in overly simplistic scenes. To address this, we propose PSGS, a two-stage framework for high-fidelity panoramic scene generation. First, a novel two-layer optimization architecture generates semantically coherent panoramas: a layout reasoning layer parses text into structured spatial relationships, while a self-optimization layer refines visual details via iterative MLLM feedback. Second, our panorama sliding mechanism initializes globally consistent 3D Gaussian Splatting point clouds by strategically sampling overlapping perspectives. By incorporating depth and semantic coherence losses during training, we greatly improve the quality and detail fidelity of rendered scenes. Our experiments demonstrate that PSGS outperforms existing methods in panorama generation and produces more appealing 3D scenes, offering a robust solution for scalable immersive content creation.
With the continuous development of society, the trend toward digitally enabled online teaching is becoming increasingly clear. To ensure the quality of online teaching and make education more engaging, teaching between teachers and students can be combined with virtual scenes. This paper provides a simple example of a virtual classroom for teachers and students, covering the basic operation of the Unity3D engine, the design and construction of a conference-room scene, and the implementation of drawing-interaction functions, mobile-device porting, and script editing, to explore the possibility of a new form of teaching in 3D virtual space. The test results in this study show that the virtual scene can improve the interactive experience and bring immersion to users, which has practical significance.
Digital cultural center is the digital form of physical cultural center. The research on pertinent patented technologies reveals that 3D panorama is moving from rapid development stage to technology maturity stage and that the chance of making technical breakthrough is high. This paper has proposed a scene construction system for digital cultural centers. The system, which is dynamic, efficient and reliable, integrates modules for image acquisition planning, image data transmission and storage, image stitching, and stitching quality evaluation and guarantee, and has a large scope of image acquisition.
This research develops an effective and precise collision detection (CD) algorithm for real-time simulation in virtual environments such as computer graphics, immersive virtual reality (VR), augmented reality (AR), and physics-based simulation, with an enhanced algorithm for object collision detection in 3D geometry. We describe the improved algorithm through a comparison of central processing unit (CPU) and graphics processing unit (GPU) implementations. While leveraging the CPU for computational speed improvements has gained significant recognition in recent years, this study distinguishes itself by building a bounding volume hierarchy (BVH) over the 3D geometry in a spatial-decomposition structure, focusing on an Octree-based axis-aligned bounding box (AABB) structure in the 3D scene. The AABB test swiftly rejects disjoint objects and minimizes the number of triangle primitives that must be processed; the Möller method is then used to test triangle primitives precisely, further enhancing the efficiency and precision of the collision detection process. The approach is also implemented on the GPU using the high-level shading language (HLSL) in a Unity3D compute shader. An AABB is the smallest axis-aligned hexahedron enclosing an object, defined by its minimum and maximum corners. The proposed Octree-AABB-based GPU parallel processing reduces the computational load of real-time collision detection simulations and handles many computations simultaneously. Comparative performance evaluations demonstrate that our GPU-accelerated framework consistently achieves collision detection speedups of 1.01× to 45.62×.
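The broad-phase idea described above — reject disjoint objects with a cheap AABB overlap test before any exact triangle test — can be sketched as follows; a naive all-pairs loop stands in for the paper's Octree/BVH traversal, and all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # minimum corner (x, y, z)
    hi: tuple  # maximum corner (x, y, z)

def aabb_overlap(a, b):
    """Two boxes are disjoint iff they are separated on some axis."""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def broad_phase(boxes):
    """Return index pairs whose AABBs overlap; only these pairs would
    proceed to exact narrow-phase (e.g. Moller) triangle tests."""
    return [(i, j) for i in range(len(boxes))
            for j in range(i + 1, len(boxes))
            if aabb_overlap(boxes[i], boxes[j])]
```

An Octree replaces the quadratic all-pairs loop by only testing boxes that share a spatial cell, which is where the reported speedups come from.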
3D panorama synthesis is a promising yet challenging task that demands high-quality and diverse visual appearance and geometry of the generated omnidirectional content. Existing methods leverage rich image priors from pre-trained 2D foundation models to circumvent the scarcity of 3D panoramic data, but the incompatibility between 3D panoramas and 2D single views limits their effectiveness. In this work, we demonstrate that by applying multi-plane synchronization to the operators from 2D foundation models, their capabilities can be seamlessly extended to the omnidirectional domain. Based on this design, we further introduce DreamCube, a multi-plane RGB-D diffusion model for 3D panorama generation, which maximizes the reuse of 2D foundation model priors to achieve diverse appearances and accurate geometry while maintaining multi-view consistency. Extensive experiments demonstrate the effectiveness of our approach in panoramic image generation, panoramic depth estimation, and 3D scene generation.
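The multi-plane (cube-face) view of a panorama that DreamCube builds on rests on a standard correspondence between cube-face pixels and equirectangular coordinates. A minimal sketch of that mapping follows; face layout conventions vary between implementations, so the axis signs here are one plausible choice, not the paper's:

```python
import math

def face_pixel_to_dir(face, u, v):
    """Map a pixel (u, v in [-1, 1]) on one of six cube faces to a
    unit view direction; together the six faces cover the sphere."""
    x, y, z = {
        "+x": (1.0, -v, -u), "-x": (-1.0, -v, u),
        "+y": (u, 1.0, v),   "-y": (u, -1.0, -v),
        "+z": (u, -v, 1.0),  "-z": (-u, -v, -1.0),
    }[face]
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def dir_to_equirect(d, width, height):
    """Project a view direction onto equirectangular (lon/lat) pixels."""
    x, y, z = d
    lon = math.atan2(x, z)                    # [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, y)))   # [-pi/2, pi/2]
    return (lon / (2 * math.pi) + 0.5) * width, (0.5 - lat / math.pi) * height
```

With this convention, the center of the "+z" face lands at the center of the equirectangular image, and the "+x" face center lands a quarter-turn to the right.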
Deep learning models performing complex tasks require the support of datasets. With the advancement of virtual reality technology, the use of virtual datasets in deep learning models is becoming more and more widespread. Indoor scenes represent a significant area of interest for the application of machine vision technologies. Existing virtual indoor datasets exhibit deficiencies in their camera poses, resulting in problems such as occlusion, object omission, and objects occupying too small a proportion of the image, and they perform poorly in training for object detection and simultaneous localization and mapping (SLAM) tasks. To address the limited capacity of these cameras to comprehensively capture scene objects, this study presents an enhanced algorithm based on rapidly exploring random tree star (RRT*) for generating camera poses in a 3D indoor scene. Meanwhile, in order to generate multimodal data for various deep learning tasks, this study designs an automatic image acquisition module on the Unity3D platform. The experimental results from running the model on several mainstream virtual indoor datasets—such as 3D-FRONT and Hypersim—indicate that the image sequences generated in this study show enhancements in terms of object capture rate and efficiency. Even in cluttered environments such as those in SceneNet RGB-D, the object capture rate remains stable at around 75%. Compared with the image sequences from the original datasets, those generated in this study achieve improvements in the object detection and SLAM tasks, with increases of up to approximately 30% in mAP for the YOLOv10 object detection task and up to approximately 10% in SR for the ORB-SLAM algorithm.
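The paper's enhanced RRT* is not specified in the abstract; as a baseline illustration of the family of algorithms it improves on, a plain RRT that grows candidate camera positions through free space looks like this (the bounds, step size, and `free` predicate are hypothetical):

```python
import math
import random

def rrt_poses(start, free, n_samples=200, step=0.5, seed=0):
    """Grow a rapidly-exploring random tree over the predicate
    free((x, y)) -> bool; the tree's nodes double as candidate
    camera positions spread through the navigable space."""
    rng = random.Random(seed)
    nodes = [start]
    for _ in range(n_samples):
        target = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda p: math.dist(p, target))
        d = math.dist(near, target)
        if d == 0:
            continue
        t = min(1.0, step / d)  # clamp the extension to one step length
        new = (near[0] + t * (target[0] - near[0]),
               near[1] + t * (target[1] - near[1]))
        if free(new):           # reject poses inside obstacles
            nodes.append(new)
    return nodes
```

RRT* additionally rewires the tree for lower path cost; the camera-pose variant in the paper further biases sampling toward good object coverage.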
3D indoor semantic scene reconstruction from 2D images is challenging as it requires both scene understanding and object reconstruction. Compared to perspective images, panoramas provide larger field of view and carry more scene information. In this paper, to reconstruct the 3D indoor semantic scene from a single panorama image, we propose a pipeline that jointly learns to predict the 3D scene layout, complete the object shapes and reconstruct the full scene point cloud. Experiments on the Stanford 2D-3D dataset demonstrate the generality and suitability of the proposed method.
We describe a system that automatically extracts 3D geometry of an indoor scene from a single 2D panorama. Our system recovers the spatial layout by finding the floor, walls, and ceiling; it also recovers shapes of typical indoor objects such as furniture. Using sampled perspective sub-views, we extract geometric cues (lines, vanishing points, orientation map, and surface normals) and semantic cues (saliency and object detection information). These cues are used for ground plane estimation and occlusion reasoning. The global spatial layout is inferred through a constraint graph on line segments and planar superpixels. The recovered layout is then used to guide shape estimation of the remaining objects using their normal information. Experiments on synthetic and real datasets show that our approach is state-of-the-art in both accuracy and efficiency. Our system can handle cluttered scenes with complex geometry that are challenging to existing techniques.
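The line and vanishing-point cues above feed the layout inference; one common estimator for a vanishing point (a generic illustration, not the authors' constraint-graph pipeline) takes the least-squares intersection of the supporting lines of a set of converging segments, which reduces to a 2x2 linear system.

```python
import math

def vanishing_point(segments):
    """Least-squares intersection of the lines through 2D segments.
    Each segment ((x1, y1), (x2, y2)) contributes the constraint
    n . x = n . p with unit normal n; accumulate 2x2 normal equations.
    Assumes the segments are not all parallel (det != 0)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x1, y1), (x2, y2) in segments:
        nx, ny = y2 - y1, x1 - x2          # normal to the segment direction
        norm = math.hypot(nx, ny)
        nx, ny = nx / norm, ny / norm
        c = nx * x1 + ny * y1              # line offset: n . p
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * c; b2 += ny * c
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two segments whose supporting lines (y = x and x + y = 2) cross at (1, 1).
vp = vanishing_point([((0.0, 0.0), (2.0, 2.0)), ((0.0, 2.0), (2.0, 0.0))])
```

With noisy detected segments the same formula gives the point minimizing the sum of squared distances to all supporting lines.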
The reconstruction of immersive and realistic 3D scenes holds significant practical importance in various fields of computer vision and computer graphics. Typically, immersive and realistic scenes should be free from obstructions by dynamic objects, maintain global texture consistency, and allow for unrestricted exploration. Current mainstream methods for image-driven scene construction iteratively refine an initial image with a moving virtual camera to generate the scene. However, previous methods struggle with visual discontinuities due to global texture inconsistencies under varying camera poses, and they frequently exhibit scene voids caused by foreground-background occlusions. To this end, we propose Scene4U, a novel layered 3D scene reconstruction framework that starts from a panoramic image. Specifically, Scene4U integrates an open-vocabulary segmentation model with a large language model to decompose a real panorama into multiple layers. It then employs a diffusion-based layered repair module to restore occluded regions using visual cues and depth information, generating a hierarchical representation of the scene. The multi-layer panorama is then initialized as a 3D Gaussian Splatting representation, followed by layered optimization, which ultimately produces an immersive 3D scene with semantic and structural consistency that supports free exploration. Scene4U outperforms state-of-the-art methods, improving by 24.24% in LPIPS and 24.40% in BRISQUE, while also achieving the fastest training speed. Additionally, to demonstrate the robustness of Scene4U and allow users to experience immersive scenes of various landmarks, we build the WorldVista3D dataset for 3D scene reconstruction, which contains panoramic images of globally renowned sites. The implementation code and dataset will be made publicly available.
The aim of this study is to better enable students to grasp the complex material science involved in engine casting, address various engineering challenges, gain precise control of the casting process, strengthen their practical skills, and enhance the effectiveness of teaching engine forging. Firstly, on the basis of real engine forging production, complete 3D modeling of each production line is carried out using the 3dsMax software. Then, the finished models are imported into the virtual simulation software Unity3D to arrange production layouts and perform interactive operations. Lastly, a user interface is designed for the built scene, and the software release is completed. Feedback from users indicates that the software designed in this project serves as a valuable reference and educational tool for students learning about engine casting and related knowledge. However, the complexity of the actual production process was simplified, and due to certain hardware and resource limitations, there remains a gap between the simulation and real production. Future updates and improvements to the software will be made using technical methods based on the feedback received.
This paper discusses the application of virtual simulation in teaching and training. Focusing on the advantages and characteristics of interconnecting a database with Unity3D, and drawing on the current state of real-world instruction, it combines 3ds Max modeling, Unity3D virtual reality scene building, and MySQL database information management to explore the development of virtual training software. The resulting software compensates, to a certain extent, for the shortcomings of current real-world teaching and provides trainees with a more realistic, interactive, and personalized learning experience. The approach is demonstrated in practice with virtual training software for a particular flight control system.
This paper introduces a method of using virtual simulation technology and the Unity3D engine to design a virtual simulation system for beef cattle segmentation. Through 3ds Max modeling and an interactive interface, students can learn the anatomy and muscle tissue of beef cattle in a virtual environment, reducing operational costs and risks while providing more practice opportunities and feedback. The system design includes scene construction, animation, muscle tissue modeling, and other functions, and it enhances the user experience through button interaction and multi-device support, providing a new teaching method for the education industry.
The creation of complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering methods have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over the shape and texture. We introduce SceneCraft, a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences provided by users. Central to our method is a rendering-based technique, which converts 3D semantic layouts into multi-view 2D proxy maps. Furthermore, we design a semantic and depth conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Without the constraints of panorama image generation, we surpass previous methods in supporting complicated indoor space generation beyond a single room, even as complicated as a whole multi-bedroom apartment with irregular shapes and layouts. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality. Code and more results are available at: https://orangesodahub.github.io/SceneCraft
This paper presents a novel method for reconstructing indoor scenes, both structures and objects, from a single panorama photo. The method combines room structure estimation, furniture detection, model selection, and 3D position reasoning. Compared with other approaches, our preliminary results show that this method achieves nearly the same performance with a simpler procedure.
No abstract available
Diffusion-based methods have achieved remarkable success in 2D image and 3D object generation; however, the generation of 3D scenes and even $360^{\circ}$ images remains constrained, due to the limited number of scene datasets, the complexity of 3D scenes themselves, and the difficulty of generating consistent multi-view images. To address these issues, we first establish a large-scale panoramic video-text dataset containing millions of consecutive panoramic keyframes with corresponding panoramic depths, camera poses, and text descriptions. Then, we propose a novel text-driven panoramic generation framework, termed DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation. Specifically, benefiting from the powerful generative capabilities of stable diffusion, we fine-tune a single-view text-to-panorama diffusion model with LoRA on the established panoramic video-text dataset. We further design a spherical epipolar-aware multi-view diffusion model to ensure the multi-view consistency of the generated panoramic images. Extensive experiments demonstrate that DiffPano can generate scalable, consistent, and diverse panoramic images given unseen text descriptions and camera poses.
The increasing demand for augmented and virtual reality applications has highlighted the importance of crafting immersive 3D scenes from a simple single-view image. However, due to the partial priors provided by single-view input, existing methods are often limited to reconstructing low-consistency 3D scenes with narrow fields of view. These limitations make them less capable of generalizing to reconstruct immersive scenes. To address this problem, we propose ExScene, a two-stage pipeline to reconstruct an immersive 3D scene from any given single-view image. ExScene designs a novel multimodal diffusion model to generate a high-fidelity and globally consistent panoramic image. We then develop a panoramic depth estimation approach to calculate geometric information from the panorama, and we combine this geometric information with the high-fidelity panoramic image to train an initial 3D Gaussian Splatting (3DGS) model. Following this, we introduce a GS refinement technique with 2D stable video diffusion priors. We add camera trajectory consistency and color-geometric priors into the denoising process of diffusion to improve color and spatial consistency across image sequences. These refined sequences are then used to fine-tune the initial 3DGS model, leading to better reconstruction quality. Experimental results demonstrate that our ExScene achieves consistent and immersive scene reconstruction using only single-view input, significantly surpassing state-of-the-art baselines.
ABSTRACT In order to gain a deeper understanding of the mechanisms that shape the human sense of place, more and more geographical studies are beginning to combine auditory and visual perception. However, the majority of them combine the acoustic environment with limited-field-of-view landscape images, ignoring the impact of space on the formation of the acoustic environment and auditory perception. In fact, since scene sounds are multidirectional and propagate in 3D space, combining the acoustic environment with visually restricted images omits spatial and visual soundscape information. To solve this problem, we developed a Soundscape-to-Panorama model that generates landscape panoramas from audio data. This model offers richer visual cues compared to the Soundscape-to-Image model. It enables a deeper exploration of spatial and place information within scene audio and demonstrates excellent performance in both task-specific and general evaluations. From the perspective of geographical research, our study is the first to integrate panorama generation with the acoustic environment in the understanding of place perception. We have broken through the limitations of previous research, observed the importance of space in acoustic environment research, and opened up a new path for future acoustic environment research.
Text-driven 3D indoor scene generation holds broad applications, ranging from gaming and smart homes to AR/VR applications. Fast and high-fidelity scene generation is paramount for ensuring user-friendly experiences. However, existing methods are characterized by lengthy generation processes or necessitate the intricate manual specification of motion parameters, which introduces inconvenience for users. Furthermore, these methods often rely on narrow-field viewpoint iterative generations, compromising global consistency and overall scene quality. To address these issues, we propose FastScene, a framework for fast and higher-quality 3D scene generation, while maintaining the scene consistency. Specifically, given a text prompt, we generate a panorama and estimate its depth, since the panorama encompasses information about the entire scene and exhibits explicit geometric constraints. To obtain high-quality novel views, we introduce the Coarse View Synthesis (CVS) and Progressive Novel View Inpainting (PNVI) strategies, ensuring both scene consistency and view quality. Subsequently, we utilize Multi-View Projection (MVP) to form perspective views, and apply 3D Gaussian Splatting (3DGS) for scene reconstruction. Comprehensive experiments demonstrate FastScene surpasses other methods in both generation speed and quality with better scene consistency. Notably, guided only by a text prompt, FastScene can generate a 3D scene within a mere 15 minutes, which is at least one hour faster than state-of-the-art methods, making it a paradigm for user-friendly scene generation.
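The Multi-View Projection (MVP) step above maps an equirectangular panorama into pinhole perspective views, which can be sketched as a per-pixel coordinate lookup. This is a generic equirectangular-to-perspective mapping under assumed conventions (z forward, x right, y down, nearest-neighbour sampling), not FastScene's actual implementation; all names and parameters are illustrative.

```python
import math

def pano_lookup(w_pano, h_pano, yaw, pitch, fov_deg, w_out, h_out):
    """For each pixel of a pinhole view (given yaw/pitch and horizontal FOV),
    return the (x, y) source coordinate in an equirectangular panorama."""
    # Focal length in pixels from the horizontal field of view.
    f = (w_out / 2) / math.tan(math.radians(fov_deg) / 2)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    grid = []
    for v in range(h_out):
        row = []
        for u in range(w_out):
            # Ray through the pixel center in camera space.
            x, y, z = u - w_out / 2 + 0.5, v - h_out / 2 + 0.5, f
            # Rotate by pitch (about x), then yaw (about y).
            y, z = cp * y - sp * z, sp * y + cp * z
            x, z = cy * x + sy * z, -sy * x + cy * z
            # Ray direction -> spherical longitude/latitude.
            lon = math.atan2(x, z)                              # [-pi, pi]
            lat = math.asin(y / math.sqrt(x * x + y * y + z * z))  # [-pi/2, pi/2]
            # Spherical angles -> equirectangular pixel coordinates.
            px = (lon / math.pi + 1) / 2 * (w_pano - 1)
            py = (lat / (math.pi / 2) + 1) / 2 * (h_pano - 1)
            row.append((px, py))
        grid.append(row)
    return grid

# Forward-facing 90-degree view into a 2048x1024 panorama: the center pixel
# of the view lands at the center of the panorama.
grid = pano_lookup(2048, 1024, 0.0, 0.0, 90.0, 101, 101)
```

Rendering a view is then a gather over the panorama using these coordinates (with bilinear interpolation in practice); the same mapping applied to a panoramic depth map yields per-view depth for the 3DGS reconstruction.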
We present a novel virtual staging application for kitchen remodeling from a single panorama. To ensure the realism of the virtual rendered scene, we capture real-world High Dynamic Range (HDR) panoramas and recover the absolute scene radiance for high-quality scene relighting. Our application pipeline consists of three key components: (1) HDR photography for capturing paired indoor and outdoor panoramas, (2) automatic kitchen layout generation with new kitchen components, and (3) an editable rendering pipeline that flexibly edits scene materials and relights the new virtual scene with global illumination. Additionally, we contribute a novel Pano-Pano HDR dataset with 141 paired indoor and outdoor panoramas and present a low-cost photometric calibration method for panoramic HDR photography.
The final grouping reveals the layered structure of Unity3D roaming-game research: at the technical layer, AIGC-driven automated scene generation and underlying interaction algorithms (collision, perception) provide the foundation; at the application layer, the work spans diverse domains such as educational training, digital twins, and cultural heritage preservation; and at the societal-value layer, research has extended to assistance for special populations and psychological intervention. The overall trend is a deepening evolution from simple visual walkthroughs toward multimodal perception, intelligent generation, and cross-industry integration.