Researchers' Needs
Needs Driven by National Strategy and the S&T Evaluation System
This group of literature examines the research environment at the macro and meso levels, covering the top-level design of national and regional talent strategies, policy evolution in the context of science and technology (S&T) self-reliance, and the pressing needs of universities, enterprises, and agricultural research institutes in evaluation mechanisms, human resource management, and talent ecosystem building.
- Exploring Pathways for the Integrated Development of Education, Science and Technology, and Talent (吴彬, 2024, 社会科学前沿)
- A Comparative Study of Policies for the Integrated Development of Education, S&T, and Talent across Chinese Regions (李彩霞, 杨硕, 王钰宁, 李贺南, 2026, 教育进展)
- A Regional Comparison of S&T Talent Attractiveness: The Case of Shanghai (黄也展, 2022, 应用数学进展)
- S&T Talent in Northeast China: Comparison and Outlook (陈舒, 杜敏智, 2023, 现代管理)
- Constructing an Evaluation Indicator System for S&T Talent Development under the Goal of S&T Self-Reliance (彭诗懿, 何增华, 2024, 社会科学前沿)
- Reflections on Reforming the Evaluation System for University Teaching and Research Staff amid "Double First-Class" Construction (裴存原, 倪世兵, 孙盼盼, 张冬梅, 2024, 教育进展)
- Building S&T Talent Ecosystems in Chinese Enterprises (董建华, 王福英, 2024, 可持续发展)
- Countermeasures for New Quality Productive Forces Empowering Reform of Hubei's S&T Talent Cultivation System (刘博, 2026, 社会科学前沿)
- On Approaches to Strengthening S&T Talent Teams in State-Owned Enterprises (徐志远, 2022, 服务科学和管理)
- Current Status and Countermeasures of University Human Resource Management in China: From the Perspective of Teaching and Research Staff (刘文婷, 2023, 社会科学前沿)
- Reflections on S&T Talent Development in Agricultural Research Institutes (王尚德, 周娟, 郑志琴, 2019, 可持续发展)
Career Growth and Psychological Needs of Graduate Students and Young Researchers
This group focuses on the micro-level needs of research practitioners, examining master's and doctoral students and young researchers in terms of innovation capability cultivation, research motivation framed by self-determination theory, psychological identification (sense of gain), and skill development and career paths in specific sectors such as medicine and defense.
- On Cultivating and Managing the Research Capabilities of Medical Graduate Students (徐丹, 吴丹, 2023, 教育进展)
- Building University Laboratory Technician Teams Based on Self-Determination Theory (杨超, 2025, 教育进展)
- The Effect of Master's Students' Sense of Research Gain on Knowledge-Sharing Willingness and Research Stress (杨剑, 陈永进, 董超华, Unknown Journal)
- Exploration and Practice in Cultivating Young Research Talent Teams (周树雄, 2020, 管理科学与工程)
- Needs and Measures for Cultivating Defense S&T Talent amid the New Military Transformation (蒋薇, 苏绍璟, 洪华杰, 郭晓俊, 左震, 2021, 教育进展)
- Factors Influencing and Improving Graduate Students' Research Innovation Capability: The Case of Accounting Master's Students (李淑珍, 2014, 教育进展)
- Pathways for Research Institutes to Improve the Effectiveness of Ideological Work among Young Staff (褚宏观, 宋俪超, 2025, 社会科学前沿)
Needs for Innovative Research Organization Models and Institutional Management Effectiveness
This group concerns the internal operating mechanisms of research institutions, covering researchers' needs to participate in "organized research," PDCA-cycle management of research funding, professionalized management of laboratory technician teams, and collaborative mechanism innovation amid the transformation of knowledge production modes.
- The Practical Significance of University Faculty Participation in Organized Research and Paths for Improvement (蒋松言, 2025, 教育进展)
- Full-Process Management of University Research Funding Based on the PDCA Cycle (鞠丹, 2022, 现代管理)
- Research Agenda-Setting in Medicine: Shifting from a Research-Centric to a Patient-Centric Approach(Ania Korsunska, 2021, Diversity, Divergence, Dialogue)
- Organized Research Innovation in the Evolution of Mode 2 Knowledge Production: The Practical Logic of Building Interdisciplinary Platforms for New Medical Sciences (孙郡宏, 曹茂甲, 2025, 教育进展)
Needs for High-Performance Computing, Cloud Services, and Compute Resource Scheduling
Against the backdrop of data-intensive science, this group examines researchers' needs for high-performance computing (HPC), integration of quantum computing resources, cloud workflow scheduling, Jupyter-based remote service platforms, and the computing environments that support large scientific facilities such as HEPS.
- HEPS scientific computing platform design for interactive data analysis scenarios(Qingbao Hu, Wei Zheng, Xiaofei Yan, Binbin Li, Yaosong Cheng, Jiping Xu, 2025, EPJ Web of Conferences)
- Integrating quantum computing resources into scientific HPC ecosystems(Thomas Beck, Alessandro Baroni, R. Bennink, Gilles Buchs, Eduardo Antonio Coello Pérez, M. Eisenbach, Rafael Ferreira da Silva, Muralikrishnan Gopalakrishnan Meena, K. Gottiparthi, Peter Groszkowski, Travis S. Humble, Ryan Landfield, Ketan Maheshwari, Sarp Oral, Michael A. Sandoval, Amir Shehata, In-Saeng Suh, Chris Zimmer, 2024, Future Gener. Comput. Syst.)
- Workflow as a Service Broker in Cloud Environment: A Systematic Literature Review(Saeid Abrishami, Farid Zandi, Alireza Nourbakhsh, 2025, ArXiv)
- Developing Real-Time Services with High Performance and Cloud Security Enabled Framework via Adjusted TLS v1.3 for On-Demand HIPA Activity Calculations(Mei-Chi Chang, V. Talanov, J. Snuverink, Daniela Kiselev, 2023, 2023 10th International Conference on Future Internet of Things and Cloud (FiCloud))
- Evolution of the HEPS Jupyter-based remote data analysis System(Zhibing Liu, Qiulan Huang, H. Tian, Yu Hu, Jingyan Shi, R. Du, Hao Hu, Lu Wang, Fazhi Qi, 2021, EPJ Web of Conferences)
- A Benchmarking Framework for Hybrid Quantum–Classical Edge-Cloud Computing Systems(Guoxing Yao, Lav Gupta, 2025, Applied Sciences)
- OPSA: an optimized prediction based scheduling approach for scientific applications in cloud environment(Gurleen Kaur, Anju Bala, 2021, Cluster Computing)
- Building a collaborative cloud platform to accelerate heart, lung, blood, and sleep research(S. Ahalt, P. Avillach, Rebecca R. Boyles, K. Bradford, Steven Cox, Brandi N. Davis-Dusenbery, R. L. Grossman, A. Krishnamurthy, Alisa K. Manning, B. Paten, A. Philippakis, I. Borecki, S. Chen, Jon Kaltman, Sweta Ladwa, Chip Schwartz, Alastair Thomson, Sarah Davis, Alison Leaf, Jessica Lyons, Elizabeth Sheets, J. Bis, M. Conomos, Alessandro Culotti, T. Desain, J. DiGiovanna, Milan Domazet, S. Gogarten, A. Gutiérrez-Sacristán, Tim Harris, Benjamin D. Heavner, D. Jain, Brian O'Connor, K. Osborn, D. Pillion, Jacob Pleiness, Kenneth Rice, Garrett Rupp, Arnaud Serret-Larmande, Albert Smith, Jason Stedman, A. Stilp, Teresa Barsanti, John B. Cheadle, C. Erdmann, Brandy Farlow, Allie Gartland-Gray, Julie Hayes, Hannah Hiles, Paul Kerr, C. Lenhardt, Tom Madden, Joanna O. Mieczkowska, Amanda Miller, Patricia A. Patton, M. Rathbun, Stephanie Suber, J. Asare, 2023, Journal of the American Medical Informatics Association : JAMIA)
Needs for Digital Tools, Knowledge Management, and Lowering Research Barriers
This group focuses on the application of concrete research tools, including electronic laboratory notebooks (ELNs), knowledge graphs and ontology construction, software dependency management (Docker), domain-specific data acquisition tools, and lowering the technical barriers to interdisciplinary research through API-driven plugins and machine learning tutorials.
- Mapping and understanding Earth: Open access to digital geoscience data and knowledge supports societal needs and UN sustainable development goals(Klaus Hinsby, Philippe Négrel, D. D. de Oliveira, R. Barros, Guri Venvik, A. Ladenberger, Jasper Griffioen, Kris Piessens, P. Calcagno, Gregor Götzl, H. Broers, L. Gourcy, S. van Heteren, Julie Hollis, E. Poyiadji, D. Čápová, J. Tulstrup, 2024, Int. J. Appl. Earth Obs. Geoinformation)
- Data infrastructure for integrating clinical data in the large-scale international ORCHESTRA cohort: from data import to federated analysis(M. Puskaric, Hammam Abu Attieh, F. Prasser, R. Gusinow, Chiara Dellacasa, Elisa Rossi, Juan Mata Naranjo, L. Canziani, A. Górska, Jan Hasenauer, 2024, 2024 IEEE International Conference on Big Data (BigData))
- AI-VERDE: A Gateway for Egalitarian Access to Large Language Model-Based Resources For Educational Institutions(P. Mithun, Enrique Noriega-Atala, Nirav Merchant, Edwin Skidmore, 2025, ArXiv)
- Physics Community Needs, Tools, and Resources for Machine Learning(P. Harris, E. Katsavounidis, W. McCormack, D. Rankin, Yongbin Feng, A. Gandrakota, C. Herwig, B. Holzman, K. Pedro, Nhan Tran, Tingjun Yang, J. Ngadiuba, Michael W. Coughlin, S. Hauck, Shih-Chieh Hsu, E. Khoda, De-huai Chen, M. Neubauer, Javier Mauricio Duarte, G. Karagiorgi, Miaoyuan Liu, 2022, ArXiv)
- Design and implementation of an instrument control platform for future beamline experiments at SPring-8(K. Nakajima, K. Motomura, T. Hiraki, K. Nakada, T. Sugimoto, K. Watanabe, T. Osaka, H. Yamazaki, H. Ohashi, Y. Joti, T. Hatsui, M. Yabashi, 2022, Journal of Physics: Conference Series)
- Considerations for implementing electronic laboratory notebooks in an academic research environment(S. Higgins, Akemi A Nogiwa-Valdez, Molly M. Stevens, 2022, Nature Protocols)
- Ontology Creation Process in Knowledge Management Support System for a Research Institute(C. Chudzian, 2023, Journal of Telecommunications and Information Technology)
- DockerPedia: A Knowledge Graph of Software Images and Their Metadata(Maximiliano Osorio, Carlos Buil-Aranda, Idafen Santana-Pérez, D. Garijo, 2022, Int. J. Softw. Eng. Knowl. Eng.)
- Modbus Data Provider for Automation Researcher Using C#(Sudipto Chakraborty, Sreeramana Aithal, 2023, International Journal of Case Studies in Business, IT, and Education)
- Web of venom: exploration of big data resources in animal toxin research(G. Zancolli, B. V. von Reumont, Gregor Anderluh, F. Çalışkan, M. Chiusano, Jacob Fröhlich, E. Hapeshi, Benjamin-Florian Hempel, M. Ikonomopoulou, F. Jungo, Pascale Marchot, Tarcisio Mendes de Farias, M. V. Modica, Yehu Moran, A. Nalbantsoy, Jan Procházka, Andrea Tarallo, Fiorella Tonello, Rui Vitorino, M. Zammit, Agostinho Antunes, 2024, GigaScience)
- PymolFold: A PyMOL Plugin for API-driven Structure Prediction and Quality Assessment(Yifan Deng, Jinyuan Sun, 2025, bioRxiv)
- A User’s Guide to Machine Learning for Polymeric Biomaterials(Travis A. Meyer, César Ramírez, Matthew Tamasi, A. Gormley, 2022, ACS Polymers Au)
This synthesis distills researchers' needs into five core dimensions: (1) institutional needs, emphasizing fair, pluralistic evaluation systems aligned with national policy; (2) career growth needs, focusing on the psychological identification and capability development of young talent and graduate students; (3) organizational management needs, concerning process optimization under organized research models; (4) compute resource needs, emphasizing convenient scheduling of HPC, quantum computing, and cloud resources; and (5) tool application needs, centered on improving research efficiency through digital tools, knowledge management, and barrier-lowering technologies such as AI plugins. Together these present a comprehensive picture of needs, from macro-level safeguards to micro-level tool support.
A total of 42 related references.
Current human resource management in Chinese universities suffers from an unclear management philosophy, ineffective resource allocation, a lack of training and development mechanisms, and inadequate incentive systems; targeted reform is imperative. An analysis of university HR management models in the UK, the US, and Japan offers lessons for reform in China. The management philosophies and systems of all three countries emphasize the personal development of teaching and research staff, most visibly in UK universities. US university management is characterized by fair, open, and rigorous systems; Chinese universities need to innovate institutionally on the basis of a shift in philosophy, building fairer, more transparent, and more rigorous assessment and incentive mechanisms. Japanese universities offer more generous incentives, and their open staff mobility system is also worth borrowing to ease shortages in HR allocation. Any borrowing from other models must, however, be grounded in the realities of Chinese higher education. In today's environment, university development is inseparable from a high-quality teaching and research workforce. Building a people-centered HR management model requires transforming traditional thinking, continuously improving management mechanisms, fully mobilizing staff enthusiasm, and developing staff potential to promote the long-term development of universities.
Universities are the main force of basic research and the source of major scientific breakthroughs, shouldering the dual tasks of talent cultivation and technological innovation. Accelerating the transformation of research paradigms and organizational models and strengthening organized research are key to universities serving national and regional strategic needs and achieving high-level S&T self-reliance. As the main undertakers and core force of university research, faculty benefit from participating in organized research: it expands their expertise and helps set research directions, brings in academic resources and improves research efficiency, and fosters the professional growth of young faculty while letting leading talents play their role as strategic scientists. To raise the quality and level of faculty participation in organized research, faculty should deepen their understanding of policy and align with strategy, adjusting research directions and organizational models; universities should improve management systems and evaluation mechanisms and advance talent cultivation and platform construction, providing strong support for accelerating the building of a world-class talent center and innovation hub.
In recent years Chinese higher education has been committed to "Double First-Class" construction, making the improvement of overall strength and international competitiveness a pressing priority. As teachers are central to education, the scientific soundness and effectiveness of their performance evaluation system are vital to university development. Facing the dual challenges of internationalization and quality improvement, the traditional evaluation system no longer fits the new era. This paper examines the new requirements the "Double First-Class" strategy imposes on teacher evaluation and analyzes current problems in depth, such as single-metric assessment and neglect of individual differences and innovation capability. It then proposes reforms, including diversified evaluation indicators, equal weight for teaching and research, lifelong learning mechanisms, and dynamic adjustment mechanisms, aiming to optimize the evaluation system for teaching and research staff, promote professional development and innovation, and raise teaching quality and research standards in support of higher education's development.
Talent development in China's agricultural research institutes has made long-term progress and provided important support for agricultural research, but problems remain: an unreasonable talent structure, unstable teams, and research competence in need of improvement. The causes relate to unclear institutional positioning and incomplete talent cultivation plans, gaps in researchers' awareness and knowledge renewal, and imperfect evaluation mechanisms. The article proposes measures on several fronts: strengthening top-level design and long-term planning, controlling entry and stimulating intrinsic motivation, encouraging innovation and boosting morale, improving conditions and stabilizing teams, and optimizing evaluation to unleash vitality. The goal is to let talents' creative energy fully emerge, place every kind of talent where it fits best, and provide the human and intellectual support needed for the healthy, long-term development of China's agricultural science and technology.
Company K has long emphasized cultivating young talent and has established relatively complete training policies and incentive measures. On this basis, its research institute has innovated its talent cultivation mechanisms according to its own characteristics, accelerating the growth of young researchers and building a clearly tiered research workforce. It has formed young research and innovation teams organized around youth "designations," "posts," and "squads" as well as research project groups as basic units; these have played an important role in actual production and provided strong core technical support for the company's steady development.
Under conditions of modernity, the fragmentation and homogenization of everyday life have deepened young people's crisis of self-identity, even as they show initiative in self-construction. Ideological and political work for young staff in research institutes must grasp this basic characteristic of contemporary youth and improve its approach in light of the concrete problems young staff face. At its core, such work means guiding youth with correct values. Research institutes should act as confidants and supporters of young staff: increasing the relevance of ideological work to individual concerns, combining explicit and implicit education to strengthen the sense of gain from value guidance, helping young researchers resolve difficulties in their research work, and improving everyday welfare, thereby defusing the youth identity crisis and safeguarding their ontological security.
Technological innovation is in essence a human creative activity; talent is a nation's primary resource for development and the most active, dynamic factor in innovation. Through data mining and comparative analysis of S&T personnel and R&D personnel nationwide, in the three northeastern provinces, and in their national high-tech zones and high-tech enterprises, this paper provides detailed, scientific information to help Northeast China restore its former vitality in S&T talent, stimulate regional innovation, build an open and inclusive talent service system, ease the long-standing outflow trend, break the difficult situation facing revitalization, and create a rich talent reservoir for the full revitalization of the Northeast.
In recent years, under the national strategy of "advancing the construction of a Healthy China," the cultivation of medical talent for the new era has played a vital role, and managing the research capability of medical graduate students is an important aspect of the development and innovation of medical education in China. Centering on the background and current state of medical graduate training, this paper describes approaches to improving graduate students' research capability and proposes how to strengthen hospital research management systems, offering a reference for medical research administrators on cultivating the research capabilities of medical graduate students.
This paper argues that graduate students' research and innovation capability fundamentally determines the quality of graduate education. Taking accounting graduate training as an example, it analyzes the current state of graduate education in China, examines the factors influencing research and innovation capability at the national, institutional, and individual levels, and offers suggestions for improving that capability.
In recent years the state has steadily increased investment in university research funding, and growing cooperation between universities and enterprises and public institutions has diversified funding sources, producing leapfrog growth in funding amounts. Research funding management has thus become a core part of university financial management, with an ever greater role in university development. By analyzing the problems and risk points across the management process (budget application, project approval, claiming received funds, accounting, budget adjustment, project closure, performance, and auditing), this paper applies the full-process management concept and methods of the PDCA cycle, combined with information technology, to construct and improve the funding management framework and optimize the management process, with the aim of raising the efficiency of research fund use.
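The full-process PDCA idea above can be sketched as a loop over funding stages. This is a hedged illustration: the stage names, amounts, and class design below are invented for the example and are not taken from the paper.

```python
# Minimal sketch of PDCA-cycle funding management (hypothetical stages and figures).
from dataclasses import dataclass, field

STAGES = ["budget_application", "fund_claiming", "accounting"]

@dataclass
class ProjectFunds:
    budget: float
    spent: float = 0.0
    log: list = field(default_factory=list)

    def plan(self, stage):            # Plan: set the target for this stage
        self.log.append(("plan", stage))

    def do(self, stage, amount):      # Do: record actual spending at this stage
        self.spent += amount
        self.log.append(("do", stage, amount))

    def check(self):                  # Check: compare cumulative spending to budget
        return self.budget - self.spent

    def act(self, stage):             # Act: feed deviations into the next cycle
        remaining = self.check()
        if remaining < 0:
            self.log.append(("act", stage, "overspend flagged for adjustment"))
        return remaining

p = ProjectFunds(budget=100.0)
for stage in STAGES:
    p.plan(stage)
    p.do(stage, 20.0)
    p.act(stage)
print(p.check())  # remaining budget after three stages
```

Each pass through plan/do/check/act closes one loop of the cycle; deviations found in Check become the Plan input of the next stage, which is the point of PDCA-style funding control.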
As high-caliber national talent, master's students have always had their research capability treated as a key concern of graduate education, and that capability is mainly reflected in research output. Yet students' needs and feelings during the research output process remain unclear. Surveying 316 participants, this study finds that master's students' sense of research gain comprises two components, perceived objective gains and value identification, and that individuals with a high sense of research gain show stronger willingness to share knowledge during project development and lower research stress. Schools, advisors, and parents should therefore help master's students form correct perceptions of research and provide support to raise their sense of research gain.
As the new round of scientific and industrial revolution deepens, it brings enterprises both new opportunities for transformation and new challenges. Sustaining high-quality development places higher demands on the building of S&T talent teams. This paper analyzes the significance of strengthening such teams, discusses the current problems and shortcomings in their construction, and proposes means and approaches for improvement.
Based on the theory of Mode 2 knowledge production, this paper examines how interdisciplinary platform construction restructures the organization of research, proposes a three-dimensional development model of interdisciplinary platforms (vertical deepening, horizontal expansion, and connotative evolution), and analyzes operating paths through the case of a typical medical school platform. From the perspective of organized research, it proposes concrete construction paths and strategies for interdisciplinary platforms. By concentrating and integrating resources and research forces, platforms can center on task-driven work, undertake major research projects, enhance original innovation capability, and cultivate versatile talent with interdisciplinary backgrounds, providing strong human support for sustainable development in related fields.
Against the backdrop of high-quality university development, laboratory technician teams have become a key force in talent cultivation and technological innovation, yet their support capability still falls markedly short of universities' needs. Analyzing the teams' existing problems (promotion barriers, lax assessment, lack of training, fragmented management, and insufficient incentives) and drawing on effective practices at some universities, this paper proposes a team-building system guided by self-determination theory, centered on improving support capability, and using satisfaction of the three needs of autonomy, competence, and relatedness as incentives, so as to ensure the team's sustainable, high-quality development, meet universities' needs for cultivating innovative talent, and empower research practice and innovation.
S&T talent is a core resource for innovative economic and social development. Taking Shanghai as an example and using 2020 district-level panel data, this paper applies factor analysis to empirically compare the S&T talent attractiveness of Shanghai's districts in 2020. Results show that Pudong New Area ranks highest; the central urban districts hold no clear advantage in attracting S&T talent; and the five new cities play an important role. Shanghai should fully leverage the radiating role of its central districts, achieving coordinated development of both wings and its north-south transformation.
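The factor-analysis ranking can be approximated with a first-principal-component composite score, a common simplification of the method. The district labels and indicator values below are fabricated for illustration; they are not the paper's 2020 panel data.

```python
import numpy as np

# Hypothetical indicators (rows: districts; cols: e.g. R&D spend, talent count, wages).
districts = ["A", "B", "C", "D"]
X = np.array([[9.0, 8.5, 7.0],
              [4.0, 5.0, 6.0],
              [7.5, 7.0, 8.0],
              [3.0, 2.5, 4.0]])

# Standardize, then project onto the first principal component as a composite score.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
pc1 = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
pc1 = pc1 if pc1.sum() >= 0 else -pc1      # fix sign so higher indicators score higher
scores = Z @ pc1
ranking = [districts[i] for i in np.argsort(scores)[::-1]]
print(ranking)
```

A full factor analysis would also rotate factors and weight several components by explained variance; the single-component score above only conveys the "compress correlated indicators into one attractiveness score, then rank" idea.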
The integrated development of education, science and technology, and talent is a core driver of high-quality development and a key path toward building strong national education, S&T, and talent systems. Based on policy text analysis, this paper systematically reviews the policy evolution and overall characteristics of China's integrated development, compares the differentiated practice models of the eastern, central, western, and northeastern regions, and proposes future directions. It finds that integration policy has evolved through three stages, dispersed advancement (2012-2022), initial coordination (2022-2024), and deepened integration (2024-present), and that a three-dimensional national goal system of "education as foundation, S&T as breakthrough, talent as lead" has formed. Regions show distinctive features, with policy goals, implementation paths, safeguard mechanisms, and outcomes each precisely adapted. Going forward, stronger overall coordination, narrowing of regional gaps, and activation of innovation momentum are needed to break through regional disparities and coordination barriers and form a new pattern of integrated development that is coordinated across regions, open, and mutually beneficial.
With deepening globalization and informatization, technological innovation has become the core driver of competitiveness for enterprises and nations alike. As the main actors in technological innovation, enterprises find that cultivating, attracting, and retaining S&T talent is key to their core competitiveness. Building an S&T talent ecosystem bears on both an enterprise's long-term development and the national innovation system. Such an ecosystem involves many aspects, including but not limited to talent cultivation mechanisms, attraction strategies, incentive and retention strategies, interdisciplinary integration, and collaboration with other innovation actors. By exploring the construction and optimization of S&T talent ecosystems in depth, the paper offers Chinese enterprises systematic strategies for talent management and development, aiming to drive the sustainable development of enterprises and the nation through technological innovation.
S&T self-reliance has become the strategic underpinning of national S&T development, and S&T talent is the core resource for innovative development; the evaluation system for talent development should be adjusted to fit this goal. Starting from the meaning of S&T talent under the self-reliance goal, the paper constructs a talent development indicator system across three levels (talent structure and scale, talent capability, and talent development environment), determining indicator weights via the ordinal relation analysis (G1) method. The system helps improve S&T talents' independent innovation capability and offers a valuable reference for building policy systems adapted to the laws of S&T development.
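The ordinal relation (G1) weighting step works by back-substituting expert-supplied importance ratios between adjacently ranked indicators: with ratios r_k = w_{k-1}/w_k, the last weight is w_n = 1 / (1 + Σ_k Π_{i≥k} r_i) and the rest follow by multiplication. A minimal sketch with assumed ratios (the paper's actual expert judgments are not given here):

```python
# Ordinal relation analysis (G1 method): derive weights from importance ratios.
def g1_weights(ratios):
    """ratios[k] = w_k / w_{k+1} for consecutive indicators, most to least important."""
    n = len(ratios) + 1
    # w_n = 1 / (1 + sum over k of the product of ratios from position k onward)
    total = 0.0
    for k in range(len(ratios)):
        prod = 1.0
        for r in ratios[k:]:
            prod *= r
        total += prod
    w = [0.0] * n
    w[-1] = 1.0 / (1.0 + total)
    for k in range(n - 2, -1, -1):   # back-substitute: w_k = ratios[k] * w_{k+1}
        w[k] = ratios[k] * w[k + 1]
    return w

# Three assumed dimensions: structure/scale, capability, environment (ratios assumed).
weights = g1_weights([1.4, 1.2])
print([round(x, 3) for x in weights])
```

With ratios 1.4 and 1.2 the weights come out descending and sum to 1, which is the defining property of the method.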
The integrated development of education, science and technology, and talent is a major strategic decision of the Party in the new era. At present, however, integration of the three still faces the challenges of being insufficiently smooth and deep, and of unbalanced, inadequate development. Education, S&T, and talent are three closely related yet somewhat independent systems; advancing their integration therefore means neither conflating the three nor managing each in isolation. Integration should uphold Party leadership, put the people first, and adopt systems thinking. In practice, it requires improving top-level design, strengthening industry-academia-research integration, and reforming evaluation mechanisms.
New quality productive forces, with their high-technology, high-efficiency, and high-quality characteristics, are reshaping the global economic landscape and imposing entirely new requirements on S&T talent cultivation systems. As a major province for education and science, Hubei faces many new challenges to its traditional S&T talent cultivation system. Grounded in the strategy of integrated advancement of education, S&T, and talent, and focusing on the construction of Hubei's "51020" modern industrial clusters, this paper analyzes practical difficulties in talent supply-demand structure, regional factor allocation, depth of industry-education integration, and intelligent training paradigms. Adopting a problem-oriented approach, it proposes reform paths centered on consolidating the foundations of integrated development, activating its momentum, restructuring the talent cultivation paradigm, and renewing the talent evaluation ecosystem, offering a theoretical reference for Hubei's high-quality economic and social development and its accelerated rise as a key strategic pivot for central China.
The transformation of military technology and modes of warfare imposes entirely new demands on defense S&T talent. By functional characteristics and objects of work, the urgently needed talent falls into three types: defense S&T innovators, defense engineering technologists, and composite command personnel. For each type, measures are proposed at the school, enterprise, and national levels: multidisciplinary integration, school-enterprise cooperation, and lifelong learning. The convenience and rich resources of online education can meet demands for remote and personalized learning and can play an important role in the proposed cultivation measures.
Research increasingly relies on interrogating large-scale data resources. The NIH National Heart, Lung, and Blood Institute developed the NHLBI BioData CatalystⓇ (BDC), a community-driven ecosystem where researchers, including bench and clinical scientists, statisticians, and algorithm developers, find, access, share, store, and compute on large-scale datasets. This ecosystem provides secure, cloud-based workspaces, user authentication and authorization, search, tools and workflows, applications, and new innovative features to address community needs, including exploratory data analysis, genomic and imaging tools, tools for reproducibility, and improved interoperability with other NIH data science platforms. BDC offers straightforward access to large-scale datasets and computational resources that support precision medicine for heart, lung, blood, and sleep conditions, leveraging separately developed and managed platforms to maximize flexibility based on researcher needs, expertise, and backgrounds. Through the NHLBI BioData Catalyst Fellows Program, BDC facilitates scientific discoveries and technological advances. BDC also facilitated accelerated research on the coronavirus disease-2019 (COVID-19) pandemic.
Though knowledge management is a hot subject of interest in today's market companies, integrated solutions fit to the specific needs of research institutes still require more attention. This paper documents part of the research activities performed at the National Institute of Telecommunications related to the development of a research institute knowledge management support system. The ideas behind the system come from recent theories of knowledge creation and creativity support and from experience with the everyday practice of knowledge management in market companies. The main focus here is the creation of a research topics ontology that is meant to be the semantic backbone of the system. A three-stage approach is proposed, aiming at the construction of ontologies for different levels of the organizational hierarchy, from the individual researcher, through the group or unit, up to the whole institute. The created ontologies are linked to knowledge resources and support diverse activities performed at those levels.
Traditional approaches to research agenda-setting focus on researchers and their ability to review and synthesize literature, identify gaps, prioritize their ideas, and find the resources to make them a reality. Recent initiatives in medical research have shifted the focus away from the researcher to other stakeholders. Through a series of semi-structured interviews with medical researchers, we illustrate both the traditional researcher-centric as well as the novel patient-centric approaches. The patient-centric approach allows patients to contribute their diverse perspectives and pose unique questions, which can direct more impactful research agenda-setting. This paper provides insights into how medical research agendas are established, what factors impact decision-making and how an innovative use of crowdsourcing can refocus attention on the patient and their needs.
Machine learning (ML) is becoming an increasingly important component of cutting-edge physics research, but its computational requirements present significant challenges. In this white paper, we discuss the needs of the physics community regarding ML across latency and throughput regimes, the tools and resources that offer the possibility of addressing these needs, and how these can be best utilized and accessed in the coming years.
Quantum Computing (QC) offers significant potential to enhance scientific discovery in fields such as quantum chemistry, optimization, and artificial intelligence. Yet QC faces challenges due to the noisy intermediate-scale quantum era's inherent external noise issues. This paper discusses the integration of QC as a computational accelerator within classical scientific high-performance computing (HPC) systems. By leveraging a broad spectrum of simulators and hardware technologies, we propose a hardware-agnostic framework for augmenting classical HPC with QC capabilities. Drawing on the HPC expertise of the Oak Ridge National Laboratory (ORNL) and the HPC lifecycle management of the Department of Energy (DOE), our approach focuses on the strategic incorporation of QC capabilities and acceleration into existing scientific HPC workflows. This includes detailed analyses, benchmarks, and code optimization driven by the needs of the DOE and ORNL missions. Our comprehensive framework integrates hardware, software, workflows, and user interfaces to foster a synergistic environment for quantum and classical computing research. This paper outlines plans to unlock new computational possibilities, driving forward scientific inquiry and innovation in a wide array of research domains.
We present AI-VERDE, a unified LLM-as-a-platform service designed to facilitate seamless integration of commercial, cloud-hosted, and on-premise open LLMs in academic settings. AI-VERDE streamlines access management for instructional and research groups by providing features such as robust access control, privacy-preserving mechanisms, native Retrieval-Augmented Generation (RAG) support, budget management for third-party LLM services, and both a conversational web interface and API access. In a pilot deployment at a large public university, AI-VERDE demonstrated significant engagement across diverse educational and research groups, enabling activities that would typically require substantial budgets for commercial LLM services with limited user and team management capabilities. To the best of our knowledge, AI-VERDE is the first platform to address both academic and research needs for LLMs within a higher education institutional framework.
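The native RAG support mentioned in the abstract follows the general retrieve-then-prompt pattern. The sketch below is a generic, minimal illustration of that pattern, not AI-VERDE's actual implementation; the documents, tokenizer, and scoring are all invented for the example.

```python
# Minimal retrieval-augmented generation scaffold:
# bag-of-words retrieval + prompt assembly (illustrative only).
from collections import Counter
import math

docs = [
    "Course syllabus: weekly readings and grading policy.",
    "Cluster usage: submit jobs with the batch scheduler.",
    "LLM budget policy: per-group monthly token quotas.",
]

def vec(text):
    # Crude tokenizer: lowercase, split on whitespace, strip punctuation.
    return Counter(t.strip(".,:?") for t in text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=1):
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

def build_prompt(query):
    # The retrieved context is prepended so the LLM answers from local documents.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the monthly token quota policy?")
print(prompt)
```

A production system would replace the bag-of-words scoring with embedding vectors and a vector store, but the control flow (embed query, rank documents, splice the top hits into the prompt) is the same.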
Purpose: Modbus is a popular protocol for data exchange between devices in industrial automation. It has several advantages over other protocols, such as noise immunity, long-distance coverage, and easy integration with a microcontroller's serial module. Researchers sometimes need a device that provides data for their work. Here we demonstrate a procedure by which researchers can create a virtual Modbus client to obtain Modbus data. We created a Modbus client in C# in Visual Studio, adding modules such as the Modbus client, a serial module, and a message display, along with graphical user interface elements to control the application. The project code is available on GitHub, and researchers can obtain and customize it to their needs. Design/Methodology/Approach: The application has several modules, the main one being the Modbus client. The external device is connected through a USB-to-RS485 converter. When the application starts, a COM object is created along with a timer whose interval is one millisecond, which checks whether data has arrived. Once the serial object receives data, the packet is passed to the Modbus client, which parses it. If the packet is valid, the command is extracted; according to the received command, a response packet is created containing the data of the requested register. The CRC is calculated and appended at the end of the packet, and the response is sent back to the master. Findings/Result: Automation researchers sometimes lack a device to provide research data. This work provides a procedure for creating a virtual Modbus client that supplies such data from the researcher's own working system. We tested it several times at a baud rate of 9600, and it works without issues; researchers can use it as a software tool. Originality/Value: Several software programs provide data over Modbus, but most are not open source and sometimes need customization. We therefore provide an application whose code researchers can optimize and customize for their research work, offering a valuable resource. Paper Type: Experimental-based research.
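The CRC step in the response-packet workflow can be sketched in Python (the paper's tool is written in C#). Modbus RTU uses the standard CRC-16 with reflected polynomial 0xA001 and initial value 0xFFFF, and transmits the low CRC byte first; the frame below is a standard read-holding-registers request used only as a check value.

```python
# Modbus RTU CRC-16 (poly 0xA001 reflected, init 0xFFFF), as appended to each frame.
def modbus_crc16(frame: bytes) -> int:
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def append_crc(frame: bytes) -> bytes:
    crc = modbus_crc16(frame)
    return frame + bytes([crc & 0xFF, crc >> 8])   # low byte transmitted first

req = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])  # read 1 holding register at 0
full = append_crc(req)
# A frame with its CRC appended re-checks to zero residue.
assert modbus_crc16(full) == 0
```

The zero-residue property (recomputing the CRC over the full frame, CRC bytes included, yields 0) is how a receiver such as the paper's client validates an incoming packet before extracting the command.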
Research on animal venoms and their components spans multiple disciplines, including biology, biochemistry, bioinformatics, pharmacology, medicine, and more. Manipulating and analyzing the diverse array of data required for venom research can be challenging, and relevant tools and resources are often dispersed across different online platforms, making them less accessible to nonexperts. In this article, we address the multifaceted needs of the scientific community involved in venom and toxin-related research by identifying and discussing web resources, databases, and tools commonly used in this field. We have compiled these resources into a comprehensive table available on the VenomZone website (https://venomzone.expasy.org/10897). Furthermore, we highlight the challenges currently faced by researchers in accessing and using these resources and emphasize the importance of community-driven interdisciplinary approaches. We conclude by underscoring the significance of enhancing standards, promoting interoperability, and encouraging data and method sharing within the venom research community.
Understanding Earth's surface and subsurface holds immense value for society. This paper highlights the significance of open access to digital geoscience data ranging from the shallow topsoil or seabed to depths of 5 km. Such data play a pivotal role in facilitating endeavours such as renewable geoenergy solutions, resilient urban planning, supply of critical raw materials, assessment and protection of water resources, mitigation of floods and droughts, identification of suitable locations for carbon capture and storage, development of offshore wind farms, disaster risk reduction, and conservation of ecosystems and biodiversity. EuroGeoSurveys, the Geological Surveys of Europe, have worked diligently for over a decade to ensure open access to harmonised digital European geoscience data and knowledge through the European Geological Data Infrastructure (EGDI). EGDI acts as a data and information resource for providing wide-ranging geoscience data and research, as this paper demonstrates through selected research data and information on four vital natural resources: geoenergy, critical raw materials, water, and soils. Importantly, it incorporates near real-time remote and in-situ monitoring data, thus constituting an invaluable up-to-date database that facilitates informed decision-making, policy implementation, sustainable resource management, the green transition, achieving UN Sustainable Development Goals (SDGs), and the envisioned future of digital twins in Earth sciences. EGDI and its thematic map viewer are tailored, continuously enhanced, and developed in collaboration with all relevant researchers and stakeholders. Its primary objective is to address societal needs by providing data for sustainable, secure, and integrated management of surface and subsurface resources, effectively establishing a geological service for Europe.
We argue that open access to surface and subsurface geoscience data is crucial for an efficient green transition to a net-zero society, enabling integrated and coherent surface and subsurface spatial planning.
Cloud computing has emerged as a promising platform for running scientific workflows across various domains. Scientists can take advantage of different cloud service models, such as serverful or serverless, to execute workflows based on their specific requirements, along with diverse pricing models like on-demand, reserved, or spot instances to reduce execution costs. However, the challenge of selecting appropriate resources and pricing models, coupled with the orchestration and scheduling of workflow tasks, creates significant complexity for users. To mitigate this burden, Workflow as a Service (WaaS) brokers have been introduced to facilitate workflow execution. In recent years, numerous studies have been published, either directly or indirectly related to this research area, highlighting the need for a comprehensive and systematic review of WaaS brokers to identify key trends and challenges in this field. In this paper, we conduct a Systematic Literature Review (SLR) on WaaS brokers within cloud environments. The SLR employs a thorough 3-tier strategy (database search, backward snowballing, and forward snowballing) to answer five research questions. A total of 74 high-quality articles, published in 43 prestigious venues, are analyzed to derive a taxonomy based on the architecture of WaaS brokers. The articles are classified and surveyed according to this taxonomy, and future research directions for the design and implementation of WaaS brokers are explored. This study provides valuable insights for researchers and developers, helping them identify major trends and issues in the field of WaaS brokers.
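The broker's core decision, matching pricing models to workflow requirements, can be caricatured as choosing the cheapest feasible option under a deadline. The figures below are invented, and real WaaS brokers also model spot interruption risk, task orchestration, and data transfer; this is only a sketch of the selection logic.

```python
# Toy broker decision: pick the cheapest pricing option whose runtime meets the deadline.
options = [
    {"name": "on-demand", "price_per_hr": 1.00, "runtime_hr": 2.0},
    {"name": "reserved",  "price_per_hr": 0.60, "runtime_hr": 2.0},
    {"name": "spot",      "price_per_hr": 0.30, "runtime_hr": 3.5},  # slower: retries on eviction
]

def choose(options, deadline_hr):
    # Feasibility filter first (deadline), then cost minimization.
    feasible = [o for o in options if o["runtime_hr"] <= deadline_hr]
    if not feasible:
        raise ValueError("no pricing option meets the deadline")
    return min(feasible, key=lambda o: o["price_per_hr"] * o["runtime_hr"])

print(choose(options, deadline_hr=3.0)["name"])  # tight deadline rules out spot
print(choose(options, deadline_hr=4.0)["name"])  # relaxed deadline lets spot win on cost
```

The two calls show the trade-off the surveyed brokers automate: loosening the deadline changes the optimal pricing model, which is exactly the burden WaaS brokers lift from users.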
As research becomes predominantly digitalized, scientists have the option of using electronic laboratory notebooks to record and access entries. These systems can more readily meet volume, complexity, accessibility and preservation requirements than paper notebooks. Although the technology can yield many benefits, these can be realized only by choosing a system that properly fulfills the requirements of a given context. This review explores the factors that should be considered when introducing electronic laboratory notebooks to an academically focused research group. We cite pertinent studies and discuss our own experience implementing a system within a multidisciplinary research environment. We also consider how the required financial and time investment is shared between individuals and institutions. Finally, we discuss how electronic laboratory notebooks fit into the broader context of research data management. This article is not a product review; it provides a framework for both the initial consideration of an electronic laboratory notebook and the evaluation of specific software packages.
No abstract available
Quantum computers are emerging as a major tool in the computation field, leveraging the principles of quantum mechanics to solve specific problems currently beyond the capability of classical computers. This technology holds significant promise in edge-main cloud deployments, where it can enable low-latency data processing and secure communication. This paper aims to establish a research foundation by integrating quantum computing with classical edge-cloud environments to promote performance across a range of applications that scientists are actively investigating. However, the successful deployment of hybrid quantum–classical edge-clouds requires a comprehensive evaluation framework to ensure their alignment with the performance requirements. This study first proposes a novel quantum benchmarking framework, including two distinct methods to evaluate latency scores based on the quantum transpilation levels across different quantum-edge-cloud platforms. The framework is then validated for the edge-cloud environment by benchmarking several well-known and useful quantum algorithms potentially useful in this domain, including Shor’s, Grover’s, and the Quantum Walks algorithm. An optimal transpilation level is eventually suggested to achieve maximum performance in quantum-edge-cloud environments. In summary, this research paper provides critical insights into the current and prospective capabilities of QPU integration, offering a novel benchmarking framework and providing a comprehensive assessment of their potential to enhance edge-cloud performance under varying parameters, including fidelity and transpilation levels.
Deep learning has transformed protein structure prediction, yet many experimental scientists face barriers in accessing state-of-the-art (SOTA) models due to technical complexity and hardware requirements. To address this, we present PymolFold, an open-source PyMOL plugin that seamlessly integrates cutting edge API-based protein structure predictors such as ESM-3 and Boltz2 into the molecular visualization environment. PymolFold supports both graphical and command-line interfaces for flexible usage and incorporates PXMeter, an open-source Python package for quantitative evaluation of protein structure predictions against reference data. Together, these features establish a unified “predict–visualize–analyze” workflow, lowering technical entry barriers and broadening access to advanced structural modeling. PymolFold is freely available at https://github.com/jinyuansun/PymolFold.
The High Energy Photon Source (HEPS), located at Huairou, Beijing, is an advanced public platform for multidisciplinary innovation research and high-tech development, as well as for addressing scientific questions in physics, chemistry, biology, and other fields. To meet the diverse data analysis needs of light source disciplines, we have built a scientific computing platform that provides desktop, interactive, batch, and other types of computing services, allowing scientists to access the computing environment through the web anytime and anywhere and quickly analyze experimental data. This article describes the design of a scientific computing platform for HEPS's diverse analysis requirements. First, the diverse analysis requirements of HEPS are introduced. Second, the challenges faced by the HEPS scientific computing system are discussed. Third, the architecture and service process of the platform are described from the user's perspective, with key technical implementations covered in detail. Finally, the application effect of the scientific computing platform is demonstrated.
The development of novel biomaterials is a challenging process, complicated by a design space with high dimensionality. Requirements for performance in the complex biological environment lead to difficult a priori rational design choices and time-consuming empirical trial-and-error experimentation. Modern data science practices, especially artificial intelligence (AI)/machine learning (ML), offer the promise of helping to accelerate the identification and testing of next-generation biomaterials. However, it can be a daunting task for biomaterial scientists unfamiliar with modern ML techniques to begin incorporating these useful tools into their development pipeline. This Perspective lays the foundation for a basic understanding of ML while providing a step-by-step guide for new users on how to begin implementing these techniques. A tutorial Python script has been developed walking users through the application of an ML pipeline using data from a real biomaterial design challenge based on the group's research. This tutorial provides an opportunity for readers to see and experiment with ML and its syntax in Python. The Google Colab notebook can be easily accessed and copied from the following URL: www.gormleylab.com/MLcolab
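The tutorial's pipeline idea, split the data, fit a model, evaluate held-out error, can be sketched without any ML library using closed-form ridge regression. The synthetic descriptors below merely stand in for real biomaterial data; they are not from the paper's design challenge.

```python
# Minimal ML pipeline in the spirit of the tutorial: split, fit, evaluate.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # stand-in composition descriptors
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=100)

train, test = slice(0, 80), slice(80, 100)       # simple 80/20 split
lam = 1e-2                                       # ridge penalty strength
A = X[train].T @ X[train] + lam * np.eye(3)
w = np.linalg.solve(A, X[train].T @ y[train])    # closed-form ridge fit

pred = X[test] @ w
# R^2 on held-out data: 1 - residual variance / total variance.
r2 = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(round(r2, 3))                              # near 1.0 on this low-noise data
```

Swapping the closed-form fit for a scikit-learn estimator changes only the middle two lines; the split/fit/evaluate skeleton is what the tutorial walks readers through.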
Large-scale international collaborations are increasingly managing large volumes of sensitive health data for research purposes. The use of such infrastructure requires fulfilling various technical and organizational requirements to ensure usability and security. Before data scientists can access the data, it must be imported into the infrastructure under high data security requirements. For federated analysis workflows, additional criteria, such as data harmonization and prevention of individual patient information disclosure, must also be met. This paper outlines the key components of the data infrastructure implemented in the European research project ORCHESTRA and elaborates on the methods that support federated analysis workflows in a heterogeneous legal environment. Special attention is given to data security, interoperability, and usability, from which data scientists and researchers would benefit. We demonstrate the usability of the data infrastructure on federated analysis and machine learning use cases on remote datasets that satisfy the previously mentioned requirements. Furthermore, we propose organizational measures to optimize the process, reducing the time between a data access request and granting access.
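A federated analysis of the kind described shares only aggregate statistics across sites, never patient-level rows. A minimal sketch with fabricated values (ORCHESTRA's actual workflow, harmonization, and disclosure rules are far more elaborate):

```python
# Federated aggregation sketch: sites share only (sum, count), never patient rows.
sites = {
    "site_a": [54.0, 61.0, 47.0],        # e.g. local patient ages (fabricated)
    "site_b": [38.0, 72.0],
    "site_c": [65.0, 58.0, 49.0, 70.0],
}
MIN_COUNT = 2                            # disclosure control: suppress tiny cohorts

def local_summary(values):
    if len(values) < MIN_COUNT:
        return None                      # refuse: a tiny cohort could identify individuals
    return (sum(values), len(values))

summaries = [s for s in (local_summary(v) for v in sites.values()) if s]
total, count = map(sum, zip(*summaries))
federated_mean = total / count           # same value as pooling the raw rows
print(round(federated_mean, 2))
```

Because sums and counts combine exactly, the coordinator recovers the pooled mean without any site exporting row-level data, which is the core privacy argument for federated workflows.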
Scientific computing for particle accelerators, detectors, and experiment automation requires large computing resources. Because of this, running simulations on high-performance computing (HPC) systems is becoming increasingly popular. Academic research projects have proposed infrastructures that provide high remote performance capacity for running simulations and data analysis. However, these on-demand cloud services also suffer from cloud security leakage and performance degradation issues. The Vis-aS project proposes a framework with an efficient micro-service workflow and a TLS v1.3 protocol adjusted via KC-SPAKE2+, reducing the communication overhead between the client and the HPC server to provide a high-performance, security-enabled environment. With the proposed framework, the Vis-aS project shows that a web application used for activity calculations at the high-intensity proton accelerator (HIPA) at the Paul Scherrer Institute (PSI) offers scientists a secure application and framework that meets both HPC's high performance demands and cloud security requirements.
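The paper's adjusted KC-SPAKE2+ handshake is not publicly available, so the following only illustrates the baseline on which it builds: pinning a client connection to TLS 1.3 using Python's standard ssl module:

```python
# Minimal sketch: a client-side SSL context that refuses anything older
# than TLS 1.3. This is plain TLS 1.3, not the paper's modified handshake.
import ssl

ctx = ssl.create_default_context()          # certificate verification on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)
```

Such a context would then be passed to the HTTP or socket layer that talks to the HPC-facing micro-services, guaranteeing the newer, lower-round-trip handshake on every connection.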
An increasing number of researchers use software images to capture the requirements and code dependencies needed to carry out computational experiments. Software images preserve the computational environment required to execute a scientific experiment and have become a crucial asset for reproducibility. However, software images are usually not properly documented and described, making it challenging for scientists to find, reuse, and understand them. In this paper, we propose a framework for automatically describing software images in a machine-readable manner by (i) creating a vocabulary to describe software images; (ii) developing an annotation framework designed to automatically document the underlying environment of software images; and (iii) creating DockerPedia, a Knowledge Graph with over 150,000 annotated software images, automatically described using our framework. We illustrate the usefulness of our approach in finding images with specific software dependencies, comparing similar software images, addressing versioning problems when running computational experiments, and flagging problems with vulnerable software dependencies.
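The essence of such automatic annotation is turning a dependency listing extracted from an image into machine-readable statements. The sketch below is illustrative only; the vocabulary terms (`ex:hasDependency` etc.) are made up for the example and are not DockerPedia's actual ontology:

```python
# Hedged sketch: convert a `pip freeze`-style listing captured from inside
# an image into subject-predicate-object triples about that image.
from typing import List, Tuple

FREEZE_OUTPUT = """\
numpy==1.26.4
pandas==2.2.2
"""

def image_triples(image: str, freeze: str) -> List[Tuple[str, str, str]]:
    triples = [(image, "rdf:type", "ex:SoftwareImage")]
    for line in freeze.splitlines():
        pkg, _, version = line.partition("==")
        triples.append((image, "ex:hasDependency", f"{pkg}@{version}"))
    return triples

triples = image_triples("docker.io/example/analysis:1.0", FREEZE_OUTPUT)
for t in triples:
    print(t)
```

Once dependencies are expressed as triples, queries such as "all images containing numpy 1.26.4" or "images sharing N dependencies with image X" reduce to straightforward graph lookups.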
BL-774 is a control, data acquisition, and online analysis platform for addressing the requirements of future beamline experiments at SPring-8-II. To date, we have achieved implementations related to "robustness and flexibility" and "configuration management". These implementations were made possible by a two-phase development workflow, characterized by a dedicated rapid application development environment that beamline scientists can use quickly and easily in real beamline environments, and web-based graphical user interfaces through which BL-774 software developers incorporate the beamline scientists' achievements into the official release of the beamline control software. BL-774 has been introduced in two beamlines to date, and we plan to implement it in more beamlines while adding more features. We plan to integrate BL-774 seamlessly with other systems currently under development in the facility, such as two-dimensional detector systems and SPring-8 data centers. By introducing BL-774 in conjunction with other infrastructure in the facility, more advanced experimental operations, such as feedback operations based on online data analysis or remote operations from outside the beamlines, can be expected. This paper presents the design and implementation of BL-774 and its introduction into a beamline at SPring-8.
High Energy Photon Source (HEPS) experiments are expected to produce large amounts of data and have diverse computing requirements for data analysis. Scientists typically need several days to set up their experimental environment, which greatly reduces their work efficiency. In response to this problem, we introduce a remote data analysis system for HEPS. The system provides users a web-based interactive interface based on Jupyter, which enables scientists to perform data analysis anytime and anywhere. In particular, we discuss the system architecture as well as the key points of the system. A solution for managing and scheduling heterogeneous computing resources (CPU and GPU) is proposed, which adopts Kubernetes to achieve centralized heterogeneous resource management and on-demand resource expansion. An improved Kubernetes resource scheduler is discussed, which dispatches upper-layer applications to nodes according to the computing cluster status. The system can transparently and quickly deploy the data analysis environment for users in seconds and maximize resource utilization. We also introduce an automated deployment solution to improve developers' work efficiency and help deploy multidisciplinary applications faster and automatically. A unified authentication scheme is described to ensure the security of remote data access and data analysis. Finally, we show the running status of the system.
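The status-aware dispatch idea behind the improved scheduler can be sketched in plain Python: among the nodes that can satisfy a request for a given resource type (CPU or GPU), choose the one with the most headroom. The node inventory and selection policy below are illustrative assumptions; the actual system extends the Kubernetes scheduler itself:

```python
# Hedged sketch of heterogeneous-resource dispatch: filter feasible nodes
# for the requested resource type, then pick the one with the most free units.
from typing import Dict

nodes: Dict[str, Dict[str, int]] = {
    "node-a": {"cpu_free": 16, "gpu_free": 0},
    "node-b": {"cpu_free": 8,  "gpu_free": 2},
    "node-c": {"cpu_free": 32, "gpu_free": 1},
}

def pick_node(resource: str, amount: int) -> str:
    """Return the feasible node with the most headroom for `resource`."""
    key = f"{resource}_free"
    feasible = {name: s for name, s in nodes.items() if s[key] >= amount}
    if not feasible:
        raise RuntimeError(f"no node can satisfy {amount} x {resource}")
    return max(feasible, key=lambda name: feasible[name][key])

print(pick_node("gpu", 1))   # GPU jobs avoid the GPU-less node-a
print(pick_node("cpu", 16))  # CPU jobs go where CPU headroom is largest
```

In the real system the equivalent of `nodes` comes from live cluster status, and Kubernetes handles the actual pod placement; the sketch only shows the scoring decision.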
This synthesis refines researchers' needs into five core dimensions: first, institutional needs, emphasizing fair and pluralistic evaluation systems aligned with national policy; second, career-development needs, focusing on the psychological identification (sense of gain) and capability growth of young researchers and graduate students; third, organizational-management needs, concerning process optimization under the "organized research" model; fourth, computing-resource needs, emphasizing convenient scheduling of high-performance computing, quantum computing, and cloud resources; and fifth, tooling needs, centered on improving research efficiency through digital tools, knowledge management, and barrier-lowering technologies such as AI plugins. Overall, the needs form a comprehensive picture spanning macro-level institutional safeguards to micro-level tool support.