The Spread of Misinformation
Misinformation Propagation Mechanisms and Dynamical Diffusion Modeling (Epidemiological / Game-Theoretic / Adversarial Models and Real-World Trends)
This group focuses on the mechanisms, dynamics, and propagation models of misinformation diffusion, characterizing spreading regularities through temporal evolution, state transitions, and mathematical simulation. It covers epidemic-style (SIR, SIHR, etc.) rumor and misinformation propagation models as well as finer-grained mechanisms such as multi-message, multidimensional adversarial or game-theoretic diffusion and hesitation mechanisms. Some of the literature further brings real-world diffusion trends, crisis scenarios (e.g., epidemics), and cross-regional spread into the same framework of propagation regularities, emphasizing interpretable diffusion prediction and the distillation of spreading laws.
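To make the epidemic analogy concrete, here is a minimal mean-field sketch of an SIR-style rumor model (ignorant, spreader, stifler) integrated with forward Euler steps. The parameter values and the constant stifling rate are illustrative assumptions; the papers below refine this skeleton with hesitation states (SIHR), noise, influence mechanisms, and game-theoretic interactions.

```python
# Minimal SIR-style rumor model: ignorant (I) -> spreader (S) -> stifler (R).
# All parameters are illustrative; the cited papers add hesitation, noise,
# influence mechanisms, and multi-message games on top of this skeleton.
def simulate_rumor(beta=0.3, delta=0.1, T=200.0, dt=0.1):
    """Forward-Euler integration of:
        dI/dt = -beta * I * S             (ignorants adopt the rumor on contact)
        dS/dt =  beta * I * S - delta * S (spreaders turn into stiflers)
        dR/dt =  delta * S
    """
    I, S, R = 0.99, 0.01, 0.0  # population fractions
    t = 0.0
    while t < T:
        adopt, stifle = beta * I * S * dt, delta * S * dt
        I, S, R = I - adopt, S + adopt - stifle, R + stifle
        t += dt
    return I, S, R

I, S, R = simulate_rumor()
print(f"final fractions: ignorant={I:.3f}, spreader={S:.3f}, stifler={R:.3f}")
```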
- Fake News Propagation: A Review of Epidemic Models, Datasets, and Insights(Simone Raponi, Z. Khalifa, G. Oligeri, Roberto Di Pietro, 2022, ACM Transactions on the Web)
- An epidemic model of rumor diffusion in online social networks(Junjun Cheng, Yun Liu, Bo Shen, Weiguo Yuan, 2013, The European Physical Journal B)
- SIHR rumor spreading model in social networks(Laijun Zhao, Jiajia Wang, Yucheng Chen, Qin Wang, Jingjing Cheng, Hongxin Cui, 2012, Physica A: Statistical Mechanics and its Applications)
- SIR rumor spreading model in the new media age(Laijun Zhao, Hongxin Cui, Xiaoyan Qiu, Xiaoli Wang, Jiajia Wang, 2013, Physica A: Statistical Mechanics and its Applications)
- Rumor spreading in social networks(Flavio Chierichetti, Silvio Lattanzi, A. Panconesi, 2009, Theoretical Computer Science)
- Rumor spreading model with noise interference in complex social networks(Liang Zhu, Youguo Wang, 2017, Physica A: Statistical Mechanics and its Applications)
- SIR-IM: SIR rumor spreading model with influence mechanism in social networks(Liqing Qiu, Wei Jia, Weinan Niu, Zhang Mingjv, Shuqi Liu, 2020, Soft Computing)
- A Diffusion Model for Multimessage Multidimensional Complex Game Based on Rumor and Anti-Rumor(Yunpeng Xiao, Wenbo Yuan, Xiangtao Yue, Tun Li, Qian Li, 2023, IEEE Transactions on Computational Social Systems)
- A mathematical model of news propagation on online social network and a control strategy for rumor spreading(J. Dhar, Ankur Jain, Vijay K. Gupta, 2016, Social Network Analysis and Mining)
- The impact of group propagation on rumor spreading in mobile social networks(Ebrahim Sahafizadeh, B. T. Ladani, 2018, Physica A: Statistical Mechanics and its Applications)
- Rumor spreading model considering hesitating mechanism in complex social networks(Ling-Ling Xia, Guoping Jiang, B. Song, Yurong Song, 2015, Physica A: Statistical Mechanics and its Applications)
- Diffusion Pixelation: A Game Diffusion Model of Rumor & Anti-Rumor Inspired by Image Restoration(Yunpeng Xiao, Zhenhai Huang, Qian Li, Xingyu Lu, Tun Li, 2023, IEEE Transactions on Knowledge and Data Engineering)
- Modeling Propagation Dynamics and Developing Optimized Countermeasures for Rumor Spreading in Online Social Networks(Zaobo He, Zhipeng Cai, Xiaoming Wang, 2015, 2015 IEEE 35th International Conference on Distributed Computing Systems)
- The diffusion of misinformation on social media: Temporal pattern, message, and source(J Shin, L Jian, K Driscoll, F Bar, 2018, Computers in Human Behavior)
- Diffusion of disinformation: How social media users respond to fake news and why(Edson C. Tandoc, D. Lim, Rich Ling, 2020, Journalism)
- How Misinformation Diffuses on Online Social Networks: Radical Opinions, Adaptive Relationship, and Algorithmic Intervention(Mengyi Zhang, Qingxing Dong, Xiaozhen Wu, 2025, IEEE Transactions on Computational Social Systems)
- Cultural Evolution and Digital Media: Diffusion of Fake News About COVID-19 on Twitter(Danilo Vicente Batista de Oliveira, U. Albuquerque, 2021, SN Computer Science)
- Trends in the diffusion of misinformation on social media(Hunt Allcott, M. Gentzkow, Chuan Yu, 2018, Research & Politics)
- COVID-19 fake news diffusion across Latin America(Wilson Ceron, Gabriela Gruszynski Sanseverino, Mathias-Felipe de-Lima-Santos, M. G. Quiles, 2021, Social Network Analysis and Mining)
- Get Back! You Don't Know Me Like That: The Social Mediation of Fact Checking Interventions in Twitter Conversations(Anikó Hannák, Drew B. Margolin, Brian Keegan, Ingmar Weber, 2014, Proceedings of the International AAAI Conference on Web and Social Media)
Governance Interventions for Misinformation: Game-Theoretic Control, Platform/Algorithmic Regulation, and Effectiveness Evaluation
This group takes governance and intervention as its object of study, discussing how suppression strategies, game-theoretic control, platform and algorithmic regulation, and tools such as debunking, fact-checking, and media literacy change diffusion outcomes. It emphasizes evaluating intervention effectiveness and side effects (e.g., rebound effects, persistence of effects, individual psychological differences) and, through a game- or control-oriented lens, closes the loop from propagation modeling through suppression objectives to strategy effects.
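As a toy illustration of the game-control perspective, the following replicator-dynamics sketch has users choosing between forwarding a rumor and staying silent, with a fact-check penalty lowering the forwarding payoff. The payoff numbers and functional form are assumptions for illustration, not the model of any single paper listed below.

```python
# Toy replicator dynamics: users choose "forward the rumor" vs. "stay silent".
# A fact-check penalty lowers the forwarding payoff; all payoff values are
# illustrative assumptions, not taken from the cited papers.
def equilibrium_sharing(x=0.5, attention_gain=1.0, penalty=0.0,
                        check_rate=0.5, steps=2000, dt=0.01):
    payoff_share = attention_gain - check_rate * penalty
    payoff_silent = 0.0
    for _ in range(steps):
        avg = x * payoff_share + (1 - x) * payoff_silent
        x += x * (payoff_share - avg) * dt   # replicator equation
        x = min(max(x, 0.0), 1.0)
    return x

for p in (0.0, 1.0, 3.0):
    print(f"penalty={p}: sharing fraction -> {equilibrium_sharing(penalty=p):.2f}")
```

With these illustrative numbers, a sufficiently large penalty flips the stable equilibrium from near-universal forwarding to silence, which is the qualitative effect that the intervention papers in this group model and evaluate.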
- An evolutionary game model for analysis of rumor propagation and control in social networks(Mojgan Askarizadeh, B. T. Ladani, M. Manshaei, 2019, Physica A: Statistical Mechanics and its Applications)
- A study of rumor control strategies on social networks(R. M. Tripathy, A. Bagchi, S. Mehta, 2010, Proceedings of the 19th ACM international conference on Information and knowledge management)
- Intervention analysis for fake news diffusion: an evolutionary game theory perspective(Jusheng Liu, Mei Song, Guiyuan Fu, 2024, Nonlinear Dynamics)
- Containment of rumor spread in complex social networks(Lan Yang, Zhiwu Li, A. Giua, 2020, Information Sciences)
- Rumor management in public health: a system dynamics analysis based on social trust(Wei Dong, Yijie Wang, Fei Li, 2025, Frontiers in Public Health)
- Prominent misinformation interventions reduce misperceptions but increase scepticism(Emma Hoes, Brian Aitken, Jingwen Zhang, Tomasz Gackowski, Magdalena Wojcieszak, 2024, Nature Human Behaviour)
- Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media(Katherine Clayton, S. Blair, Jonathan A. Busam, Samuel Forstner, John Glance, Guy Green, Anna Kawata, Akhila Kovvuri, Jonathan Martin, Evan Morgan, Morgan Sandhu, Rachel Sang, Rachel Scholz-Bright, Austin T. Welch, Andrew G. Wolff, Amanda Zhou, B. Nyhan, 2019, Political Behavior)
- Real-Time and Cost-Effective Limitation of Misinformation Propagation(Juliana Litou, V. Kalogeraki, I. Katakis, D. Gunopulos, 2016, 2016 17th IEEE International Conference on Mobile Data Management (MDM))
- Countering Misinformation(J. Roozenbeek, Eileen Culloty, Jane Suiter, 2023, European Psychologist)
- How effective are interventions against misinformation(Sacha Altay, 2026, PsyArXiv)
- Effects of fact-checking social media vaccine misinformation on attitudes toward vaccines.(Jingwen Zhang, J. D. Featherstone, Christopher Calabrese, Magdalena E. Wojcieszak, 2020, Preventive Medicine)
- Debunking “Fake News” on Social Media: Short-Term and Longer-Term Effects of Fact Checking and Media Literacy Interventions(Lara Marie Berger, A. Kerkhof, Felix Mindl, Johannes Munster, 2023, SSRN Electronic Journal)
- Reactions to Fact Checking(D. Appling, A. Bruckman, M. D. Choudhury, 2022, Proceedings of the ACM on Human-Computer Interaction)
- Sustaining Exposure to Fact-checks: Misinformation Discernment, Media Consumption, and its Political Implications(Jeremy Bowles, Kevin Croke, Horacio Larreguy, John Marshall, Shelley Liu, 2023, SSRN Electronic Journal)
- Educative Interventions to Combat Misinformation: Evidence from a Field Experiment in India(Sumitra Badrinathan, 2021, American Political Science Review)
- Bridging Interests and Truth: Towards Mitigating Fake News with Personalized and Truthful Recommendations(Zihan Ma, Minnan Luo, Yiran Hao, Zhi Zeng, Xiangzheng Kong, Jiahao Wang, 2025, Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval)
- Developing a Framework for Fake News Diffusion Control (FNDC) on Digital Media (DM): A Systematic Review 2010–2022(S. A. Khan, Khurram Shahzad, Omer Shabbir, Abid Iqbal, 2022, Sustainability)
- Regulating algorithmic disinformation(H Sun, 2022, Colum. JL & Arts)
- Toolbox of individual-level interventions against online misinformation(A. Kozyreva, Philipp Lorenz-Spreen, Stefan M. Herzog, Ullrich K. H. Ecker, Stephan Lewandowsky, R. Hertwig, Ayesha Ali, Joe Bak-Coleman, Sarit Barzilai, Melisa Basol, Adam J. Berinsky, C. Betsch, John Cook, Lisa K. Fazio, Michael Geers, A. Guess, Haifeng Huang, Horacio Larreguy, R. Maertens, F. Panizza, Gordon Pennycook, David G. Rand, Steve Rathje, Jason Reifler, P. Schmid, Mark Smith, B. Swire‐Thompson, Paula Szewach, S. van der Linden, Sam Wineburg, 2024, Nature Human Behaviour)
- User agency–based versus machine agency–based misinformation interventions: The effects of commenting and AI fact-checking labeling on attitudes toward the COVID-19 vaccination(Jiyoung Lee, Kimberly L. Bissell, 2023, New Media & Society)
- Dynamical Modeling, Analysis, and Control of Information Diffusion over Social Networks: A Deep Learning-Based Recommendation Algorithm in Social Network(Kefei Cheng, X. Guo, Xiaotong Cui, Fengchi Shan, 2020, Discrete Dynamics in Nature and Society)
Surveys and Tool-Oriented Frameworks for Misinformation Propagation Governance (Panoramic Synthesis)
These are survey- and framework-style syntheses: they systematically map the knowledge structure of misinformation propagation and countermeasures (including platform-level and individual-level interventions), covering problem definitions, research challenges, families of methods, and application ideas. Leaning toward panoramic synthesis and tool-oriented frameworks, they provide a unified perspective for modeling, detection, and governance research.
- Fake News Propagation and Mitigation Techniques: A Survey(A. Saxena, Pratishtha Saxena, Harita Reddy, 2021, Smart Innovation, Systems and Technologies)
- Toolbox of individual-level interventions against online misinformation(A. Kozyreva, Philipp Lorenz-Spreen, Stefan M. Herzog, Ullrich K. H. Ecker, Stephan Lewandowsky, R. Hertwig, Ayesha Ali, Joe Bak-Coleman, Sarit Barzilai, Melisa Basol, Adam J. Berinsky, C. Betsch, John Cook, Lisa K. Fazio, Michael Geers, A. Guess, Haifeng Huang, Horacio Larreguy, R. Maertens, F. Panizza, Gordon Pennycook, David G. Rand, Steve Rathje, Jason Reifler, P. Schmid, Mark Smith, B. Swire‐Thompson, Paula Szewach, S. van der Linden, Sam Wineburg, 2024, Nature Human Behaviour)
- Systematic Review of Fake News, Propaganda, and Disinformation: Examining Authors, Content, and Social Impact Through Machine Learning(D. Plikynas, Ieva Rizgelienė, Gražina Korvel, 2025, IEEE Access)
- Fake news, disinformation and misinformation in social media: a review(Esma Aïmeur, Sabrine Amri, Gilles Brassard, 2023, Social Network Analysis and Mining)
Misinformation Detection and Identification in Online Social Networks (Signal Measurement / Supervised and Unsupervised Methods / Data Resources)
This group focuses on the detection and identification problem: using signals such as content representations, propagation topology and timing, and propagation delays to classify veracity or deploy monitoring. It includes supervised and unsupervised detection methods, propagation-evidence-driven modeling of network and trajectory features, and data resources (e.g., labeled repositories) that support detection tasks. At its core, this group treats misinformation identification as the extraction and discrimination of computable signals.
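A minimal sketch of that signal-extraction view: combine a content representation (TF-IDF) with simple propagation features in a single classifier. The toy posts, feature names, and values below are assumptions for illustration; the papers in this group use far richer signals.

```python
# Detection as computable-signal extraction: fuse TF-IDF content features with
# toy propagation features (cascade size, depth, minutes to first reshare).
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "miracle cure confirmed by anonymous insider",   # hypothetical examples
    "city council approves new bus routes",
    "secret memo proves the election was rigged",
    "university publishes peer-reviewed sleep study",
]
prop = np.array([[900, 12, 2], [40, 3, 55], [1500, 15, 1], [25, 2, 80]])
labels = [1, 0, 1, 0]  # 1 = misinformation

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(texts), csr_matrix(prop.astype(float))])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

query = hstack([vec.transform(["insider leak reveals hidden cure danger"]),
                csr_matrix(np.array([[1100.0, 10.0, 3.0]]))])
print("P(misinformation) =", round(clf.predict_proba(query)[0, 1], 3))
```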
- Detecting misinformation in online social networks using cognitive psychology(K. K. Kumar, G. Geethakumari, 2014, Human-centric Computing and Information Sciences)
- Detecting misinformation in online social networks before it is too late(Huiling Zhang, Alan Kuhnle, Huiyuan Zhang, M. Thai, 2016, 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM))
- Mining misinformation in social media(L Wu, F Morstatter, X Hu, H Liu, 2016, … in complex and social networks)
- Detecting Misinformation in Social Networks Using Provenance Data(Mohamed Jehad Baeth, M. Aktaş, 2018, Concurrency and Computation: Practice and Experience)
- Detecting Misinformation on Social Media using Community Insights and Contrastive Learning(Oguzhan Ozcelik, Cagri Toraman, Fazli Can, 2024, ACM Transactions on Intelligent Systems and Technology)
- Hierarchical Propagation Networks for Fake News Detection: Investigation and Exploitation(Kai Shu, Deepak Mahudeswaran, Suhang Wang, Huan Liu, 2019, Proceedings of the International AAAI Conference on Web and Social Media)
- Tracing Fake-News Footprints: Characterizing Social Media Messages by How They Propagate(Liang Wu, Huan Liu, 2018, Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining)
- Prominent Features of Rumor Propagation in Online Social Media(Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, Yajun Wang, 2013, 2013 IEEE 13th International Conference on Data Mining)
- FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media(Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, Huan Liu, 2018, Big Data)
- Unsupervised Fake News Detection on Social Media: A Generative Approach(Shuo Yang, Kai Shu, Suhang Wang, Renjie Gu, Fan Wu, Huan Liu, 2019, Proceedings of the AAAI Conference on Artificial Intelligence)
- Identifying Disinformation from Online Social Media via Dynamic Modeling across Propagation Stages(Shuai Xu, Jianqiu Xu, Shuo Yu, Bohan Li, 2024, Proceedings of the 33rd ACM International Conference on Information and Knowledge Management)
- Misinformation in Online Social Networks: Detect Them All with a Limited Budget(Huiling Zhang, M. A. Alim, Xiang Li, M. Thai, Hien T. Nguyen, 2016, ACM Transactions on Information Systems)
- Logic-based analysis of fake news diffusion on social media(Valeria Fionda, 2025, Social Network Analysis and Mining)
- Tracing the fake news propagation path using social network analysis(S. Sivasankari, G. Vadivu, 2021, Soft Computing)
- Source detection of rumor in social network - A review(Sushila Shelke, V. Attar, 2019, Online Social Networks and Media)
- A Novel Approach for Detection of Fake News on Social Media Using Metaheuristic Optimization Algorithms(Feyza Altunbey Ozbay, B. Alatas, 2019, Elektronika ir Elektrotechnika)
- A novel approach to fake news detection in social networks using genetic algorithm applying machine learning classifiers(Deepjyoti Choudhury, Tapodhir Acharjee, 2022, Multimedia Tools and Applications)
- Fake News Detection in Social Media: A Systematic Review(Francisco D. C. Medeiros, R. Braga, 2020, XVI Brazilian Symposium on Information Systems)
- Fake News Detection in Social Networks via Crowd Signals(Sebastian Tschiatschek, A. Singla, M. Gomez-Rodriguez, Arpit Merchant, A. Krause, 2017, Companion of the The Web Conference 2018 on The Web Conference 2018 - WWW '18)
- Misinformation Detection on Online Social Media-A Survey(R. Kaliyar, Navya Singh, 2019, 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT))
- Deep learning for misinformation detection on online social networks: a survey and new perspectives(Md. Rafiqul Islam, S. Liu, Xianzhi Wang, Guandong Xu, 2020, Social Network Analysis and Mining)
- A Survey of Approaches to Early Rumor Detection on Microblogging Platforms: Computational and Socio‐Psychological Insights(Lazarus Kwao, Yang Yang, Jie Zou, Jing Ma, 2025, WIREs Data Mining and Knowledge Discovery)
- Fake News Detection on Social Media: A Data Mining Perspective(Kai Shu, A. Sliva, Suhang Wang, Jiliang Tang, Huan Liu, 2017, ACM SIGKDD Explorations Newsletter)
- Fake news detection within online social media using supervised artificial intelligence algorithms(Feyza Altunbey Ozbay, B. Alatas, 2020, Physica A: Statistical Mechanics and its Applications)
Machine-Learning / Deep-Learning Misinformation Detection and Representation-Learning Models (Including Domain-Specific Detection)
This group emphasizes machine-learning, deep-learning, and neural-representation methods for detection and modeling: CNNs, sequence models, and feature-interaction learning improve detection accuracy and early-detection capability. It also covers domain-specific detection (e.g., health) and approaches that use representation learning and model-based enhancement to enrich propagation-feature expression. The difference from the previous group is a stronger emphasis on neural/ML model construction and feature-learning mechanisms.
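For the CNN branch, the following is a generic sketch of a convolutional post classifier: token embeddings, parallel 1-D convolutions, max-pooling, and a binary head. It illustrates the family that includes "A Convolutional Approach for Misinformation Identification" but is not a re-implementation of that or any other cited architecture; all sizes are illustrative assumptions.

```python
# Generic convolutional misinformation classifier sketch (illustrative sizes).
import torch
import torch.nn as nn

class ConvDetector(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=64, n_filters=32,
                 kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
        self.head = nn.Linear(n_filters * len(kernel_sizes), 2)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))  # (batch, 2) logits

model = ConvDetector()
logits = model(torch.randint(0, 10_000, (8, 50)))  # 8 posts, 50 tokens each
print(logits.shape)  # torch.Size([8, 2])
```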
- A Convolutional Approach for Misinformation Identification(Feng Yu, Q. Liu, Shu Wu, Liang Wang, T. Tan, 2017, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence)
- A novel approach to fake news detection in social networks using genetic algorithm applying machine learning classifiers(Deepjyoti Choudhury, Tapodhir Acharjee, 2022, Multimedia Tools and Applications)
- Analysis and Detection of Health-Related Misinformation on Chinese Social Media(Yue Liu, K. Yu, Xiaofei Wu, L. Qing, Yonghong Peng, 2019, IEEE Access)
- A Survey of Approaches to Early Rumor Detection on Microblogging Platforms: Computational and Socio‐Psychological Insights(Lazarus Kwao, Yang Yang, Jie Zou, Jing Ma, 2025, WIREs Data Mining and Knowledge Discovery)
- Deep learning for misinformation detection on online social networks: a survey and new perspectives(Md. Rafiqul Islam, S. Liu, Xianzhi Wang, Guandong Xu, 2020, Social Network Analysis and Mining)
- A Rumor Propagation Model Based on User Cognition and Evolutionary Game(Rong Wang, Zerui Wu, Liangyu Wang, Chaolong Jia, Yunpeng Xiao, 2025, ACM Transactions on Knowledge Discovery from Data)
- Rumor Diffusion Model Based on Representation Learning and Anti-Rumor(Yunpeng Xiao, Qiufan Yang, Chunyan Sang, Yan-bing Liu, 2020, IEEE Transactions on Network and Service Management)
- Modeling the Diffusion of Fake and Real News through the Lens of the Diffusion of Innovations Theory(Abishai Joy, R. Pathak, Anu Shrestha, F. Spezzano, Donald Winiecki, 2024, ACM Transactions on Social Computing)
Misinformation Diffusion Trend Identification and Propagation Prediction (Multi-Feature Fusion / Attention / Stage Prediction)
This group focuses on propagation prediction and trend identification rather than binary veracity classification: diffusion volume or propagation stage becomes the prediction target, and mechanisms such as multi-source feature fusion and attention characterize diffusion trajectories and typical patterns, with an emphasis on predictive performance and interpretable feature contributions.
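As a sketch of the fusion-with-attention idea, the module below scores content, crowd-response, and time-series vectors, softmax-normalizes the scores, and feeds the weighted sum to a trend head with four outputs (echoing the four typical trend patterns one cited paper reports). Dimensions and the overall layout are illustrative assumptions, not a re-implementation of any cited model.

```python
# Attention-weighted fusion of three feature views for trend prediction.
import torch
import torch.nn as nn

class FusionTrendPredictor(nn.Module):
    def __init__(self, dim=32, n_trends=4):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # one attention score per view
        self.head = nn.Linear(dim, n_trends)  # e.g., 4 typical trend patterns

    def forward(self, content, response, timeline):  # each: (batch, dim)
        feats = torch.stack([content, response, timeline], dim=1)  # (B, 3, dim)
        weights = torch.softmax(self.score(feats), dim=1)          # (B, 3, 1)
        fused = (weights * feats).sum(dim=1)                       # (B, dim)
        return self.head(fused)                                    # trend logits

model = FusionTrendPredictor()
out = model(torch.randn(5, 32), torch.randn(5, 32), torch.randn(5, 32))
print(out.shape)  # torch.Size([5, 4])
```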
- Are you influenced?: modeling the diffusion of fake news in social media(Abishai Joy, Anu Shrestha, Francesca Spezzano, 2021, Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining)
- Disinformation Propagation Trend Analysis and Identification Based on Social Situation Analytics and Multilevel Attention Network(Junchang Jing, Feihu Li, Bin Song, Zhi Zhang, K. Choo, 2023, IEEE Transactions on Computational Social Systems)
- Prominent misinformation interventions reduce misperceptions but increase scepticism(Emma Hoes, Brian Aitken, Jingwen Zhang, Tomasz Gackowski, Magdalena Wojcieszak, 2024, Nature Human Behaviour)
- Hierarchical Propagation Networks for Fake News Detection: Investigation and Exploitation(Kai Shu, Deepak Mahudeswaran, Suhang Wang, Huan Liu, 2019, Proceedings of the International AAAI Conference on Web and Social Media)
Causal Debiasing and Recommendation/Network Biases: How Bias Shapes Misinformation Diffusion
This group examines how biases and structural factors shape misinformation diffusion and detection bias: from a causal-debiasing angle, removing popularity and conformity biases improves the interpretability of diffusion and detection conclusions; from the angle of recommendation algorithms and network segregation (e.g., ideological segregation), it analyzes how these mechanisms aggravate distorted diffusion and proposes directions for improvement.
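One common causal-debiasing recipe, sketched below, is a generic inverse-propensity-weighting scheme (not the CausalRD method itself): treat cascade popularity as a confounder, model the label's propensity given popularity alone, and reweight training samples so the classifier cannot lean on popularity. The synthetic data and the weighting rule are assumptions for illustration.

```python
# Generic inverse-propensity weighting against popularity bias (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)                                 # 1 = rumor
popularity = rng.normal(loc=2.0 * label, scale=1.0, size=n)   # confounded cue
content = rng.normal(loc=0.5 * label, scale=1.0, size=n)      # genuine signal
X = np.column_stack([content, popularity])

# Propensity of being labeled a rumor given popularity alone.
prop_model = LogisticRegression().fit(popularity.reshape(-1, 1), label)
p = prop_model.predict_proba(popularity.reshape(-1, 1))[:, 1]
weights = 1.0 / np.where(label == 1, p, 1 - p)  # inverse-propensity weights

# After reweighting, the popularity coefficient typically shrinks relative
# to an unweighted fit, shifting reliance toward the content feature.
debiased = LogisticRegression().fit(X, label, sample_weight=weights)
print("coef (content, popularity):", debiased.coef_[0])
```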
- CausalRD: A Causal View of Rumor Detection via Eliminating Popularity and Conformity Biases(Weifeng Zhang, Ting Zhong, Ce Li, Kunpeng Zhang, Fan Zhou, 2022, IEEE INFOCOM 2022 - IEEE Conference on Computer Communications)
- Understanding the Contribution of Recommendation Algorithms on Misinformation Recommendation and Misinformation Dissemination on Social Networks(R. Pathak, Francesca Spezzano, M. S. Pera, 2023, ACM Transactions on the Web)
- Analysing the Effect of Recommendation Algorithms on the Spread of Misinformation(Miriam Fernandez, Alejandro Bellogín, Iván Cantador, 2024, ACM Web Science Conference)
- Network segregation and the propagation of misinformation(Jonas Stein, Marc Keuschnigg, A. van de Rijt, 2023, Scientific Reports)
Source Attribution and Propagation Path Identification (Source Detection / Link Tracing / Provenance Data)
This group targets source attribution and propagation-path identification: through graph-structure inference, provenance and source localization, and propagation-chain tracing, it identifies rumor origins or key diffusion paths, providing actionable localization information for control and governance.
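A minimal sketch of one classical source-localization heuristic covered by the survey below: estimate the origin of an observed cascade as the Jordan center of the infected subgraph, i.e., the infected node whose maximum distance to the other infected nodes is smallest. The graph and the infected set are toy assumptions.

```python
# Jordan-center source estimation on an observed infected subgraph.
import networkx as nx

G = nx.karate_club_graph()
infected = [0, 1, 2, 3, 7, 13, 19, 21]   # observed spreaders (toy data)
H = G.subgraph(infected)

ecc = nx.eccentricity(H)                  # max distance to any infected node
source_estimate = min(ecc, key=ecc.get)   # node with minimal eccentricity
print("estimated source:", source_estimate)
```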
- Logic-based analysis of fake news diffusion on social media(Valeria Fionda, 2025, Social Network Analysis and Mining)
- Tracing the fake news propagation path using social network analysis(S. Sivasankari, G. Vadivu, 2021, Soft Computing)
- Source detection of rumor in social network - A review(Sushila Shelke, V. Attar, 2019, Online Social Networks and Media)
Generative-AI-Driven Rumor Propagation Chains and Governance Dilemmas
This group specifically discusses generative-AI-driven rumor and misinformation propagation chains and the governance dilemmas they create, with recommendations spanning technical detection and traceability, media literacy across society, and legal accountability, reflecting an emerging topic and cross-mechanism challenges in the field.
- Generative AI–Driven Rumor Propagation Chains and the Dilemma of Legal Governance(Wenjun Wu, 2026, Journal of Education, Humanities and Social Sciences)
- Bridging Interests and Truth: Towards Mitigating Fake News with Personalized and Truthful Recommendations(Zihan Ma, Minnan Luo, Yiran Hao, Zhi Zeng, Xiangzheng Kong, Jiahao Wang, 2025, Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval)
Conceptual Definitions and Task Frameworks for Misinformation Research (Definitions / Paradigms / Boundaries)
This group covers foundational work on conceptual definitions and task framing: it clarifies the boundaries of and distinctions among misinformation and detection tasks on social media and systematically compares diffusion- and detection-oriented research paradigms, providing a unified problem definition for selecting approaches in subsequent research.
- Misinformation in Social Media: Definition, Manipulation, and Detection(Liang Wu, Fred Morstatter, Kathleen M. Carley, Huan Liu, 2019, ACM SIGKDD Explorations Newsletter)
Dataset and Governance-Framework Support (Labeled Diffusion Resources and Research Foundations)
This group supplies the datasets, resources, and framework support that research requires: by constructing or aggregating data resources annotated with veracity labels and diffusion-chain information, it makes model training, evaluation, and empirical analysis feasible, accompanied by governance frameworks that guide research design.
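For orientation, here is a hypothetical record layout for a labeled diffusion dataset, loosely modeled on what resources such as FibVID and FakeNewsNet provide (claim veracity, reshare chains, basic user information). The field names and values are illustrative assumptions, not the actual schema of either dataset.

```python
# Hypothetical labeled-diffusion records (illustrative schema, not FibVID's).
import pandas as pd

cascades = pd.DataFrame([
    {"claim_id": "c1", "label": "false", "post_id": "t1",
     "parent_post": None, "user_id": "u9", "timestamp": "2020-03-01T10:00"},
    {"claim_id": "c1", "label": "false", "post_id": "t2",
     "parent_post": "t1", "user_id": "u4", "timestamp": "2020-03-01T10:07"},
    {"claim_id": "c2", "label": "true", "post_id": "t3",
     "parent_post": None, "user_id": "u2", "timestamp": "2020-03-02T08:30"},
])

# Typical uses: per-claim cascade statistics for propagation modeling, or
# (claim text, label) pairs for supervised detection.
sizes = cascades.groupby(["claim_id", "label"]).size().rename("cascade_size")
print(sizes)
```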
- FibVID: Comprehensive fake news diffusion dataset during the COVID-19 period(Jisu Kim, Ji A Aum, Sang Eun Lee, Yeonju Jang, Eunil Park, Daejin Choi, 2021, Telematics and Informatics)
- Developing a Framework for Fake News Diffusion Control (FNDC) on Digital Media (DM): A Systematic Review 2010–2022(S. A. Khan, Khurram Shahzad, Omer Shabbir, Abid Iqbal, 2022, Sustainability)
- The diffusion of misinformation on social media: Temporal pattern, message, and source(J Shin, L Jian, K Driscoll, F Bar, 2018, Computers in Human Behavior)
The merged, unified grouping organizes misinformation-propagation research into nine parallel directions: (1) propagation mechanisms and dynamical diffusion modeling (epidemiological / game-theoretic / real-world trends); (2) governance interventions (game-theoretic control, platform/algorithmic regulation, and effectiveness evaluation); (3) surveys and tool-oriented frameworks for propagation governance; (4) online detection and identification (propagation signals, supervised/unsupervised methods); (5) machine-learning/deep-learning and domain-specific detection; (6) diffusion trend identification and propagation prediction; (7) causal debiasing and recommendation/network biases; (8) source attribution and propagation-path tracing; (9) generative-AI-driven propagation chains and governance dilemmas. It also retains two foundational elements: conceptual definitions and task frameworks, and dataset/framework support.
A total of 102 related references.
Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as easy, unrestricted communication and instant access to news and information, they also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. At the same time, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions for tackling the challenges.
Fake news propagation is a complex phenomenon influenced by a multitude of factors whose identification and impact assessment are challenging. Although many models have been proposed in the literature, one that captures all the properties of real fake-news propagation is inevitably still missing. Modern propagation models, mainly inspired by old epidemiological models, attempt to approximate fake-news propagation by blending psychological factors, social relations, and user behavior. This work provides an in-depth analysis of the current state of fake-news propagation models supported by real-world datasets. We highlight similarities and differences in the modeling approaches and wrap up the main research trends. Propagation models, transitions, network topologies, and performance metrics are identified and discussed in detail. The thorough analysis provided in this article, coupled with the highlighted research hints, has high potential to pave the way for future research in the area.
Identifying disinformation from online social media is crucial for maintaining a credible cyberspace. Although features from the content and propagation topology are widely exploited by existing studies to distinguish disinformation from normal information, they are becoming less effective, as content can be intentionally written to mislead readers and topological features are difficult to extract due to the high variance and diversity of reposting trees. Moreover, related works mainly focus on modeling the complete information propagation event, ignoring the staged evolution patterns that emerge along with propagation, which may also degrade detection performance. In this paper, we conceive and implement a novel framework called DMPS for identifying disinformation, which Dynamically Models the diverse topological structures of reposting trees as well as the textual content streams across different Propagation Stages. In particular, DMPS learns expressive representations of the structural features via meta-trees and extracts sequential features of the content for intra-stage modeling; it then captures temporal dependencies for inter-stage modeling. The whole framework is optimized in a binary classification manner. Experiments based on multilingual social media datasets validate the effectiveness and superiority of DMPS over state-of-the-art models. We believe that this study can provide insights for crisis management in response to disinformation in social network campaigns.
In the wake of the 2016 U.S. presidential election, social-media platforms are facing increasing pressure to combat the propagation of “fake news” (i.e., articles whose content is fabricated). Motivated by recent attempts in this direction, we consider the problem faced by a social-media platform that is observing the sharing actions of a sequence of rational agents and is dynamically choosing whether to conduct an inspection (i.e., a “fact-check”) of an article whose validity is ex ante unknown. We first characterize the agents’ inspection and sharing actions and establish that, in the absence of any platform intervention, the agents’ news-sharing process is prone to the proliferation of fabricated content, even when the agents are intent on sharing only truthful news. We then study the platform’s inspection problem. We find that because the optimal policy is adapted to crowdsource inspection from the agents, it exhibits features that may appear a priori nonobvious; most notably, we show that the optimal inspection policy is nonmonotone in the ex ante probability that the article being shared is fake. We also investigate the effectiveness of the platform’s policy in mitigating the detrimental impact of fake news on the agents’ learning environment. We demonstrate that in environments characterized by a low (high) prevalence of fake news, the platform’s policy is more effective when the rewards it collects from content sharing are low relative to the penalties it incurs from the sharing of fake news (when the rewards it collects from content sharing are high in absolute terms).
… figure, fake news can … misinformation/disinformation, and so on. Unlike social bots, trolls are generally human users behind their keyboard contributing to misinformation/disinformation …
Consuming news from social media is becoming increasingly popular. However, social media also enables the wide dissemination of fake news. Because of the detrimental effects of fake news, fake news detection has attracted increasing attention. However, the performance of detecting fake news only from news content is generally limited, as fake news pieces are written to mimic true news. In the real world, news pieces spread through propagation networks on social media. These propagation networks usually involve multiple levels. In this paper, we study the challenging problem of investigating and exploiting hierarchical news propagation networks on social media for fake news detection. In an attempt to understand the correlations between news propagation networks and fake news, first, we build hierarchical propagation networks for fake news and true news pieces; second, we perform a comparative analysis of the propagation network features from structural, temporal, and linguistic perspectives between fake and real news, which demonstrates the potential of utilizing these features to detect fake news; third, we show the effectiveness of these propagation network features for fake news detection. We further validate the effectiveness of these features through feature importance analysis. We conduct extensive experiments on real-world datasets and demonstrate that the proposed features can significantly outperform state-of-the-art fake news detection methods by at least 1.7%, with an average F1>0.84. Altogether, this work presents a data-driven view of hierarchical propagation networks and fake news and paves the way towards a healthier online news ecosystem.
… over social media websites and have proposed various techniques to combat fake news. In this chapter, we discuss propagation models for misinformation and review the fake news …
… fake news may manipulate the content to make it look like real news. To address this problem, this paper concentrates on modeling the propagation … and classify propagation pathways …
Digital disinformation, such as that occurring on online social networks (OSNs), can influence public opinion, create mistrust and division, and impact decision- and policy-making. In this study, we propose a disinformation diffusion trend analysis and identification method, which uses social situation analytics and a multilevel attention network. First, we present an approach for dividing and representing social user circles based on the content sequence (internal driving factor) and social contextual information (external driving factor) of users associated with disinformation. Second, disinformation content features, crowd response features, and time-series features are represented using an embedding layer and bidirectional long short-term memory neural networks (Bi-LSTMs). We also present an attention mechanism model based on multifeature fusion, which can dynamically adjust the weight of each feature. On this foundation, the fused features are fed into a multilayer perceptron to identify the propagation quantity trend. According to the experimental results on real-world OSNs and social situation metadata, we conclude that while disinformation occurs across OSN platforms, it is more likely to spread widely on the original OSN platform. We also identify four typical disinformation propagation trends based on propagation patterns and propagation peak times. Findings from our experiments demonstrate that our proposed approach accurately identifies and predicts the diffusion trend of disinformation, which can then be used to inform mitigation strategies.
Social networks are a platform for individuals and organizations to connect with each other and inform, advertise, spread ideas, and ultimately influence opinions. These platforms have been known to propel misinformation. We argue that this could be compounded by the recommender algorithms that these platforms use to suggest items potentially of interest to their users, given the known biases and filter bubble issues affecting recommender systems. While much has been studied about misinformation on social networks, the study of the potential exacerbation that could result from recommender algorithms in this environment is in its infancy. In this manuscript, we present the result of an in-depth analysis conducted on two datasets (the Politifact FakeNewsNet dataset and the HealthStory FakeHealth dataset) in order to deepen our understanding of the interconnection between recommender algorithms and misinformation spread on Twitter. In particular, we explore the degree to which well-known recommendation algorithms are prone to be impacted by misinformation. Via simulation, we also study misinformation diffusion on social networks, as triggered by suggestions produced by these recommendation algorithms. Outcomes from this work evidence that misinformation does not equally affect all recommendation algorithms. Popularity-based and network-based recommender algorithms contribute the most to misinformation diffusion. Users known to be superspreaders directly impact algorithmic performance and misinformation spread in specific scenarios. Findings emerging from our exploration result in a number of implications for researchers and practitioners to consider when designing and deploying recommender algorithms in social networks.
… propagating a news piece, we contribute to fill this research gap and further confirm the potential of using propagation features to detect fake news … of time than fake news. Secondly, we …
How does the ideological segregation of online networks impact the spread of misinformation? Past studies have found that homophily generally increases diffusion, suggesting that partisan news, whether true or false, will spread farther in ideologically segregated networks. We argue that network segregation disproportionately aids messages that are otherwise too implausible to diffuse, thus favoring false over true news. To test this argument, we seeded true and false informational messages in experimental networks in which subjects were either ideologically integrated or segregated, yielding 512 controlled propagation histories in 16 independent information systems. Experimental results reveal that the fraction of false information circulating was systematically greater in ideologically segregated networks. Agent-based models show robustness of this finding across different network topologies and sizes. We conclude that partisan sorting undermines the veracity of information circulating on the Internet by increasing exposure to content that would otherwise not manage to diffuse.
… protected throughout the propagation process, we define them to be susceptible to misinformation. Inactive users (R): Users that are unaffected by the propagation of either credible or …
… fake news source and removed them from the network which naturally confirms the reduction of their propagation… is to identify the propagation path of the fake news content by collecting …
In recent years, the world has witnessed a global outbreak of fake news, propaganda and disinformation (FNPD) flows on online social networks (OSN). In the context of information warfare and the capabilities of generative AI, FNPDs have proliferated. They have become a powerful and quite effective tool for influencing people’s social identities, attitudes, opinions and even behavior. Ad hoc malicious social media accounts and organized networks of trolls and bots target countries, societies, social groups, political campaigns and individuals. As a result, conspiracy theories, echo chambers, filter bubbles and other processes of fragmentation and marginalization are polarizing, radicalizing, and disintegrating society in terms of coherent politics, governance, and social networks of trust and cooperation. This systematic review aims to explore advances in using machine and deep learning to detect FNPD in OSNs effectively. We present the results of a combined PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) review in three analysis domains: 1) propagators (authors, trolls, and bots), 2) textual content, 3) social impact. This systemic research framework integrates meta-analyses of three research domains, providing an overview of the wider research field and revealing important relationships between these research domains. It not only addresses the most promising ML/DL research methodologies and hybrid approaches in each domain, but also provides perspectives and insights on future research directions.
… practices to viral fake stories, … in fake news shared on digital media. The study also identified challenges being faced by people to control the spread of fake news on social networking …
This study sought to investigate factors causing the spread of fake news on digital media (DM) and to explore the sometimes disastrous consequences of fake news on social media. The study also aimed to construct a framework for fake news disaster management to control the dangers of false news on DM. The study applied PRISMA guidelines and techniques for literature exploration, selection, and inclusion and exclusion criteria. The search was carried out through 15 of the world's leading digital databases. As a result, 31 peer-reviewed studies published in impact-factor journals of leading databases were included. Findings showed that several factors influenced the sharing of fake news on digital media (DM) platforms. Six major trending factors were the rise of technologies, social connections, political reasons, the absence of a controlling center, online business and marketing, and quick dissemination of information. The study identified the disadvantages of fake news (FN) on digital media (DM). A framework was constructed for managing fake news disasters to control the spread of fake news on digital media. This paper offers important theoretical contributions through the development of a framework for controlling fake news spread on digital media and by providing a valuable addition to the existing body of knowledge. The study offers practical assistance to top management, decision makers, and policymakers to devise policies to effectively manage problems caused by fake news dissemination. It provides practical strategies to address fake news disasters on digital media for redefining social values. This research also assists digital media managers in utilizing the proposed framework and controlling the harmful impact of fake news on social media.
We propose an approach inspired by the diffusion of innovations theory to model and characterize fake news sharing in social media through the lens of different levels of influential factors (users, networks, and news). We address the problem of predicting fake news sharing as a classification task and demonstrate the potential of the proposed features by achieving an AUROC of 0.97 and an average precision of 0.88, consistently outperforming baseline models by a large margin (about 30% in AUROC). We also show that news-based features are the most effective at predicting real and fake news sharing, followed by user- and network-based features.
… We represent the diffusion of a news item on social media as a directed, attributed graph in which each node corresponds to an event–either the original news post, a subsequent tweet (…
This exploratory study seeks to understand the diffusion of disinformation by examining how social media users respond to fake news and why. Using a mixed-methods approach in an explanatory-sequential design, this study combines results from a national survey involving 2501 respondents with a series of in-depth interviews with 20 participants from the small but economically and technologically advanced nation of Singapore. This study finds that most social media users in Singapore just ignore the fake news posts they come across on social media. They would only offer corrections when the issue is strongly relevant to them and to people with whom they share a strong and close interpersonal relationship.
Disinformation (fake news) is a major problem that affects modern populations, especially in an era when information can be spread from one corner of the world to another in just one click. The diffusion of misinformation becomes more problematic when it addresses issues related to health, as it can affect people at both the individual and population levels. Through the ideas proposed by cultural evolution theory, in this study, we seek to understand the dynamics of disseminating messages (cultural traits) with untrue content (maladaptive traits). For our investigation, we used the scenario caused by the Coronavirus Disease 2019 (COVID-19) pandemic as a model. The instability caused by the pandemic provides a good model for the study of adapted and maladaptive traits, as the information can directly affect individual and population fitness. Through data collected on the Twitter platform (259,176 tweets) and using machine learning techniques and web scraping, we built a predictive model to analyze the following questions: (1) Is false information more shared? (2) Is false information more adopted? (3) Do people with social prestige influence the dissemination of maladaptive traits of COVID-19? We observed that fake news features contained in messages with false information were shared and adopted as unblemished messages. We also observed that social prestige was not a determining factor for the diffusion of maladaptive traits. Even with the ability to allow connections between individuals participating in social media, some factors such as attachment to cultural traits and the formation of social bubbles can favor isolation and decrease connectivity between individuals. Consequently, in the scenario of isolation between groups and low connectivity between individuals, there is a reduction in cultural exchange between people, which interferes with the dynamics of the selection of cultural traits. Thus, maladaptive (harmful) traits are favored and maintained in the cultural system. We also argue that the local Brazilian cultural context can be a determining factor for maintaining maladaptive traits. We conclude that in an unstable (pandemic) scenario, the information transmitted on Twitter is not reliable in relation to the increase in fitness, which may occur because of the low cultural exchange promoted by the personalization of the social network and cultural context of the population.
Modeling the Diffusion of Fake and Real News through the Lens of the Diffusion of Innovations Theory
These days, people have increasingly used social media as a go-to resource for any information need and daily news diet. In the past decade, the news ecosystem and information flow have been dramatically transformed by the popularity of such platforms. Social media users can, in fact, easily access nearly any kind of information and then spread it nearly without friction through activities such as tweets/retweets on Twitter (now X) and similar means on other social media. This seemingly innocuous activity of spreading information has a collective consequence of making social media users responsible for radical changes in the way news is distributed, including both authentic and fake news. Moreover, malicious individuals have been implicated in capitalizing on the ease of introducing and spreading information in these platforms to create misinformation, spread it to a wider audience, and subsequently influence public opinion on important topics through information diffusion. Therefore, understanding the factors that motivate a user's decision to share is of paramount importance in understanding the information diffusion phenomenon in social media. In this article, we propose an approach based on the Diffusion of Innovation theory to model, characterize, and compare real and fake news sharing in social media with a focus on different levels of influencing factors including innovation, communication channels, and social system. We apply that approach to identify factors related to the spread of fake news as they relate to users, the structure of news items themselves, and the networks through which news is circulated. We address the problem of predicting real and fake news sharing as a classification task and demonstrate the potential of the proposed features by achieving an AUROC of around 0.97 and an average precision ranging from 0.88 to 0.95, consistently outperforming baseline models by a clear margin (at least 13% in average precision). In addition, we also found that empirically identifiable characteristics of news items themselves and users who share news are the strongest element allowing accurate prediction of real and fake news sharing, followed by network-based features. Moreover, our proposed approach can be effectively used to model news diffusion as a multi-step propagation process.
In recent years, there has been widespread concern that misinformation on social media is damaging societies and democratic institutions. In response, social media platforms have announced actions to limit the spread of false content. We measure trends in the diffusion of content from 569 fake news websites and 9540 fake news stories on Facebook and Twitter between January 2015 and July 2018. User interactions with false content rose steadily on both Facebook and Twitter through the end of 2016. Since then, however, interactions with false content have fallen sharply on Facebook while continuing to rise on Twitter, with the ratio of Facebook engagements to Twitter shares decreasing by 60%. In comparison, interactions with other news, business, or culture sites have followed similar trends on both platforms. Our results suggest that the relative magnitude of the misinformation problem on Facebook has declined since its peak.
… Information diffusion on social media is one of the most … , since social media allow users to share any kind of news without, … to analyze the spread of fake news on social media. First, we …
… fake news on social media, considering the reputation of both government and social media … Within the game, the government can opt for supervision or no supervision, the social media …
Social media has become a popular means for people to consume and share the news. At the same time, however, it has also enabled the wide dissemination of fake news, that is, news with intentionally false information, causing significant negative effects on society. To mitigate this problem, the research of fake news detection has recently received a lot of attention. Despite several existing computational solutions for the detection of fake news, the lack of comprehensive and community-driven fake news data sets has become one of the major roadblocks. Not only are existing data sets scarce, they also do not contain the myriad of features often required in such studies, such as news content, social context, and spatiotemporal information. Therefore, in this article, to facilitate fake news-related research, we present a fake news data repository, FakeNewsNet, which contains two comprehensive data sets with diverse features in news content, social context, and spatiotemporal information. We present a comprehensive description of the FakeNewsNet, demonstrate an exploratory analysis of two data sets from different perspectives, and discuss the benefits of the FakeNewsNet for potential applications on fake news study on social media.
… to predict the diffusion of fake news is essential. This paper proposes a fake news diffusion prediction model involving several psycho-sociological facets of social media users. The …
The rise of social networks as the primary means of communication in almost every country in the world has simultaneously triggered an increase in the amount of fake news circulating online. The urgent need for models that can describe the growing infodemic of fake news has been highlighted by the current pandemic. The resulting slowdown in vaccination campaigns due to misinformation and generally the inability of individuals to discern the reliability of information is posing enormous risks to the governments of many countries. In this research using the tools of kinetic theory, we describe the interaction between fake news spreading and competence of individuals through multi-population models in which fake news spreads analogously to an infectious disease with different impact depending on the level of competence of individuals. The level of competence, in particular, is subject to evolutionary dynamics due to both social interactions between agents and external learning dynamics. The results show how the model is able to correctly describe the dynamics of diffusion of fake news and the important role of competence in their containment. This article is part of the theme issue ‘Kinetic exchange models of societies and economies’.
Fact-checking verifies a multitude of claims and remains a promising solution to fight fake news. The spread of rumors, hoaxes, and conspiracy theories online is evident in times of crisis, when fake news ramps up across platforms, increasing fear and confusion among the population, as seen in the COVID-19 pandemic. This article explores fact-checking initiatives in Latin America, using an original Markov-based computational method to cluster topics on tweets and identify their diffusion between different datasets. Drawing on a mixture of quantitative and qualitative methods, including time-series analysis, network analysis, and in-depth close reading, our article proposes an in-depth tracing of COVID-related false information across the region, comparing whether there is a pattern of behavior across the countries. We rely on the open Twitter application programming interface connection to gather data from public accounts of the six major fact-checking agencies in Latin America, namely Argentina (Chequeado), Brazil (Agência Lupa), Chile (Mala Espina Check), Colombia (Colombia Check from Consejo de Redacción), Mexico (El Sabueso from Animal Político) and Venezuela (Efecto Cocuyo). In total, these profiles account for 102,379 tweets that were collected between January and July 2020. Our study offers insights into the dynamics of online information dissemination beyond the national level and demonstrates how politics intertwined with the health crisis in this period. Our method is capable of clustering topics in a period of overabundance of information, as we fight not only a pandemic but also an infodemic, highlighting opportunities to understand and slow the spread of false information.
As the SARS-CoV-2 (COVID-19) pandemic has run rampant worldwide, the dissemination of misinformation has sown confusion on a global scale. Thus, understanding the propagation of fake news and implementing countermeasures has become exceedingly important to the well-being of society. To assist this cause, we produce a valuable dataset called FibVID (Fake news information-broadcasting dataset of COVID-19), which addresses COVID-19 and non-COVID news from three key angles. First, we provide truth and falsehood (T/F) indicators of news items, as labeled and validated by several fact-checking platforms (e.g., Snopes and Politifact). Second, we collect spurious-claim-related tweets and retweets from Twitter, one of the world’s largest social networks. Third, we provide basic user information, including the terms and characteristics of “heavy fake news” user to present a better understanding of T/F claims in consideration of COVID-19. FibVID provides several significant contributions. It helps to uncover propagation patterns of news items and themes related to identifying their authenticity. It further helps catalog and identify the traits of users who engage in fake news diffusion. We also provide suggestions for future applications of FibVID with a few exploratory analyses to examine the effectiveness of the approaches used.
This study examines dynamic communication processes of political misinformation on social media focusing on three components: the temporal pattern, content mutation, and sources …
In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention on detecting and characterizing deceptive information has been motivated by considerable political and social backlashes in the real world. As a matter of fact, social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, which have been particularly favorable to the proliferation of false news. They also present unique challenges for all kind of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study on the recent advances in terms of detection, characterization and mitigation of false news that propagate on social media, as well as the challenges and the open questions that await future research on the field. We use a data-driven approach, focusing on a classification of the features that are used in each study to characterize false information and on the datasets used for instructing classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.
Recommendation algorithms (RAs) have been pointed out as one of the major culprits of misinformation spreading in the digital sphere. However, it is still unclear how these algorithms propagate misinformation, e.g., which particular recommendation approaches are more prone to suggest misinforming items, or which internal parameters of the algorithms could be influencing their misinformation propagation capacity more. Motivated by this fact, in this work, we present an analysis of the effect of some of the most popular recommendation algorithms on the spread of misinformation on Twitter (X). A set of guidelines on how to adapt these algorithms is provided based on such analysis and a comprehensive review of the research literature.
The advent of an information distribution mechanism constituted by self-exploration, network neighbors, and especially algorithms has aroused widespread concern about the reinforcement of misinformation beliefs and the resulting polarization. However, few existing studies fully consider the inherent characteristics of misinformation (e.g., evoking repulsive effects), the adaptive nature of social relationships, or the impacts of algorithmic interventions on online misinformation and the formation of social groups. To comprehensively investigate the coevolution of user misinformation beliefs and social relationships under algorithmic interventions, we propose a novel model configured as follows: 1) a nonlinear social influence function constructed to reflect the process of reinforcing misinformation beliefs; 2) probabilities for the rewiring of links among individuals determined by their opinion distance and social distance; and 3) multiple algorithmic mechanisms reformulated, covering five recommendation processes and information distribution rules integrating three information sources. Extensive numerical simulation experiments reveal diversification, radicalization, and polarization of misinformation. We observe that the introduction of moderate repulsive interactions fosters the emergence of diverse opinions. In the absence of algorithmic interventions, misinformation naturally evolves into radicalization, while the introduction of algorithmic interventions exacerbates polarization, particularly with extensive reliance on content-based recommendations and excessive allowance of distributed opinions from recommendations. Notably, we discover that encouraging recommendation based on predetermined information effectively reverses the trend of misinformation evolution. Our research contributes to clarifying the interaction between human behavior and artificial intelligence, as well as providing insights for misinformation supervision and governance.
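The nonlinear influence function and distance-based rewiring above are specified only verbally; the following is a minimal sketch of the coevolution idea, with all functional forms, thresholds, and rates assumed rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
belief = rng.uniform(0, 1, N)                 # misinformation belief in [0, 1]
adj = rng.random((N, N)) < 0.05               # random undirected start network
adj = np.triu(adj, 1)
adj = adj | adj.T

for _ in range(20000):
    i = int(rng.integers(N))
    nbrs = np.flatnonzero(adj[i])
    if nbrs.size == 0:
        continue
    j = int(rng.choice(nbrs))
    d = belief[j] - belief[i]
    if abs(d) < 0.3:                          # assumed: close opinions reinforce
        belief[i] = np.clip(belief[i] + 0.3 * d, 0, 1)
    else:                                     # assumed: distant opinions repel
        belief[i] = np.clip(belief[i] - 0.1 * d, 0, 1)
        if rng.random() < abs(d):             # rewiring odds grow with opinion distance
            adj[i, j] = adj[j, i] = False
            k = int(rng.integers(N))
            if k != i:
                adj[i, k] = adj[k, i] = True

print("belief std (polarization indicator):", round(float(belief.std()), 3))
```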
The spread of online rumor poses challenges to social peace and public order. Traditional research on rumor diffusion starts from the rumor itself, without considering the symbiosis and confrontation of anti-rumor and motivation-rumor. This study proposes a diffusion method for online rumor based on three messages: rumor, anti-rumor, and motivation-rumor. First, given the ability of representation learning to learn unsupervised features, we use representation learning to address the diversity and complexity of the content and structure feature space. In particular, we designed a new representation method, Rumor2vec, for the potential structural features of the rumor diffusion network. Second, considering the mutual promotion and suppression of the three messages, we constructed a new network topology using the cooperative and competitive relationships based on evolutionary game theory. Finally, considering the ability of graph convolutional networks (GCNs) to convolve non-Euclidean structured data such as social networks, and in view of the time effectiveness of topic evolution, this study proposes a dynamic, game-GCN (evolutionary game theory GCN)-based rumor diffusion model. Experiments show that the model can not only predict group behavior under a rumor topic but also accurately reflect the cooperation and competition among multiple messages.
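The game-GCN model itself cannot be reproduced from the abstract, but the GCN building block it relies on is standard. Below is a minimal sketch of one graph-convolution layer (the Kipf-Welling propagation rule) on a toy diffusion network; it is generic, not Rumor2vec or the paper's full architecture, and all data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # toy diffusion network
H = rng.normal(size=(4, 8))                 # node features (e.g., learned embeddings)
W = rng.normal(size=(8, 4))                 # layer weights

A_hat = A + np.eye(4)                                  # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
H_next = np.maximum(A_norm @ H @ W, 0)                 # aggregate, transform, ReLU
print(H_next.shape)                                    # (4, 4) new node representations
```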
The rapid spread of online rumors has significant negative impacts on the online ecosystem and social order, which is closely tied to users’ cognitive traits. To explore the mechanisms of rumor propagation driven by cognition and mitigate its harm, we propose an information diffusion model based on the rumor, antirumor, and cognitive game. First, to address the uncertainty and difficulty in quantifying cognitive biases, and considering the advantages of fuzzy logic theory in handling uncertainty, a fuzzy logic-based algorithm for measuring user cognitive biases is proposed. Moreover, recognizing the nonlinear relationships among various factors, polynomial functions are introduced as the output of fuzzy rules to more accurately describe these complex relationships. Second, regarding the symbiotic and antagonistic relationships among multiple types of rumor information under the influence of cognitive biases, and leveraging the strengths of game theory in analyzing complex systems characterized by the coexistence of symbiosis and antagonism, an evolutionary game-based rumor-antirumor user behavior mechanism is developed. This provides a robust theoretical foundation for understanding user state transitions and their evolutionary patterns. Finally, integrating the aforementioned research, and considering that the dynamic evolution of user cognition leads to variations in trust responses and attitudes toward rumor and antirumor information, the concepts of trust states—trust in rumor (TR) and trust in antirumor (TA)—are introduced into the classical susceptible-infectious-recovered (SIR) model. On this basis, a rumor propagation susceptible-trusted-infectious-recovered (STIR) model incorporating user cognitive biases and evolutionary game theory is further constructed. Experimental results demonstrate that this model effectively reveals the game dynamics of multiple types of rumor information, providing a more efficient framework for studying rumor propagation in social networks.
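As a rough illustration of how trust states can be grafted onto SIR, here is a deterministic toy version of an STIR-style system; the compartment transitions and all rates are assumptions, not the paper's calibrated model (which additionally involves fuzzy logic and game dynamics).

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed rates, illustrative only
beta_r, beta_a = 0.4, 0.3   # S -> trust-in-rumor (TR) / trust-in-antirumor (TA)
alpha = 0.5                 # TR -> I: trusting users start spreading the rumor
gamma, delta = 0.2, 0.4     # I -> R recovery, TA -> R acceptance of the debunk

def stir(t, y):
    S, TR, TA, I, R = y
    dS  = -beta_r * S * I - beta_a * S * (TA + R)
    dTR =  beta_r * S * I - alpha * TR
    dTA =  beta_a * S * (TA + R) - delta * TA
    dI  =  alpha * TR - gamma * I
    dR  =  gamma * I + delta * TA
    return [dS, dTR, dTA, dI, dR]

sol = solve_ivp(stir, (0, 60), [0.99, 0, 0, 0.01, 0], dense_output=True)
print(np.round(sol.sol(np.linspace(0, 60, 7)), 3))  # rows: S, TR, TA, I, R
```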
Under the influence of Generative Artificial Intelligence (Generative AI), the mechanisms and chains of rumor generation and dissemination have undergone significant transformation. Generative AI not only reduces the cost of rumor fabrication but also enhances the credibility and deceptiveness of false information. Diverse motivational drivers, precision-targeted dissemination, audience psychological mechanisms, and algorithmic recommendation systems collectively accelerate the spread and amplification of AI-generated rumors. However, current legal governance faces multiple dilemmas: the rapid evolution of AI technology has outpaced legislative progress; evidentiary identification is complicated by the technical opacity of AI systems; and cross-border coordination remains fragmented. To address these challenges, this paper argues that: (1) at the technical level, regulatory mechanisms should be developed to strengthen AI content detection, traceability, and authenticity verification; (2) at the societal level, public empowerment should be enhanced through media literacy education to block rumor dissemination; and (3) at the legal level, accountability mechanisms should be refined to optimize the governance of AI-generated rumors.
The recommendation algorithm can break the restriction of the topological structure of social networks, enhance the communication power of information (positive or negative) on social networks, and, to a certain extent, guide the way news is transmitted in social networks. In order to solve the problem of data sparsity in news recommendation for social networks, this paper proposes a deep learning-based recommendation algorithm in social networks (DLRASN). First, the algorithm processes behavioral data in a serializable way when users in the same social network browse information. Then, global variables are introduced to optimize the encoding of the central sequence of Skip-gram, so that online users' browsing habits can be learned. Finally, the information that target users are interested in can be calculated by the similarity formula and recommended in social networks. Experimental results show that the proposed algorithm can improve recommendation accuracy.
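DLRASN's optimized Skip-gram encoding is not detailed here, but the core recipe (treat browsing sessions as sentences, learn item embeddings with Skip-gram, then recommend by similarity) can be sketched with gensim; the session data and hyperparameters below are hypothetical.

```python
from gensim.models import Word2Vec

# hypothetical browsing sessions: each user's clicked news IDs, in order
sessions = [["n1", "n2", "n3", "n4"],
            ["n2", "n3", "n5"],
            ["n1", "n3", "n4", "n5"],
            ["n6", "n7", "n8"],
            ["n6", "n8", "n9"]]

# Skip-gram (sg=1) learns item embeddings from co-browsing context,
# exactly as word embeddings are learned from sentences
model = Word2Vec(sessions, vector_size=16, window=3, min_count=1,
                 sg=1, epochs=200, seed=7)

# recommend items with the highest cosine similarity to a just-read item
print(model.wv.most_similar("n3", topn=3))
```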
This study is inspired by current image restoration technology. If we regard the users participating in a rumor as image pixels, then, much as the recovery of a pixel is affected by the pixel itself and its neighbors, the prediction of user behavior in rumor diffusion can be regarded as image restoration applied to a pixel-blurred user-behavior image. We first propose a Diffusion2pixel algorithm that transforms the user relationship network of topic diffusion into an image pixel matrix. To cope with the diversity and complexity of the diffusion feature space, the user relationship network is reduced to a low-rank dense vectorization by representation learning before being pixelated by cutting and diffusion. Second, considering the competitive relationship between rumor and anti-rumor, a transition matrix of rumor mutual influence is established by evolutionary game theory. A mutual influence model of rumor and anti-rumor is then proposed. Finally, we combine the transition matrix of rumor mutual influence with a simple prediction method, Graph-CNN, for rumor and anti-rumor topic diffusion based on a dynamic iteration mechanism. Experiments confirm that the proposed model can effectively predict group diffusion trends of rumors and reflect the competitive relationship between rumor and anti-rumor.
In social networks, studying rumor propagation patterns is essential for curbing the spread of rumors. Given the coexistence and conflict of multiple-type rumor information, as well as users’ cognitive differences, this article presents a rumor propagation model grounded in user cognition and evolutionary game theory. First, considering the potential impact of social relationships between users on rumor propagation, the KD-Tree algorithm is employed to uncover hidden connections between users, thereby enriching the topology of the user’s social network. Second, a user behavior driving mechanism for rumor, anti-rumor, and motivation-rumor types is constructed based on evolutionary games to reflect the interactive and strategic nature of users’ responses. Moreover, the Lotka-Volterra equation is utilized to explore the dynamic game of multi-type rumor information and the cognitive process of users. Finally, to address differences in users’ cognition, this article introduces the anti-rumor trust state A and the motivation-rumor trust state M, which arise from users’ exposure to multiple types of rumor information. Based on these trust states, a rumor propagation model, SIAMR, is constructed using user cognition and evolutionary game theory. Experiments demonstrate that the model accurately captures the dynamic interactions between multi-type rumor information and the transmission process of rumor topics in social networks. The proposed model integrates cognitive psychology with a strategic interaction framework, offering a more realistic representation of rumor propagation behavior in the real world. Experimental results reveal that SIAMR improves prediction accuracy by 14.23% over baseline models in simulating the dynamics of multiple types of rumors, effectively capturing users’ cognitive influences and the mechanisms of information competition.
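The abstract above invokes the Lotka-Volterra equation for the dynamic game among message types. A minimal two-message competitive Lotka-Volterra sketch, with illustrative coefficients rather than the paper's fitted values, looks like this; with these particular coefficients the anti-rumor eventually displaces the rumor.

```python
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.9, 0.7])        # growth rates of rumor (x0) and anti-rumor (x1)
a = np.array([[1.0, 1.2],       # a[i][j]: how strongly message j suppresses i
              [0.6, 1.0]])      # all coefficients illustrative

def lv(t, z):
    x = np.maximum(z, 0.0)
    return r * x * (1.0 - a @ x)  # competitive Lotka-Volterra

sol = solve_ivp(lv, (0, 80), [0.05, 0.01])
print(np.round(sol.y[:, -1], 3))  # -> roughly [0, 1]: anti-rumor displaces rumor
```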
While the proliferation of fake news poses a significant threat to information integrity, existing efforts to counter it, especially within personalized news recommendation systems, have proven inadequate. Traditional methods, which often rely on classifiers to filter out fake content, are limited by their accuracy and their inability to fully capture the diverse interests of users. To address these challenges, we propose PRISM --- Protection-enhanced Recommendation with Interest-aware Sequential Modeling --- a novel framework based on diffusion models. PRISM harnesses the generative and control capabilities of diffusion models to progressively learn the implicit distribution of user interests from their reading history, thereby generating personalized recommendations that align with both their linguistic preferences and interest domains. Furthermore, PRISM incorporates pre-trained authenticity representations as constraints during content generation, ensuring the credibility of the recommended news and effectively curbing the spread of fake news. Comprehensive evaluations from multiple dimensions demonstrate the superiority of our model.
A large amount of disinformation on social media has penetrated into various domains and brought significant adverse effects. Understanding their roots and propagation becomes desired in both academia and industry. Prior literature has developed many algorithms to identify this disinformation, particularly rumor detection. Some leverage the power of deep learning and have achieved promising results. However, they all focused on building predictive models and improving forecast accuracy, while two important factors - popularity and conformity biases - that play critical roles in rumor spreading behaviors are usually neglected. To overcome such an issue and alleviate the bias from these two factors, we propose a rumor detection framework to learn debiased user preference and effective event representation in a causal view. We first build a graph to capture causal relationships among users, events, and their interactions. Then we apply the causal intervention to eliminate popularity and conformity biases and obtain debiased user preference representation. Finally, we leverage the power of graph neural networks to aggregate learned user representation and event features for the final event type classification. Empirical experiments conducted on two real-world datasets demonstrate the effectiveness of our proposed approach compared to several cutting-edge baselines.
The traditional rumor diffusion model primarily takes the rumor itself and user behavior as entry points. The complexity of user behavior, the multidimensionality of the communication space, the imbalance of data samples, and the symbiosis and competition between rumor and anti-rumor are the challenges associated with in-depth study of rumor communication. Given these challenges, this study proposes a group behavior model for rumor and anti-rumor. First, this study considers the diversity and complexity of the rumor propagation feature space and the advantages of representation learning in feature extraction. We adopt corresponding representation learning methods for the content and structure of rumor and anti-rumor to reduce the spatial feature dimension of the rumor-spreading data and to express the full-featured information representation uniformly and densely. Second, this paper introduces evolutionary game theory, combined with user-influenced rumor and anti-rumor, to reflect the conflict and symbiotic relationship between rumor and anti-rumor. We obtain a network structural feature expression of the degree of user influence on rumor and anti-rumor when expressing the structural characteristics of group communication relationships. Finally, aiming at the timeliness of rumor topic evolution, the whole model is proposed: the life cycle of the rumor is time-sliced and discretized to synthesize the full-featured information representation of rumor and anti-rumor. The experiments show that the model can not only effectively analyze user group behavior regarding rumors but also accurately reflect the competitive and symbiotic relation between rumor and anti-rumor diffusion.
In online social networks, rumors have become a flashpoint for public opinion. This article proposes a rumor and anti-rumor diffusion model based on a multi-information multidimensional compound game, considering the complex game antagonism in the process of spreading multiple messages. First, in view of the diversity and complexity of the feature space of communicating individuals, communication structure, and multitype rumor and anti-rumor news, the communication network is expressed as a low-dimensional, real-valued, dense vectorization by combining the representation learning algorithm and centering on the content, structure, and attributes of information transmission. Second, considering the antagonistic and competitive relationship between multiple rumors and anti-rumors, and considering the compound iterative cascade of multi-information rumor, this article proposes a multi-information rumor and anti-rumor game model from a multidimensional perspective. Finally, the information expression of the topic space and the complex game relationship between multiple messages are considered comprehensively. At the same time, considering the graph convolutional network (GCN)'s ability to convolve non-Euclidean data such as social networks, a dynamic, unified-representation, multimessage complex game topic propagation model, MMGameGCN (MultiMessage-game GCN), is proposed based on the graph convolutional neural network. The experiment shows that the model can more faithfully and effectively reflect the process of network rumor propagation in the multimessage transmission network, effectively analyze the user group behavior of rumor topics under multiple messages, and correctly perceive the spread situation of rumor and anti-rumor.
… at 1146 ("Here we investigate the differential diffusion of true, … tweets containing false rumors or will do so in the future. … 59 Recommendation algorithms can distort the character …
Social media, particularly microblogging platforms, are essential for rapid information sharing and public discussion but often allow rumors, that is, unverified information, to spread rapidly during events or persist over time. These platforms also offer opportunities to study the dynamics of rumors and develop computational methods to assess their veracity. In this paper, we provide a comprehensive review of existing theoretical foundations, interdisciplinary challenges, and emerging advancements in rumor detection research, with a focus on integrating theoretical and computational approaches. Drawing on insights from computer science, cognitive psychology, and sociology, we explore methodologies, such as multimodal fusion, graph-based models, and attention mechanisms, while highlighting gaps in real-world scalability, ethical transparency, and cross-platform adaptability. Using a systematic literature review and bibliometric analysis, we identify trends, methods, and gaps in current research. Our findings emphasize interdisciplinary collaboration to develop adaptable, efficient, and ethical rumor detection strategies. We also highlight the critical role of combining socio-psychological insights with advanced computational techniques to address the human factors in rumor spread. Furthermore, we emphasize the importance of designing systems that remain effective across diverse cultural and linguistic contexts, enhancing their global applicability. We propose a conceptual framework integrating diverse theories and computational techniques, offering a roadmap for improving detection systems and addressing misinformation challenges on microblogging platforms.
Background: With the rapid development of the digital society and the Internet, public health systems are increasingly confronted with novel phenomena and emergent challenges originating from cyberspace. Single medical interventions often prove insufficient to address the complex and multifaceted nature of contemporary health issues, necessitating the integration of interdisciplinary expertise. Methods: This study focuses on the propagation mechanisms of online rumors during the early stages of the COVID-19 pandemic, highlighting their intricate interactions with social trust. Using a system dynamics approach, a comprehensive “generation–diffusion–dissipation” model of rumors was constructed, revealing the differentiated role of social trust across stages. Results: Trust deficits create fertile ground for rumor emergence, while the impersonation and exploitation of trust facilitate rapid rumor diffusion beyond the bounds of rational skepticism. Conversely, trust reconstruction serves as a critical driver for rumor dissipation and the restoration of social cognition. The spread of rumors is influenced not only by information veracity but also by the embedding of social relationships, emotional mobilization, and manipulation of trust pathways. Conclusion: Effective rumor governance in public health systems requires a shift from reactive “post hoc debunking” toward proactive “preemptive prevention,” encompassing transparent information disclosure, interactive communication mechanisms, targeted interventions against “trust hijacking,” and trust reconstruction strategies guided by social-psychological restoration. This study employs a system dynamics approach to elucidate the mechanisms of rumor propagation while empirically validating the dynamic role of trust within the governance system. It not only offers a new analytical framework for understanding the evolution of trust under digital social conditions but also provides strategic insights for enhancing the governance capacity of public health systems in responding to rumors.
… differences between rumor spreading and epidemic spreading in social networks, … , a new rumor spreading model, Susceptible-Infected-Hibernator-Removed (SIHR) model, is …
The study of rumor spreading has become an important issue on complex social networks. On the basis of prior studies, we propose a modified susceptible–exposed–infected–…
… the rumor spreading. We show that the introduction of trust mechanism reduces the … rumor size and the velocity of rumor spreading, but increases the critical thresholds on both networks…
… of rumor spreading in the classic preferential attachment model of Bollobás et al. which is considered to be a valuable model for social networks. We prove that, in these networks: (a) …
… (2015), we introduce an SEI news propagation model on social networks. A total number of users (N) are subdivided into different compartments. The schematic flow is described in Fig. …
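A generic SEI (susceptible-exposed-infected) sketch of the compartmental flow named in this snippet, with assumed rates and a plain forward-Euler update; it is a stand-in for the paper's model, not its exact equations.

```python
# generic SEI news-propagation sketch (assumed rates, forward Euler)
# S: not yet reached, E: exposed but not yet sharing, I: actively sharing
S, E, I = 0.99, 0.0, 0.01            # normalized compartments, S + E + I = 1
beta, sigma, dt = 0.5, 0.3, 0.1      # contact rate, activation rate, time step
for _ in range(2000):
    new_exposed = beta * S * I * dt
    new_sharing = sigma * E * dt
    S -= new_exposed
    E += new_exposed - new_sharing
    I += new_sharing
print(round(S, 3), round(E, 3), round(I, 3))  # without removal, I tends to 1
```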
The ubiquity of handheld devices provides straightforward access to the Internet and social networking. Quick and easy updates from social networks help users in many situations, such as natural and man-made disasters. In such situations, individuals share information with the people in their network without checking the veracity of posts, which leads to the issue of rumor diffusion in a social network. Detection of rumors and identification of their sources play a vital role in controlling the diffusion of misinformation in a social network, and form an active research domain in social network analysis. Source detection of such misinformation is an interesting and challenging task due to the fast diffusion of information and the dynamic evolution of the social network. Accurate and quick detection of the rumor source is important and useful in many application domains, such as locating the source of disease in an epidemic model, the start of a virus spread, or the source of information or rumor in a social network. Most existing reviews of source detection relate to various application domains and network perspectives, but given current social networking usage and its influence on society, reviewing source detection approaches in the social network setting is a crucial and important topic. The objective of this paper is to study and analyze source detection approaches for rumors or misinformation in a social network. As an outcome of the literature study, we present a pictorial taxonomy of factors to be considered for source detection and a classification of current source detection approaches in the social network. Focus is given to various state-of-the-art source detection approaches for rumors or misinformation and to comparisons between approaches in social networks. This paper also covers research challenges in current source detection approaches, public datasets and future research directions.
Nowadays, social networks are widely used as fast and ubiquitous media for sharing information. Rumor, as unverified information, also spreads considerably in social networks. The study of how rumor spreads and how it can be controlled plays an important role in reducing the social and psychological damage of rumors in social networks. Although recent research has mainly focused on epidemic models and the structure of social networks, it ignores the impact of people's decisions on the rumor process. In this paper, an evolutionary game model is proposed to analyze the rumor process in a social network considering the impacts of people's decisions on rumor propagation and control. The model considers a rumor control mechanism via sending anti-rumor messages through rumor control centers. Factors affecting people's decisions, including social anxiety, people's attitude toward rumor/anti-rumor, strength of rumor/anti-rumor, influence of rumor control centers, and participation of people in discussions, are studied in the model. The proposed game model is analyzed by replicator dynamics equations and simulation of the imitation update rule on a synthetic (Barabási–Albert) graph and two real-world graphs of Twitter and Facebook. We further analyze the model in various environments considering people's characteristics and the society's situation. We also use a real rumor dataset of Twitter (the Pheme dataset) to first compare the trends of people's strategies (rumor/anti-rumor spreader and ignorant) derived by the model with the real trends of people's traits in rumor spreading on Twitter. Then we conduct a number of sensitivity analysis experiments to show the impact of different factors on the rumor process; in fact, we analyze the trends of people's strategies in the Pheme dataset assuming various possible conditions. The analyses show that propagation of convincing anti-rumor messages and placement of rumor control centers are effective in debunking the rumor. Moreover, it is shown that people's attitude toward rumor/anti-rumor has a significant impact on rumor spreading. Besides, factors such as social anxiety and the strength of the rumor accelerate rumor propagation.
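The model above is analyzed with replicator dynamics, whose canonical form for strategy shares x and payoff matrix A is dx_i/dt = x_i((Ax)_i - x·Ax). A toy sketch over three assumed strategies follows; the payoff entries are illustrative, not the paper's calibrated values.

```python
import numpy as np

# strategies: [rumor spreader, anti-rumor spreader, ignorant]; payoffs assumed
A = np.array([[0.2, -0.4, 0.3],
              [0.5,  0.1, 0.2],
              [0.0,  0.0, 0.1]])

x = np.array([0.4, 0.1, 0.5])                  # initial strategy shares
dt = 0.05
for _ in range(4000):
    fitness = A @ x                            # expected payoff of each strategy
    x = x + dt * x * (fitness - x @ fitness)   # replicator update
    x = np.clip(x, 0, None)
    x /= x.sum()                               # stay on the simplex
print(np.round(x, 3))                          # long-run mix of strategies
```

With these payoffs the anti-rumor strategy dominates in the long run, mirroring the paper's finding that convincing anti-rumor messages can debunk the rumor.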
… diffusion in online social networks, which is just one of our motivations in this paper. … real social network, we will analyze numerically the dynamic behavior of the model in next section. …
… In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and …
Rumors can propagate at great speed through social networks and produce significant damage. In order to control rumor propagation, spreading correct information to counterbalance the effect of the rumor seems more appropriate than simply blocking rumors by censorship or network disruption. In this paper, a competitive diffusion model, namely the Linear Threshold model with One Direction state Transition (LT1DT), is proposed for modeling competitive information propagation of two different types in the same network. The problem of minimizing rumor spread in social networks is explored, and a novel heuristic based on diffusion dynamics is proposed to solve this problem under the LT1DT. Experimental analysis on four different networks shows that the novel heuristic outperforms PageRank centrality. By seeding correct information in the proximity of rumor seeds, the novel heuristic performs as well as the greedy approach in scale-free and small-world networks but runs three orders of magnitude faster.
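LT1DT's one-direction state transition is specific to the paper, but the competitive linear-threshold idea it extends can be sketched as follows; the toy graph, fixed thresholds, and truth-first tie-breaking are assumptions made for illustration.

```python
# toy directed influence graph: node -> [(neighbor, edge weight)]
edges = {0: [(2, 0.6), (3, 0.2)],
         1: [(3, 0.6), (4, 0.3)],
         2: [(4, 0.4)],
         3: [(4, 0.3)],
         4: []}
nodes = list(edges)
threshold = {v: 0.5 for v in nodes}       # fixed for clarity; usually drawn per node
state = {v: None for v in nodes}          # None / "rumor" / "truth"
state[0], state[1] = "rumor", "truth"     # competing seed sets

changed = True
while changed:
    changed = False
    incoming = {v: {"rumor": 0.0, "truth": 0.0} for v in nodes}
    for u in nodes:                       # accumulate influence from active nodes
        if state[u] is not None:
            for v, w in edges[u]:
                incoming[v][state[u]] += w
    for v in nodes:
        if state[v] is None:
            # truth-first tie-breaking is an assumption; LT1DT additionally lets
            # rumor-influenced nodes transition to truth, but not the reverse
            for label in ("truth", "rumor"):
                if incoming[v][label] >= threshold[v]:
                    state[v] = label
                    changed = True
                    break
print(state)   # {0: 'rumor', 1: 'truth', 2: 'rumor', 3: 'truth', 4: 'truth'}
```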
… to contrast the proposed model with the traditional models, and perform simulation … of rumor spreading. The experiments show that (1) the rumor propagation simulated by our model …
A group in a mobile social network is normally a particular contact in which invited individuals can share messages. People in a mobile social network sometimes share rumor messages with contacts in a group who are not necessarily familiar with them: they get rumor messages posted by different users and forward them to other individuals or groups. There are some models for the analysis of rumor propagation in mobile social networks, but none of them have considered rumor propagation into groups of nodes. In this paper we study rumor spreading in mobile social networks when group propagation is also considered. For this purpose, we extend the SIR information propagation model and investigate the impact of group propagation on the dynamics of the rumor spreading process. We conduct steady-state analysis to investigate the basic reproduction number of rumor spreading in the model. Furthermore, agent-based modeling and simulation are used to analyze the final size of the rumor under various group propagation rates as well as the impacts of group parameters on group spreading dynamics. Simulation results obtained by the Monte Carlo method show that group propagation effectively increases the rumor spreading speed. We show that having large groups is more effective for rumor spreading than having more groups. Furthermore, we analyze the influence of network structure on rumor spreading when group propagation is considered; for this purpose, Erdős–Rényi and Barabási–Albert models of social networks are considered, and it is shown that rumor spreading behavior in these networks has no significant difference when we have rumor propagation in groups.
In this paper, a modified susceptible–infected–removed (SIR) model has been proposed to explore rumor diffusion on complex social networks. We take variation of connectivity into …
… Inspired by various mechanisms of rumor spreading model, in this paper, we propose a new model with influence mechanism, which is close to the real rumor spreading process by the …
Social networks have become one of the most important information media in the world, and the study of rumor propagation phenomena in social networks helps us understand the intrinsic laws of propagation behavior. We distinguish two propagation channels of rumor spreading on social networks: point-to-point propagation and group propagation. We thus propose an improved SIR model and establish the corresponding mean-field equations. By using the differential dynamics method and next-generation matrix theory, the equilibrium point and the basic reproduction number R0 are calculated. Moreover, a geometric method is used to prove the asymptotic stability of the model at the equilibrium point and the bifurcation phenomenon at R0=1. Finally, numerical simulations are carried out to verify the correctness of the theoretical results, and the influence of the rumor spreading mechanisms on the propagation process is analyzed.
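If the group channel is collapsed into a second linear transmission rate (a strong simplification of the model above, whose group term is nonlinear in general), the two-channel mean-field SIR and its reproduction number can be sketched as follows; in general R0 requires the next-generation matrix, but with a single infected class it reduces to (beta_p + beta_g)/gamma.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_p, beta_g, gamma = 0.15, 0.10, 0.2   # point-to-point, group, recovery rates
print("R0 =", (beta_p + beta_g) / gamma)  # 1.25 > 1: the rumor takes off

def sir(t, y):
    S, I, R = y
    infect = (beta_p + beta_g) * S * I    # both channels folded into one force
    return [-infect, infect - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 200), [0.99, 0.01, 0.0])
print("final rumor size:", round(float(sol.y[2, -1]), 3))
```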
… introduced a standard model of rumor spreading in 1965 which is called DK model [14], many … (MK) model [15]. We also adopt epidemic model as basis to describe the rumor spreading. …
The problem of identifying rumors is of practical importance especially in online social networks, since information can diffuse more rapidly and widely than its offline counterpart. In this paper, we identify characteristics of rumors by examining the following three aspects of diffusion: temporal, structural, and linguistic. For the temporal characteristics, we propose a new periodic time series model that considers daily and external shock cycles, where the model demonstrates that rumors likely have fluctuations over time. We also identify key structural and linguistic differences in the spread of rumors and non-rumors. Our selected features classify rumors with high precision and recall in the range of 87% to 92%, higher than other state-of-the-art approaches to rumor classification.
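The paper's periodic time series model is not given in closed form here; below is a hedged sketch in the same spirit, fitting a "daily cycle plus external shock" curve to synthetic hourly mention counts. The functional form and all parameters are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def mentions(t, base, amp, shock, decay):
    daily = amp * np.sin(2 * np.pi * t / 24.0)               # 24-hour rhythm
    burst = shock * np.exp(-np.maximum(t - 48.0, 0.0) / decay)
    return base + daily + np.where(t >= 48.0, burst, 0.0)    # shock lands at hour 48

t = np.arange(168.0)                                          # one week, hourly
rng = np.random.default_rng(3)
y = mentions(t, 10, 3, 25, 12) + rng.normal(0, 0.5, t.size)   # synthetic series
params, _ = curve_fit(mentions, t, y, p0=[8, 2, 10, 8])
print(np.round(params, 2))   # recovered [base, amp, shock, decay]
```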
… of new media, eg, microblogging, rumors spread faster and … rumor spreading process with the SIR (Susceptible, Infected, and Recovered) model, and thus makes the rumor spreading …
Developing effective interventions to counter misinformation is an urgent goal, but it also presents conceptual, empirical, and practical difficulties, compounded by the fact that misinformation research is in its infancy. This paper provides researchers and policymakers with an overview of which individual-level interventions are likely to influence the spread of, susceptibility to, or impact of misinformation. We review the evidence for the effectiveness of four categories of interventions: boosting (psychological inoculation, critical thinking, and media and information literacy); nudging (accuracy primes and social norms nudges); debunking (fact-checking); and automated content labeling. In each area, we assess the empirical evidence, key gaps in knowledge, and practical considerations. We conclude with a series of recommendations for policymakers and tech companies to ensure a comprehensive approach to tackling misinformation.
The onset of the COVID-19 pandemic was accompanied by a pandemic of fake news spreading over social media (SM). Fact checking might help combat fake news and a plethora of fact-checking platforms exist, yet few people actually use them. Moreover, whether fact checking is effective in preventing citizens from falling for fake news, particularly COVID-19 related, is unclear. Against this backdrop, we examine potential antecedents to fact checking that can be a target for interventions and establish that fact checking is actually effective for preventing the public from falling for harmful COVID-19 fake news. We use a representative U.S. sample collected in April of 2020 and find that awareness of fake news and patterns of active SM use (e.g., commenting on content instead of reading it) increases the fact checking of COVID-19 fake news, whereas SM homophily reduces fact checking and the effects of SM use as users are trapped in “echo chambers”. We also find that fact checking helps users identify accurate information on how to protect themselves against COVID-19 instead of false and often harmful claims propagated on SM. These findings highlight the importance of fact checking for combating COVID-19 fake news and help identify potential interventions.
The way a problem is framed shapes its solutions. This article reframes the problem of misinformation and examines the implications of this shift for interventions against misinformation. It advances five arguments that challenge common narratives about misinformation and invite us to rethink both the problem and its solutions. For instance, exposure to misinformation is lower than often believed, people are less gullible than commonly assumed, and misinformation often reflects, rather than causes, underlying sociopolitical issues. These insights point toward strategies that address the root causes of the problem rather than surface symptoms. Key shifts include focusing on the demand for misinformation, fostering trust in reliable sources, and strengthening democratic institutions. Combating misinformation effectively requires a clear understanding of the problem and a break with popular misconceptions about it.
Although previous research has offered important insights into the consequences of mis- and disinformation and the effectiveness of corrective information, we know markedly less about how different types of corrective information – news media literacy interventions and fact-checkers – can be combined to counter different forms of misinformation. Against this backdrop, this paper reports on experiments in the US and the Netherlands (N = 1,091) that exposed people to evidence-based or fact-free anti-immigration misinformation, fact-checkers and/or a media literacy intervention. The main findings indicate that evidence-based misinformation is seen as more accurate than fact-free misinformation, and the combination of news media literacy interventions and fact-checkers is most effective in lowering issue agreement and perceived accuracy of misinformation across countries. These findings have important implications for journalism practice and policy makers that aim to combat mis- and disinformation.
… Our toolbox does not evaluate the interventions’ potential … misinformation interventions, we aimed to ensure that our toolbox covers all relevant interventions in the field of misinformation …
Current interventions to combat misinformation, including fact-checking, media literacy tips and media coverage of misinformation, may have unintended consequences for democracy. We propose that these interventions may increase scepticism towards all information, including accurate information. Across three online survey experiments in three diverse countries (the United States, Poland and Hong Kong; total n = 6,127), we tested the negative spillover effects of existing strategies and compared them with three alternative interventions against misinformation. We examined how exposure to fact-checking, media literacy tips and media coverage of misinformation affects individuals’ perception of both factual and false information, as well as their trust in key democratic institutions. Our results show that while all interventions successfully reduce belief in false information, they also negatively impact the credibility of factual information. This highlights the need for further improved strategies that minimize the harms and maximize the benefits of interventions against misinformation.
… fact checking to a brief media literacy intervention. We show that the impact of fact checking is limited to the corrected fake news… two weeks after the intervention. A plausible mechanism …
… We evaluate an intervention in South Africa that encouraged … broader understanding of misinformation, how to combat it, and … fact-checks can not only debunk the specific misinformation …
ABSTRACT While research consistently shows that fact-checking improves belief accuracy, debates persist about how to best measure and interpret expressions of factual beliefs. We argue that this has led to ambiguity in interpreting the results of studies on fact-checking, including whether fact-checking effects in fact decrease confidently held false beliefs. In a two-wave, nationally representative online experiment on beliefs about immigration, we use a variety of theoretically motivated approaches toward observing the influence of fact-checking messages. Results suggest that the effects of fact-checking are robust to different methods of measuring misinformed beliefs – even after accounting for belief certainty – and across different analytical approaches. Effects are evident among those who harbored inaccurate beliefs with high degrees of confidence. We conclude with a discussion of the implications of these findings for future studies of corrections and practical implications for fact-checking efforts.
This study aimed to examine the effects of commenting on a Facebook misinformation post by comparing a user agency–based intervention and machine agency–based intervention in the form of artificial intelligence (AI) fact-checking labeling on attitudes toward the COVID-19 vaccination. We found that both interventions were effective at promoting positive attitudes toward vaccination compared to the misinformation-only condition. However, the intervention effects manifested differently depending on participants’ residential locations, such that the commenting intervention emerged as a promising tool for suburban participants. The effectiveness of the AI fact-checking labeling intervention was pronounced for urban populations. Neither of the fact-checking interventions showed salient effects with the rural population. These findings suggest that although user agency- and machine agency–based interventions might have potential against misinformation, these interventions should be developed in a more sophisticated way to address the unequal effects among populations in different geographic locations.
Fact-checking as a specific genre has become more important than ever over the past two decades to counter misinformation. However, we know from previous research that people rarely actively search for fact-checks. This study therefore argues for the importance of fact-checks as so-called direct content interventions on social media. More specifically, this article discusses the findings about an innovative way to implement these fact-check interventions, namely through the Tooties, cartoon characters that kindly point out the incorrectness of a refutable claim. Based on both a real-life implementation of the Tooties and an online experiment, this study provides insights about the feasibility and effectiveness of the Tooties as an innovative type of fact-check intervention compared to some of the more widely used types of fact-check interventions. This can prompt further research into the added value of fact-check interventions on social media and help news media and fact-check organizations in developing and implementing them.
Social media vaccine misinformation can negatively influence vaccine attitudes. It is urgent to develop communication approaches to reduce the misinformation's impact. This study aimed to test the effects of fact-checking labels for misinformation on attitudes toward vaccines. An online survey experiment with 1198 participants recruited from a U.S. national sample was conducted in 2018. Participants were randomly assigned to six conditions: misinformation control, or fact-checking label conditions attributed to algorithms, news media, health institutions, research universities, or fact-checking organizations. We analyzed differences in vaccine attitudes between the fact-checking label and control conditions. Further, we compared perceived expertise and trustworthiness of the five categories of fact-checking sources. Fact-checking labels attached to misinformation posts made vaccine attitudes more positive compared to the misinformation control condition (P = .003, Cohen's d = 0.21). Conspiracy ideation moderated the effect of the labels on vaccine attitudes (P = .02). Universities and health institutions were rated significantly higher on source expertise than other sources. Mediation analyses showed labels attributed to universities and health institutions indirectly resulted in more positive attitudes than other sources through perceived expertise. Exposure to fact-checking labels on misinformation can generate more positive attitudes toward vaccines in comparison to exposure to misinformation. Incorporating labels from trusted universities and health institutions on social media platforms is a promising direction for addressing the vaccine misinformation problem. This points to the necessity for closer collaboration between public health and research institutions and social media companies to join efforts in addressing the current misinformation threat.
How do the reasons people post misinformation affect how they respond to fact checking interventions? In this research, we conducted a qualitative study of people who shared misinformation. We started with stories marked as false by a popular fact checker, Snopes, and identified people who posted those stories on Reddit. We interviewed the posters about the stories they shared and identified five behaviorally distinct personas: Reason to Disagree, Changed Belief, Steadfast Non-Standard Belief, Sharing to Debunk, and Sharing for Humor. Our findings suggest that research to craft better interventions to counter misinformation might benefit from tailoring to specific personas, which can serve as design tools for ongoing misinformation intervention research.
The COVID-19 pandemic has prompted social media platforms to take unprecedented steps—ranging from false tags to journalistic factchecks—to stanch the flow of misinformation that could pose a health risk. However, there is little evidence about the relative efficacy of these approaches in this unique context of a pandemic. Using a pair of survey experiments, we examine whether false tags and journalistic factchecks reduce accuracy misperceptions and sharing propensity on social media that can spread false claims. False tags had little effect on subjects’ accuracy assessments and social media sharing. Journalistic factchecks that offer accurate information to counter misinformation were more effective in reducing both misperceptions and sharing on social media. Further, we find no evidence of partisan backfire effects, even in response to interventions against claims with a plausible partisan valence. Our results suggest that journalistic factchecks provide an effective counternarrative to COVID-19 misinformation even in the context of the increasing politicization of America’s pandemic response and polarization more generally.
Misinformation makes democratic governance harder, especially in developing countries. Despite its real-world import, little is known about how to combat misinformation outside of the United States, particularly in places with low education, accelerating Internet access, and encrypted information sharing. This study uses a field experiment in India to test the efficacy of a pedagogical intervention on respondents’ ability to identify misinformation during the 2019 elections (N = 1,224). Treated respondents received hour-long in-person media literacy training in which enumerators discussed inoculation strategies, corrections, and the importance of verifying misinformation, all in a coherent learning module. Receiving this hour-long media literacy intervention did not significantly increase respondents’ ability to identify misinformation on average. However, treated respondents who support the ruling party became significantly less able to identify pro-attitudinal stories. These findings point to the resilience of misinformation in India and the presence of motivated reasoning in a traditionally nonideological party system.
The prevalence of misinformation within social media and online communities can undermine public security and distract attention from important issues. Fact-checking interventions, in which users cite fact-checking websites such as Snopes.com and FactCheck.org, are a strategy users can employ to refute false claims made by their peers. While laboratory research suggests such interventions are not effective in persuading people to abandon false ideas, little work considers how such interventions are actually deployed in real-world conversations. Using approximately 1,600 interventions observed on Twitter between 2012 and 2013, we examine the contexts and consequences of fact-checking interventions. We focus in particular on the social relationship between the individual who issues the fact-check and the individual whose facts are challenged. Our results indicate that though fact-checking interventions are most commonly issued by strangers, they are more likely to draw user attention and responses when they come from friends. Finally, we discuss implications for designing more effective interventions against misinformation.
… These interventions warn users about misinformation or false news at the time when they are exposed to a headline and are intended to help readers notice false information as soon as …
During the COVID-19 pandemic, the World Health Organization provided a checklist to help people distinguish between accurate information and misinformation. In controlled experiments in the United States and Germany, we investigated the utility of this ordered checklist and designed an interactive version to lower the cost of acting on checklist items. Across interventions, we observe non-trivial differences between the two countries in participants' performance in distinguishing accurate information from misinformation, and we discuss some possible reasons that may predict the future helpfulness of the checklist in different environments. The checklist item that provides source labels was most frequently followed and was considered most helpful. Based on our empirical findings, we recommend practitioners focus on providing source labels rather than on interventions that support readers performing their own fact-checks, even though this recommendation may be influenced by the WHO's chosen order. We discuss the complexity of providing such source labels and provide design recommendations.
Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low-quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers into believing false information, which makes it difficult and nontrivial to detect based on news content alone; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself, as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations based on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
The widespread dissemination of misinformation in social media has recently received a lot of attention in academia. While the problem of misinformation in social media has been intensively studied, there are seemingly different definitions for the same problem, and inconsistent results in different studies. In this survey, we aim to consolidate the observations, and investigate how an optimal method can be selected given specific conditions and contexts. To this end, we first introduce a definition for misinformation in social media and we examine the difference between misinformation detection and classic supervised learning. Second, we describe the diffusion of misinformation and introduce how spreaders propagate misinformation in social networks. Third, we explain characteristics of individual methods of misinformation detection, and provide commentary on their advantages and pitfalls. By reflecting applicability of different methods, we hope to enable the intensive research in this area to be conveniently reused in real-world applications and open up potential directions for future studies.
… detect the misinformation reaching to a specific user. Therefore, we study a τ-Monitor Placement problem for cases where partial knowledge of misinformation … a MSMN algorithm, shown …
Recently, the use of social networks such as Facebook, Twitter, and Sina Weibo has become an inseparable part of our daily lives. They are considered convenient platforms for users to share personal messages, pictures, and videos. However, while people enjoy social networks, many deceptive activities such as fake news or rumors can mislead users into believing misinformation. Besides, the spread of massive amounts of misinformation in social networks has become a global risk. Therefore, misinformation detection (MID) in social networks has gained a great deal of attention and is considered an emerging area of research interest. We find that several studies related to MID have introduced new research problems and techniques. While important, automated detection of misinformation is difficult to accomplish, as it requires an advanced model to understand how related or unrelated reported information is when compared to real information. Existing studies have mainly focused on three broad categories of misinformation: false information, fake news, and rumor detection. Therefore, related to the previous issues, we present a comprehensive survey of automated misinformation detection on (i) false information, (ii) rumors, (iii) spam, (iv) fake news, and (v) disinformation. We provide a state-of-the-art review of MID where deep learning (DL) is used to automatically process data and create patterns to make decisions, not only to extract global features but also to achieve better results. We further show that DL is an effective and scalable technique for state-of-the-art MID. Finally, we suggest several open issues that currently limit real-world implementation and point to future directions along this dimension.
The credibility of information in social networks has attracted a lot of interest due to its important role in spreading information. We argue that the quality of information or objects created in social networks can be analyzed by using their provenance data. In this paper, we propose an algorithm that assesses the credibility of information on social networks to detect the propagation of fake or malicious information. To test the usability of the proposed algorithm, we introduce a prototype implementation and discuss it in detail. We test the prototype software on a large-scale synthetic social provenance dataset. The initial results are promising.
Along with the development of the Internet, the emergence and widespread adoption of the social media concept have changed the way news is formed and published. News has become faster, less costly and easily accessible with social media. This change has come along with some disadvantages as well. In particular, beguiling content, such as fake news made by social media users, is becoming increasingly dangerous. The fake news problem, despite being introduced for the first time very recently, has become an important research topic due to the high volume of social media content. Writing fake comments and news on social media is easy for users. The main challenge is to determine the difference between real and fake news. In this paper, a two-step method for identifying fake news on social media is proposed. In the first step of the method, a number of pre-processing steps are applied to convert the unstructured data set into a structured data set. The texts in the data set containing the news are represented as vectors using the TF weighting method and a document-term matrix. In the second step, twenty-three supervised artificial intelligence algorithms are applied to the data set transformed into structured format by these text-mining methods. In this work, an experimental evaluation of the twenty-three intelligent classification methods has been performed on existing public data sets, and these classification models have been compared depending on four evaluation metrics.
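The two-step recipe above (a TF-weighted document-term matrix, then supervised classification) maps directly onto a few lines of scikit-learn; the toy documents and labels below are invented for illustration, and LogisticRegression merely stands in for any of the paper's twenty-three algorithms.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["president signs new bill into law today",           # toy corpus with
        "shocking cure doctors don't want you to know",       # invented labels
        "city council approves budget for road repairs",
        "secret miracle pill melts fat overnight"]
labels = [0, 1, 0, 1]                                         # 0 = real, 1 = fake

# CountVectorizer builds the TF-weighted document-term matrix; any supervised
# classifier can be swapped in for LogisticRegression
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["miracle cure they don't want you to know"]))  # -> [1]
```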
… are considered as the fitness function in our novel GA based fake news detection algorithm. In our proposed algorithm, SVM and LR classifiers both achieved 61% accuracy in LIAR …
In the current social media era, people share pieces of information of different types with each other using various social media platforms. Such information is often not authentic or reliable, and is referred to as misinformation. Nowadays, detection of misinformation has regained considerable attention among researchers. Misinformation detection is related to the text classification problem and connects the content level of news articles with detection analysis based on machine learning algorithms such as Naive Bayes and Support Vector Machines. In domain-specific analysis, data labeled by reliability domain are rarely available. Previous research relied on news articles collected from so-called reputable and suspicious websites and labeled accordingly. We leverage fact-checking websites to collect individually labeled news articles with regard to the veracity of their content and use these data to test the cross-domain generalization of a classifier trained on bigger text collections but labeled according to source reputation. This paper provides a comprehensive survey of misinformation and its detection using various social media platforms. Future directions for research are also discussed. Collecting well-balanced and carefully assessed training data is therefore a priority for developing robust misinformation detection systems in the future.
The growth of social network platforms has boosted the consumption of news thanks to easy access, rapid spreading behavior, and low cost. However, this revolution in how information is released has also fueled the growth of something that has always walked side by side with real news: fake news. After the 2016 U.S. presidential election, the term became more prominent, and more dangerous, because of its negative effect on society. In this context, recent contributions have addressed several related topics, such as spreading behavior, methods for containing the spread, and fake news detection algorithms. Despite the growth of this research area, it is difficult for a researcher to identify the current state of the art in fake news detection. To overcome this obstacle, this paper presents a systematic review of the literature that gives an overview of the research area and analyzes the high-quality studies on fake news detection. Through this systematic literature review, more than 6,000 articles were found according to our search protocol. These studies then went through screening stages to ensure their quality; 32 high-quality studies were selected according to the PRISMA flow diagram defined in this paper. The studies were then categorized by contribution type and algorithm. This work shows that Twitter and Weibo are the social media platforms most often studied, and that deep learning algorithms, especially LSTM, gave the best detection results. The review also exposes the lack of research on fake news detection in languages other than English. Finally, we expect this study to help researchers identify the greatest contributions as well as research opportunities.
The spread of fake news has no precedent in human history: the development of the World Wide Web and the adoption of social media have given people a pathway to spread misinformation to the world. Everyone uses the Internet, creating and sharing content on social media, but not all of the information is valid, and no one verifies the originality of the content. Judging the veracity of content is sometimes difficult even for researchers and intelligent systems. For example, during Covid-19, misinformation about the outbreak spread worldwide, and much of the false information spread faster than the virus itself. Such misinformation creates problems for the public and misleads people about proper treatment. This work aims to improve the prediction rate for such content. We investigate the ability of machine learning classifiers (Naive Bayes, Logistic Regression, Support Vector Machine, Decision Tree, Random Forest, and K-Nearest Neighbor) and deep learning models (Convolutional Neural Networks and Long Short-Term Memory, LSTM). These machine learning and deep learning models are trained and tested on a Covid-19 dataset of 1,375,592 tweets.
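A hedged sketch of the kind of comparison the abstract describes: the same TF-IDF features feed each classical classifier and test accuracy is reported. The four synthetic tweets stand in for the 1,375,592-tweet Covid-19 data set, and the CNN/LSTM variants are omitted to keep the sketch short.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

tweets = ["vaccine trial passes independent review",
          "garlic water kills the virus overnight",
          "health agency updates travel guidance",
          "phone towers spread the infection"]
y = [0, 1, 0, 1]  # 0 = reliable, 1 = misinformation

X = TfidfVectorizer().fit_transform(tweets)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                      stratify=y, random_state=0)
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC(),
            DecisionTreeClassifier(), RandomForestClassifier(),
            KNeighborsClassifier(n_neighbors=1)):
    clf.fit(Xtr, ytr)  # same features, different model, as in the abstract
    print(type(clf).__name__, accuracy_score(yte, clf.predict(Xte)))
```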
… a method to obtain ground truth for spreader detection based on the suspension list, where data distribution is in line with the real world. Since mining misinformation in a social network …
Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.
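The following is a simplified stand-in for the paper's model, assuming each user's latent credibility θ ~ Beta(α, β) is the probability of voicing an item's true label; collapsing θ yields Beta-Bernoulli predictive counts, and Gibbs sampling then alternates over the latent news truths. The paper's actual Bayesian network is richer than this sketch.

```python
import random
from collections import defaultdict

def collapsed_gibbs(opinions, n_iters=200, alpha=2.0, beta=1.0, seed=0):
    # opinions: {(news_id, user_id): 0/1 opinion}. alpha > beta encodes a
    # prior that users are right more often than not, which breaks the
    # label-flipping symmetry of the model.
    rng = random.Random(seed)
    news = sorted({n for n, _ in opinions})
    truth = {n: rng.randint(0, 1) for n in news}
    agree, total = defaultdict(int), defaultdict(int)
    for (n, u), o in opinions.items():
        agree[u] += (o == truth[n]); total[u] += 1
    for _ in range(n_iters):
        for n in news:
            # Remove item n from the collapsed per-user counts.
            for (m, u), o in opinions.items():
                if m == n:
                    agree[u] -= (o == truth[n]); total[u] -= 1
            # Beta-Bernoulli posterior predictive for each candidate truth.
            w = []
            for t in (0, 1):
                p = 1.0
                for (m, u), o in opinions.items():
                    if m == n:
                        denom = alpha + beta + total[u]
                        p *= ((alpha + agree[u]) / denom if o == t
                              else (beta + total[u] - agree[u]) / denom)
                w.append(p)
            truth[n] = 1 if rng.random() < w[1] / (w[0] + w[1]) else 0
            for (m, u), o in opinions.items():
                if m == n:
                    agree[u] += (o == truth[n]); total[u] += 1
    return truth

votes = {("n1", "u1"): 1, ("n1", "u2"): 1, ("n1", "u3"): 0,
         ("n2", "u1"): 0, ("n2", "u2"): 0, ("n2", "u3"): 1}
print(collapsed_gibbs(votes))  # opinions of credible users dominate the truths
```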
With the development of the mobile Internet, e-health has become increasingly connected with people's daily lives. However, health information on the Internet is severely corrupted by misinformation, especially for the elderly. It is necessary to analyze the characteristics of health-related misinformation on the Internet and to design automated detection tools. In this study, we focus on analyzing the common characteristics of reliable and unreliable health-related information on Chinese online social media, and on exploring possible detection methods using machine learning algorithms. We first collect a dataset containing both reliable and unreliable health-related articles from multiple Chinese online social media sites, with 2,296 reliable and 2,085 unreliable articles included. We then analyze their differences in writing style, text topic, and feature distribution through both intuitive and statistical analysis. We also manually select 104 linguistic and statistical features that are useful for machine learning classifiers. Lastly, we propose a Health-related Misinformation Detection framework (HMD) that includes a feature-based method and a text-based method for detecting unreliable health-related information. Experiments verify the performance of the proposed HMD method.
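As a toy illustration of the feature-based branch of such a framework, the sketch below extracts a handful of hand-crafted linguistic/statistical features (stand-ins for the paper's 104) and fits a simple classifier; the specific features and model are assumptions made for illustration.

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def article_features(text):
    words = text.split()
    return [
        len(words),                                            # article length
        sum(w.isupper() for w in words) / max(len(words), 1),  # all-caps ratio
        text.count("!") + text.count("?"),                     # emotive punctuation
        len(re.findall(r"\d", text)),                          # digits: stats vs hype
    ]

docs = ["Clinical trial of 1200 patients shows a modest effect.",
        "SHOCKING! Doctors HATE this one weird cure!!!"]
y = [0, 1]  # 0 = reliable, 1 = unreliable
X = np.array([article_features(d) for d in docs])
clf = LogisticRegression(max_iter=500).fit(X, y)
print(clf.predict(X))
```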
Deceptive content, such as fake news created by social media users, is becoming increasingly dangerous. Individuals and society are negatively affected by the spread of low-quality news on social media, so fake news needs to be distinguished from real news to eliminate these disadvantages. This paper proposes a novel approach to the fake news detection (FND) problem on social media. With this approach, the FND problem is treated as an optimization problem for the first time, and two metaheuristic algorithms, Grey Wolf Optimization (GWO) and Salp Swarm Optimization (SSO), are adapted to the FND problem, also for the first time. The proposed FND approach consists of three stages. The first stage is data preprocessing. The second stage adapts GWO and SSO to construct a novel FND model. The last stage uses the proposed FND model for testing. The approach has been evaluated on three different real-world datasets, and the results have been compared with seven supervised artificial intelligence algorithms. The results show that the GWO algorithm outperforms both the SSO algorithm and the other artificial intelligence algorithms. GWO thus appears well suited to solving various types of social media problems.
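Below is a minimal generic Grey Wolf Optimization loop of the kind such an adaptation would build on: a wolf is a real-valued vector (e.g., feature weights) and the fitness is whatever the FND model scores. The toy fitness function and dimensions are placeholders, not the paper's formulation.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=10, n_iters=50, lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for it in range(n_iters):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(-scores)[:3]]  # top three wolves
        a = 2 - 2 * it / n_iters  # shifts from exploration to exploitation
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)
    return wolves[int(np.argmax([fitness(w) for w in wolves]))]

# Toy fitness: recover a hidden "ideal" feature weighting (placeholder for
# a classifier's validation accuracy on a fake news data set).
ideal = np.array([0.8, 0.1, 0.6])
print(gwo(lambda w: -np.sum((w - ideal) ** 2), dim=3).round(2))
```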
The paper explores the use of concepts from cognitive psychology to evaluate the spread of misinformation, disinformation, and propaganda in online social networks. Analyzing online social networks to identify metrics that infer cues of deception enables us to measure the diffusion of misinformation. The cognitive process behind the decision to spread information involves answering four main questions: the consistency of the message, the coherency of the message, the credibility of the source, and the general acceptability of the message. We use cues of deception to analyze these questions and obtain solutions for preventing the spread of misinformation. We propose an algorithm to detect the deliberate spread of false information, enabling users to make informed decisions when spreading information in social networks. The computationally efficient algorithm uses the collaborative-filtering property of social networks to measure the credibility of information sources as well as the quality of news items. The proposed methodology is validated on the online social network Twitter.
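A minimal sketch of the collaborative-filtering intuition, under assumed rules: the majority verdict per item defines a consensus, each user is weighted by historical agreement with that consensus, and item quality is the weight-averaged rating. None of these update rules are taken from the paper itself.

```python
from collections import defaultdict

ratings = {  # (user, item): 1 = endorsed as true, 0 = flagged as false
    ("u1", "n1"): 1, ("u2", "n1"): 1, ("u3", "n1"): 0,
    ("u1", "n2"): 0, ("u2", "n2"): 0, ("u3", "n2"): 1,
}

# Majority verdict per item acts as the consensus signal.
votes = defaultdict(list)
for (u, n), r in ratings.items():
    votes[n].append(r)
verdict = {n: int(sum(v) > len(v) / 2) for n, v in votes.items()}

# Each user's credibility = historical agreement with the consensus.
agree, seen = defaultdict(int), defaultdict(int)
for (u, n), r in ratings.items():
    agree[u] += (r == verdict[n]); seen[u] += 1
weight = {u: agree[u] / seen[u] for u in seen}

# Item quality = credibility-weighted average of user ratings.
quality = {n: sum(weight[u] * r for (u, m), r in ratings.items() if m == n)
              / sum(weight[u] for (u, m), _ in ratings.items() if m == n)
           for n in votes}
print(weight, quality)
```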
… We compare our solution with three other algorithms, with T varying from 2 to 14. In all sub-figures in Fig. 6, transmission delays follow the power-law distribution and φ = 0.20. …
The fast expansion of social media fuels the spread of misinformation, which disrupts people's normal lives. It is urgent to identify misinformation in social media and to detect it early. In dynamic and complicated social media scenarios, conventional methods concentrate mainly on feature engineering, which fails to cover potential features in new scenarios and has difficulty shaping elaborate high-level interactions among significant features. Moreover, a recent Recurrent Neural Network (RNN) based method is not suited to practical early detection of misinformation and is biased toward the latest inputs. In this paper, we propose a novel method, the Convolutional Approach for Misinformation Identification (CAMI), based on Convolutional Neural Networks (CNN). CAMI can flexibly extract key features scattered across an input sequence and shape high-level interactions among significant features, which helps to identify misinformation effectively and achieve practical early detection. Experimental results on two large-scale datasets validate the effectiveness of the CAMI model on both misinformation identification and early detection tasks.
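To illustrate the max-over-time pooling idea that lets a CNN pick up key features wherever they occur in a sequence, here is a small PyTorch text-CNN sketch; the layer sizes and filter widths are illustrative and not CAMI's actual configuration.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=1000, emb=32, n_filters=16, widths=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(nn.Conv1d(emb, n_filters, w) for w in widths)
        self.fc = nn.Linear(n_filters * len(widths), 2)  # misinformation vs. not

    def forward(self, ids):                 # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)   # (batch, emb, seq_len)
        # Max-over-time pooling keeps each filter's strongest response,
        # wherever in the sequence it fired.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

logits = TextCNN()(torch.randint(0, 1000, (4, 20)))  # 4 dummy posts, 20 tokens
print(logits.shape)  # torch.Size([4, 2])
```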
Social media users are more likely to be exposed to similar views and tend to avoid contrasting views, especially when they are part of a community of social media users. In this study, we investigate the presence of user communities and leverage them as a tool to detect misinformation on social media, specifically on X (formerly known as Twitter). We propose a misinformation detection framework, namely Similarity-based Misinformation Detection (SiMiD), that employs microblogs and utilizes user-follower interactions within a social network. Our approach extracts important textual features of social media posts using a transformer-based language model. We use contrastive learning and pseudo-labeling to fine-tune the language model. Then, we measure the similarity for each social media post based on its relevance to each user in the communities. Finally, we train a machine learning model to identify the truthfulness of social media posts using these similarity scores. We evaluate our approach on three social media datasets, compare our method with twelve state-of-the-art approaches, and answer five research questions. The experimental results, supported by statistical tests, show that contrastive learning and user communities can enhance the detection of misinformation on social media. Our model can identify misinformation content by achieving a consistently high weighted F1 score of over 90% across all datasets, even employing only a small number of users in communities. We make our implementations publicly available and provide all details that are necessary for the reproducibility of experiments.
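A hedged sketch of the similarity-score features: toy random vectors stand in for transformer embeddings of posts and community centroids, each post gets one cosine similarity per community, and a downstream classifier is trained on those scores. Community construction, contrastive fine-tuning, and pseudo-labeling are omitted, so this is an illustration of the feature design rather than SiMiD itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
communities = rng.normal(size=(3, 4))  # 3 community centroids (toy embeddings)
posts = rng.normal(size=(6, 4))        # 6 post embeddings (toy)
y = [0, 1, 0, 1, 0, 1]                 # toy truthfulness labels

# One similarity score per (post, community) pair becomes the feature vector.
X = np.array([[cosine(p, c) for c in communities] for p in posts])
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```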
… for detecting fake news and jointly learns about users’ flagging accuracy over time. Our algorithm employs … and show the power of leveraging community signals for fake news detection. …
The merged, unified grouping organizes research on misinformation propagation into nine parallel directions: ① propagation mechanisms and dynamic diffusion modeling (including epidemiological/game-theoretic models and real-world trends); ② governance interventions (game-theoretic control, platform/algorithm regulation, and effect evaluation); ③ surveys of propagation governance and tool-oriented frameworks; ④ online detection and identification (based on propagation signals, supervised and unsupervised); ⑤ machine learning/deep learning and domain-specific detection; ⑥ diffusion-trend identification and propagation prediction; ⑦ causal debiasing and recommendation/network bias; ⑧ source tracing and propagation-path tracking; ⑨ generative-AI-driven propagation chains and their governance dilemmas. It also retains two foundational elements: concept definitions and task frameworks, and dataset/framework support.