Defining and Classifying False Information
Core Concept Definitions and Taxonomic Frameworks
This group of studies seeks to define the concept of false information at the theoretical level, distinguishing core concepts such as misinformation, disinformation, and malinformation. The researchers trace the evolution of the "information disorder" framework and attempt to build multidimensional classification models or taxonomies along dimensions such as intent, impact, metaphor, and social cybernetics.
- Numbers Do Not Lie: A Bibliometric Examination of Machine Learning Techniques in Fake News Research(Andra Sandu, Ioana Ioanăș, Camelia Delcea, M. Florescu, Liviu-Adrian Cotfas, 2024, Algorithms)
- The Spread Mechanism of Misinformation on Social Media(Panpan Chen, 2025, Journal of Computer Technology and Electronic Research)
- “Disinformation Aims to Mislead; Misinformation Thrives in Ignorance”: Insights from Experts and Non-Experts in Greek-Speaking Cyprus(Loukia Taxitari, Thanos Sitistas, Eleni Gavriil, 2025, Social Sciences)
- Fake news, rumor, information pollution in social media and web: A contemporary survey of state-of-the-arts, challenges and opportunities(P. Meel, D. Vishwakarma, 2020, Expert Syst. Appl.)
- Toward a taxonomy of newspaper information quality: An experimental model and test applied to Venezuela dimensions found in information quality(Luis M. Romero-Rodríguez, Ignacio Aguaded, 2017, Journalism)
- A social informatics perspective on misinformation, disinformation, deception and conflict(N. Hara, Pnina Fichman, E. Meyer, Yimin Chen, Soo Young Rieh, 2019, Proceedings of the Association for Information Science and Technology)
- Types Of Manipulation In Online Media And Social Networks(Viktotiya Shevchenko, 2025, Obraz)
- Conceptualizing the Media Ecosystem: Addressing Misinformation, Disinformation, Fake News, and Deepfakes — Key Insights from Interviews with Professional Journalists(Anastasiia Iufereva, 2025, European Conference on Knowledge Management)
- Facts, values, and the epistemic authority of journalism: How journalists use and define the terms fake news, junk news, misinformation, and disinformation(Johan Farkas, Sabina Schousboe, 2024, Nordicom Review)
- Deciphering misinformation and disinformation: insights from structural coupling and penetration(Yj Sohn, Heidi Hatfield Edwards, T. Petersen, 2024, Kybernetes)
- Look at what the real facts and experts say! The use of expert references and objectivity claims in disinformation: A qualitative exploration and typology(M. Hameleers, E. van der Goot, 2024, Journalism)
- Conceptual evolution of information disorder(Ruslana Margova, 2025, Papers from the International Scientific Conference of the European Studies Department, Jean Monnet Centre of Excellence, Faculty of Philosophy at Sofia University “St. Kliment Ohridski”)
- A typology of disinformation intentionality and impact(Aaron M. French, V. Storey, Linda G. Wallace, 2023, Information Systems Journal)
- ’The Devil Goes by Many Names’: A Critical Examination of Propaganda, PR, and Fake News as Forms of Information Disorder(Holger Pötzsch, Christina Lentz, 2024, Panoptikum)
- Combating contamination and contagion: Embodied and environmental metaphors of misinformation(Yvonne M. Eadon, Stacy Wood, 2024, Convergence)
- ‘Fake news’ is the invention of a liar: How false information circulates within the hybrid news system(Fabio Giglietto, L. Iannelli, A. Valeriani, L. Rossi, 2018, Current Sociology)
- To Examine Misinformation, Disinformation And Malinformation Responsible For Information Disorder In The Society– A Pilot Study(A. Mukhopadhyay, Dr Jigar Shah, 2022, Journal for ReAttach Therapy and Developmental Diversities)
- Devising a framework for assessing the subjectivity and objectivity of information taxonomy projects(Francie Alexander, 2014, J. Documentation)
Challenges from Generative AI and Digital Forgery Technologies
This group of studies focuses on how emerging technologies (generative AI, large language models, deepfakes) are reshaping the false-information ecosystem. The research covers the identification challenges posed by synthetic realities, the risks of automated creation of AI-generated content, and the negative effects of these technologies in producing fraud, stereotyping, and systemic harm.
- Critical Intersections: AI Misinformation, Fact-Checking, Platform Dynamics, and Cultural Shifts(Vivian Hsueh Hua Chen, 2025, Emerging Media)
- Social Risks and Public Opinion Governance in the AIGC Era: From the Generation of False Content to Algorithmic Response Mechanisms(Qin Li, Xingnian Zhang, 2025, Scientific Journal of Economics and Management Research)
- Detection of Fake News in Romanian: LLM-Based Approaches to COVID-19 Misinformation(Alexandru Dima, Ecaterina Ilis, Diana Florea, Mihai Dascălu, 2025, Inf.)
- TRANSFORMATIVE ROLE OF ARTIFICIAL INTELLIGENCE IN GLOBAL COMMUNICATION: MINIMISING MISINFORMATION, DISINFORMATION, CULTURAL DIVERSITY AND FOSTERING GLOBAL UNDERSTANDING(Babajide Adeyinka Joseph, 2024, Lagos Journal of Contemporary Studies in Education)
- Fakes as a factor of modern information and communication conditions(S. Kolobova, 2025, Litera)
- FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media(Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, Huan Liu, 2018, Big Data)
- Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy(Kokil Jaidka, Tsuhan Chen, Simon Chesterman, W. Hsu, Min-Yen Kan, Mohan Kankanhalli, Mong Li Lee, Gyula Seres, Terence Sim, Araz Taeihagh, Anthony K. H. Tung, Xiaokui Xiao, Audrey Yue, 2024, Digital Government: Research and Practice)
- AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices Across Brazil, Germany, and the United Kingdom(Regina Cazzamatta, Aynur Sarısakaloğlu, 2025, Emerging Media)
- The Age of Synthetic Realities: Challenges and Opportunities(J. P. Cardenuto, Jing Yang, Rafael Padilha, Renjie Wan, Daniel Moreira, Haoliang Li, Shiqi Wang, Fernanda Andal'o, Sébastien Marcel, Anderson Rocha, 2023, ArXiv)
- Misinformation, Fraud, and Stereotyping: Towards a Typology of Harm Caused by Deepfakes(Paulina Trifonova, Sukrit Venkatagiri, 2024, Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing)
Computational Detection and Automated Identification of False Information
These studies emphasize methodology and technical implementation, using machine learning, deep learning (e.g., hybrid neural networks), natural language processing (NLP), and network topology analysis (e.g., link prediction) to automatically identify false information on social media. The research explores paths toward automated monitoring and tracing along content, social-relationship, and temporal dimensions.
- Control of false information in social networks: A short review of communication mechanisms and management strategies(Zhenglin Liang, 2024, Applied and Computational Engineering)
- Automated Hybrid Deep Neural Network Model for Fake News Identification and Classification in Social Networks(Roshan R. Karwa, Sunil Gupta, 2023, International Journal of Automation and Smart Technology)
- Taxonomy of Link Prediction for Social Network Analysis: A Review(Herman Yuliansyah, Z. Othman, A. Bakar, 2020, IEEE Access)
- Mapping the Landscape of Misinformation Detection: A Bibliometric Approach(Andra Sandu, Ioana Ioanăș, Camelia Delcea, Laura-Mădălina Geantă, Liviu-Adrian Cotfas, 2024, Inf.)
- Fake news research trends, linkages to generative artificial intelligence and sustainable development goals(R. Raman, Vinith Kumar Nair, Prema Nedungadi, Aditya Kumar Sahu, Robin Kowalski, S. Ramanathan, K. Achuthan, 2024, Heliyon)
- The Veracity Problem: Detecting False Information and its Propagation on Online Social Media Networks(Sarah Condran, 2024, Proceedings of the 33rd ACM International Conference on Information and Knowledge Management)
- Studying Fake News via Network Analysis: Detection and Mitigation(Kai Shu, H. Bernard, Huan Liu, 2018, ArXiv)
- Systematic Review of Fake News, Propaganda, and Disinformation: Examining Authors, Content, and Social Impact Through Machine Learning(D. Plikynas, Ieva Rizgelienė, Gražina Korvel, 2025, IEEE Access)
- Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society(Andrea Tomassi, Andrea Falegnami, Elpidio Romano, 2024, PLOS ONE)
- Fake news, disinformation and misinformation in social media: a review(Esma Aïmeur, Sabrine Amri, Gilles Brassard, 2023, Social Network Analysis and Mining)
- Features of fake news in Chinese social networks(Yihua Yang, 2025, Litera)
- Network typology, information sources, and messages of the infodemic twitter network under COVID‐19(Miyoung Chong, 2020, Proceedings of the Association for Information Science and Technology. Association for Information Science and Technology)
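The content-based detection pipeline that recurs throughout this cluster can be illustrated with a minimal sketch. The following code is not drawn from any of the cited papers: it is a self-contained, pure-Python naive Bayes classifier over bag-of-words features, one of the simplest members of the machine-learning family this literature builds on. The function names, the "fake"/"real" labels, and the toy training texts are all illustrative assumptions.

```python
import math
from collections import Counter

def train_nb(docs):
    """Fit a bag-of-words naive Bayes model.
    docs: iterable of (text, label) pairs."""
    word_counts = {}          # label -> Counter of token frequencies
    class_counts = Counter()  # label -> number of training documents
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts.setdefault(label, Counter()).update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify(text, model):
    """Return the label with the highest posterior log-probability."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # log prior plus log likelihood with Laplace (add-one) smoothing
        score = math.log(class_counts[label] / total_docs)
        n_label = sum(word_counts[label].values())
        for token in text.lower().split():
            score += math.log((word_counts[label][token] + 1)
                              / (n_label + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training set; the texts and labels are purely illustrative.
docs = [
    ("miracle cure doctors hate this secret", "fake"),
    ("shocking secret they refuse to tell you", "fake"),
    ("study finds modest benefit in new trial", "real"),
    ("officials report quarterly economic data", "real"),
]
model = train_nb(docs)
print(classify("secret miracle cure revealed", model))  # -> fake
```

The systems surveyed above replace these toy features with learned text representations, social-context signals, and propagation features, but the same train-then-score structure underlies them.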
False-Information Spread in Specific Domains (Health, Science, and Public Crises)
Against the backdrop of the COVID-19 pandemic, vaccination campaigns, or natural disasters, this group of studies analyzes how "infodemics" propagate in high-risk domains. The research examines the processes by which scientific knowledge is decontextualized or maliciously distorted, and how false information shapes public health beliefs, protective behavior, and public health policy.
- Fact-Checking the Crisis: COVID-19, Infodemics, and the Platformization of Truth(Kelley Cotter, J. DeCook, Shaheen Kanthawala, 2022, Social Media + Society)
- Journalistic Fact-Checking of Information in Pandemic: Stakeholders, Hoaxes, and Strategies to Fight Disinformation during the COVID-19 Crisis in Spain(Xosé López-García, C. Costa-Sánchez, Á. Vizoso, 2021, International Journal of Environmental Research and Public Health)
- Identifying Frames of the COVID-19 Infodemic: Thematic Analysis of Misinformation Stories Across Media(Ehsan Mohammadi, I. Tahamtan, Yazdan Mansourian, H. Overton, 2021, JMIR Infodemiology)
- AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News & Hate Speech Detection Dataset(Mohamed Seghir Hadj Ameur, H. Aliane, 2021, No journal)
- Unpacking Misinfodemic During a Global Health Crisis: A Qualitative Inquiry of Psychosocial Characteristics in Social Media Interactions(Sofia Olivares, Sahiti Myneni, 2022, Studies in health technology and informatics)
- Disinformation about COVID-19 in Ibero-America: An Analysis of Fact Checkers(L. Massarani, Amanda Medeiros, Igor Waltz, Tatiane Leal, 2023, TSN. Transatlantic Studies Network)
- Disinformation and Fact-Checking in the Face of Natural Disasters: A Case Study on Turkey–Syria Earthquakes(Sandra Méndez-Muros, Marián Alonso-González, Concha Pérez-Curiel, 2024, Societies)
- Health and science-related disinformation on COVID-19: A content analysis of hoaxes identified by fact-checkers in Spain(Bienvenido León, María-del-Pilar Martínez-Costa, Ramón Salaverría, I. López-Goñi, 2022, PLoS ONE)
- Where to find accurate information on attention-deficit hyperactivity disorder? A study of scientific distortions among French websites, newspapers, and television programs(Sébastien Ponnou, H. Haliday, F. Gonon, 2020, Health:)
- Fact-Checking Journalism: A Palliative Against the COVID-19 Infodemic in Ibero-America(Luis F. Martínez-García, Iliana Ferrer, 2023, Journalism & Mass Communication Quarterly)
- Public Reason in Times of Corona: Countering Disinformation in the Netherlands(M. Buijsen, 2025, Cambridge Quarterly of Healthcare Ethics)
- Science and sanity: A social epistemology of misinformation, disinformation, and the limits of knowledge(Laurence J. Kirmayer, 2024, Transcultural Psychiatry)
- Global utilization of online information for substance use disorder: An infodemiological study of Google and Wikipedia from 2004 to 2022.(Rowalt C. Alibudbud, J. Cleofas, 2022, Journal of nursing scholarship : an official publication of Sigma Theta Tau International Honor Society of Nursing)
- The Impact of Misinformation and Health Literacy on HIV Prevention and Service Usage(Renee Garett, S. Young, 2021, Journal of the Association of Nurses in AIDS Care)
- Misinformation and Disinformation on TikTok(Yitong Yu, 2025, Academic Journal of Management and Social Sciences)
- Fake news, misinformation, vaccine hesitancy and the role of community engagement in COVID-19 vaccine acceptance in Southern Ghana(Mawulom Kuatewo, W. Ebelin, P. Doegah, M. Aberese-Ako, S. Lissah, A. G. Kpordorlor, L. Kpodo, S. Djokoto, E. Ansah, 2025, PLOS One)
- Predatory marketing and false health promotion on social media: risk pathways in diet, fitness, and supplement communication(Youjing Huang, Xinchen Leng, Zirong Tian, 2026, Frontiers in Public Health)
- Health Misinformation and social media: Analysing the effects of fake news content on public behaviour in Pakistan(Muhammad Uzair Saqib, Sayra Hussain, 2025, Journal of Creative Arts and Communication (JCAC))
- Why do experts disagree? The development of a taxonomy(K. Deroover, Simon Knight, Paul F. Burke, T. Bucher, 2022, Public Understanding of Science)
- Tweeting Stigma: An Exploration of Twitter Discourse Regarding Medications Used for Both Opioid Use Disorder and Chronic Pain(Patricia Dekeseredy, C. Sedney, Bayan Razzaq, Treah Haggerty, H. Brownstein, 2021, Journal of Drug Issues)
- No one is immune to misinformation: An investigation of misinformation sharing by subscribers to a fact-checking newsletter(L. L. Saling, Devi Mallal, Falk Scholer, Russel Skelton, Damiano Spina, 2021, PLoS ONE)
Fact-Checking Mechanisms, Platform Interventions, and Governance Effectiveness
This group of literature focuses on practical responses to false information, including the epistemology, professional standards, and social function of fact-checking. The studies evaluate the effectiveness of platform interventions (such as community notes and crowdsourced moderation), users' trust in and resistance to corrective information, and the construction of democratic values underlying these practices.
- Fake news and fact-checking: Combating misinformation and disinformation in Canadian newsrooms and journalism schools(Brooks DeCillia, Brad Clark, 2023, Facts & Frictions: Emerging debates, pedagogies and practices in contemporary journalism)
- Understanding the shifting nature of fake news research: Consumption, dissemination, and detection(Rona Nisa Sofia Amriza, Tzu-Chuan Chou, Wiwit Ratnasari, 2025, Journal of the Association for Information Science and Technology)
- Changing landscape of fake news research on social media: a bibliometric analysis(Abdelkebir Sahid, Yassine Maleh, Karim Ouazzane, 2025, Quality & Quantity)
- Crowdsourcing the Mitigation of disinformation and misinformation: The case of spontaneous community-based moderation on Reddit(Giulio Corsi, Elizabeth Seger, Seán Ó hÉigeartaigh, 2024, Online Soc. Networks Media)
- Combating Misinformation/ Disinformation in Online Social Media: A Multidisciplinary View(M. Barni, Y. Fang, Yuhong Liu, Laura Robinson, K. Sasahara, Subramaniam Vincent, Xinchao Wang, Zhizheng Wu, 2022, APSIPA Transactions on Signal and Information Processing)
- Epistemology of Fact Checking: An Examination of Practices and Beliefs of Fact Checkers Around the World(Michael Koliska, Jessica Roberts, 2024, Digital Journalism)
- The performance of truth: politicians, fact-checking journalism, and the struggle to tackle COVID-19 misinformation(M. Luengo, David García-Marín, 2020, American Journal of Cultural Sociology)
- DISINFORMATION, FACT-CHECKING AND USER CONTENT ON THE EXAMPLE OF THE SITE FAKTCHECKER.UZ OF THE REPUBLIC OF UZBEKISTAN(Nargis Sunnat Kizi Кosimova, 2025, Dynamics of Media Systems)
- Perception of Credibility of Fact-Checking Platforms among Croatian Citizens(Tomislav Levak, Paula Rem, Matea Krauz, 2025, Media & Marketing Identity)
- “We Follow the Disinformation”: Conceptualizing and Analyzing Fact-Checking Cultures Across Countries(Daniela Mahl, Jing Zeng, Mike S. Schäfer, Fernando Antonio Egert, Thaiane Oliveira, 2024, The International Journal of Press/Politics)
- Community Notes Moderate Engagement With and Diffusion of False Information Online(Isaac Slaughter, A. Peytavin, Johan Ugander, Martin Saveski, 2025, ArXiv)
- Content tracing: examining fact-checking via a WhatsApp group during the COVID-19 pandemic(Edson C. Tandoc, Seth Seet, Weng Wai Mak, Ker Hian Lua, 2024, Behaviour & Information Technology)
- What Is the Problem with Misinformation? Fact-checking as a Sociotechnical and Problem-Solving Practice(Oscar Westlund, Valérie Bélair-Gagnon, Lucas Graves, Rebekah Larsen, Steen Steensen, 2024, Journalism Studies)
- Challenges for Fact-checking: Beyond False/True Verification(Angeliki Monnier, Céline Ségur, 2025, InMedia)
- Why Do Social Media Users Accept, Doubt or Resist Corrective Information? A Qualitative Analysis of Comments in Response to Corrective Information on Social Media(M. Hameleers, 2024, Journalism Studies)
Socio-Psychological Motivations, Cross-Cultural Differences, and Group Susceptibility
Taking an audience-centered and sociological perspective, this group of studies explores the psychological motivations behind sharing false information (e.g., confirmation bias, digital activism) and how these vary across cultural contexts (e.g., China, Estonia, Bangladesh) and specific social conflicts (e.g., ethno-religious conflict, anti-feminism), emphasizing how mechanisms of social trust shape the perception of information.
- Rumors, Fakes and Fact-checking in Chinese Social Media: The Case of WeChat(Yihua Yang, Aleksandr Anatol'evich Grabel'nikov, 2025, Филология: научные исследования)
- “We Never Really Talked About politics”: Race and Ethnicity as Foundational Forces Structuring Information Disorder Within the Vietnamese Diaspora(Sarah Nguyễn, Rachel E. Moran, Trung-Anh H. Nguyen, Linh Bui, 2023, Political Communication)
- Susceptibility of the Estonian Russian-speaking Audience to the Spread of Fake News and Information Disorder in the News Media(Mihhail Kremez, 2023, Central European Journal of Communication)
- Dissemination, Situated Fact-checking, and Social Effects of Misinformation among Rural Bangladeshi Villagers During the COVID-19 Pandemic(S. Sultana, Susan R. Fussell, 2021, Proceedings of the ACM on Human-Computer Interaction)
- The legitimation of screenshots as visual evidence in social media: YouTube videos spreading misinformation and disinformation(Olivia Inwood, Michele Zappavigna, 2024, Visual Communication)
- Motivation for Digital Activism on Instagram Among Postgraduate Students Amidst The Impact of Misinformation, Malinformation, and Disinformation(Zaki Khudzaifi Mahmud, 2025, Journal of Social Research)
- The “what” and “why” of fake news: an in-depth qualitative investigation of young consumers(Divyaneet Kaur, S. Kushwah, A. Sharma, 2025, Qualitative Market Research: An International Journal)
- Attitudes towards Fake News from the Perspective of the Experience of Adult Poles(Michał Szyszka, Arkadiusz Wąsiński, Artur Fabiś, Paweł Buchwald, 2025, Communication Today)
- Social Media Reels Usage towards Religious Minorities and Political Mind-shift in India- An In-depth Content Analysis(Annu Biswas, 2025, International Journal For Multidisciplinary Research)
- Rethinking the Misinformation with its Detrimental Impact on Lives: A Qualitative Approach(Orna Paul, S. Yesmin, 2023, Science & Technology Libraries)
- Participatory Design and Power in Misinformation, Disinformation, and Online Hate Research(Joseph S. Schafer, Kate Starbird, D. Rosner, 2023, Proceedings of the 2023 ACM Designing Interactive Systems Conference)
- Misinformation and disinformation in ethno-religious conflicts: a comparative study of media in Ghana and Nigeria(R. M. Adisa, S. K. Segbefia, Sadiq Mohammed, G. N. Trofimova, 2024, RUDN Journal of Studies in Literature and Journalism)
- Misinformation as woman: anti-feminism, news media, and disinformation’s feminized other(Leigh A. Goldstein, Meenasarani Linde Murugan, 2024, Feminist Media Studies)
- Fake News in the Digital Age: The Role of Social Media and its Consequences(Dr. Divyshikha Bharati Vidyapeeth, Ms. Priyanka Singh, Mukka Rajender, Singh Thakur, Dr. Adarsh Kumar Singh, Mr. Pushpendra Sachan, Mr. Jayant Rathee, 2025, 2025 IEEE DELCON - International Conference on Recent Smart Technologies in Engineering for Sustainable Development)
Classification Bias and Social-Services Research from a Broader Social-Science Perspective
This group of literature deals with systemic bias in mental health, special education, and medical classification. Although these studies do not directly discuss false information, they show how biased classification systems, cross-cultural misunderstanding, or data inequity can produce de facto misrepresentation, offering broader context for understanding the social roots of false information.
- On the concept, taxonomy, and transculturality of disordered grief(Afonso Gouveia, 2024, Frontiers in Psychology)
- Racial inequity in methadone dose at delivery in pregnant women with opioid use disorder.(E. Rosenthal, Vanessa L. Short, Yuri Cruz, C. Barber, J. Baxter, Diane J. Abatemarco, Amanda Roman, Dennis J. Hand, 2021, Journal of substance abuse treatment)
- Gender differences in response to war-related trauma and posttraumatic stress disorder – a study among the Congolese refugees in Uganda(Herbert E. Ainamani, T. Elbert, D. K. Olema, Tobias Hecker, 2020, BMC Psychiatry)
- Depression, anxiety, and post-traumatic stress disorder among youth in low and middle income countries: A review of prevalence and treatment interventions.(S. Yatham, Shalini Sivathasan, R. Yoon, Tricia L. da Silva, A. Ravindran, 2017, Asian journal of psychiatry)
- Service deserts and service oases: Utilizing geographic information systems to evaluate service availability for individuals with autism spectrum disorder(A. Drahota, R. Sadler, C. Hippensteel, Brooke Ingersoll, Lauren Bishop, 2020, Autism)
- Parents’ strategies for home educating their children with Autism Spectrum Disorder during the COVID-19 period in Zimbabwe(T. Majoko, Annah Dudu, 2020, International Journal of Developmental Disabilities)
This report synthesizes several core dimensions of false-information research. First, at the theoretical level, it establishes concept definitions and a taxonomic system under the "information disorder" framework. Second, it examines the new security challenges posed by generative AI and deepfake technologies. Third, it surveys the technical frontier of automated detection using machine learning and network analysis. Fourth, through empirical studies in medicine and public health, it documents the social harms of false information. Finally, from a governance perspective, it evaluates the effectiveness of fact-checking mechanisms and analyzes users' sharing motivations and group susceptibility across cultural contexts. Overall, the research shows an interdisciplinary shift from conceptual clarification toward an equal emphasis on technical governance and socio-psychological intervention.
Total: 101 related publications.
Scholars, politicians, and journalists have raised alarm over the potential for AI-generated photos, video, and audio - often referred to as deepfakes - to reduce trust in one another and our institutions. Despite these clarion calls, little empirical work exists on how deepfakes are being used to harm individuals outside of non-consensual intimate imagery (NCII). This research provides a preliminary analysis of 50 wide-ranging incidents of deepfake harm. We find that the most common types of harm are relational, systemic, financial, and emotional. Apart from AI-generated NCII, the most prevalent uses of deepfakes to cause harm were instances of mis- and disinformation, fraud, and misrepresentation of or stereotyping about marginalized groups (e.g., women and racial minorities). We conclude with recommendations for future work and discuss potential challenges in identifying, quantifying, and preventing harm caused by deepfakes both online and off.
A massive “infodemic” developed in parallel with the global COVID-19 pandemic and contributed to public misinformation at a time when access to quality information was crucial. This research aimed to analyze the science and health-related hoaxes that were spread during the pandemic with the objectives of (1) identifying the characteristics of the form and content of such false information, and the platforms used to spread them, and (2) formulating a typology that can be used to classify the different types of hoaxes according to their connection with scientific information. The study was conducted by analyzing the content of hoaxes which were debunked by the three main fact-checking organizations in Spain in the three months following WHO’s announcement of the pandemic (N = 533). The results indicated that science and health content played a prominent role in shaping the spread of these hoaxes during the pandemic. The most common hoaxes on science and health involved information on scientific research or health management, used text, were based on deception, used real sources, were international in scope, and were spread through social networks. Based on the analysis, we proposed a system for classifying science and health-related hoaxes, and identified four types according to their connection to scientific knowledge: “hasty” science, decontextualized science, badly interpreted science, and falsehood without a scientific basis. The rampant propagation and widespread availability of disinformation point to the need to foster media and scientific caution and literacy among the public and increase awareness of the importance of timing and substantiation of scientific research. The results can be useful in improving media literacy to face disinformation, and the typology we formulate can help develop future systems for automated detection of health and science-related hoaxes.
ABSTRACT Although the widespread application of corrective information has been found to lower the credibility of misinformation, there may be important sources of resistance among social media users that potentially limit the effectiveness of fact-checking, warning messages, and community-based verifications. Yet, to date, we lack an inductive and context-bound understanding of users’ responses to these different applications, and the reasons why users distrust or avoid corrections online. Against this backdrop, this paper relies on an in-depth qualitative content analysis of responses to different forms of corrective information on Facebook, Twitter, and TikTok. The study’s main findings inform a typology of resistance consisting of (1) expressing doubts on the selection biases of corrective information; (2) challenging the evidence and conclusions of corrective information; (3) blaming the correction for being biased and/or partisan and (4) labeling the correction or intervention as disinformation itself. The implications for journalism practice and content moderation are discussed.
As technology evolves rapidly and online content—particularly on social media—is widely consumed, the spread of misinformation, disinformation, fake news, and deepfakes has become a critical concern. While numerous studies recognize the severity of these phenomena and their negative societal impact, significant conceptual ambiguities persist. To address these gaps, this research integrates insights from relevant scholarly literature and in-depth interviews with professional journalists to refine the conceptual frameworks of misinformation, disinformation, fake news, and deepfakes, clarifying their distinctions. It may contribute to communication research by enhancing the conceptual understanding of key media ecosystem concepts and guiding strategies for media management and literacy development.
The emergence of generative artificial intelligence (GenAI) has exacerbated the challenges of misinformation, disinformation, and mal-information (MDM) within digital ecosystems. These multi-faceted challenges demand a re-evaluation of the digital information lifecycle and a deep understanding of its social impact. An interdisciplinary strategy integrating insights from technology, social sciences, and policy analysis is crucial to address these issues effectively. This article introduces a three-tiered framework to scrutinize the lifecycle of GenAI-driven content from creation to consumption, emphasizing the consumer perspective. We examine the dynamics of consumer behavior that drive interactions with MDM, pinpoint vulnerabilities in the information dissemination process, and advocate for adaptive, evidence-based policies. Our interdisciplinary methodology aims to bolster information integrity and fortify public trust, equipping digital societies to manage the complexities of GenAI and proactively address the evolving challenges of digital misinformation. We conclude by discussing how GenAI can be leveraged to combat MDM, thereby creating a reflective cycle of technological advancement and mitigation.
Recent challenges to scientific authority in relation to the COVID pandemic, climate change, and the proliferation of conspiracy theories raise questions about the nature of knowledge and conviction. This article considers problems of social epistemology that are central to current predicaments about popular or public knowledge and the status of science. From the perspective of social epistemology, knowing and believing are not simply individual cognitive processes but based on participation in social systems, networks, and niches. As such, knowledge and conviction can be understood in terms of the dynamics of epistemic communities, which create specific forms of authority, norms, and practices that include styles of reasoning, habits of thought and modes of legitimation. Efforts to understand the dynamics of delusion and pathological conviction have something useful to teach us about our vulnerability as knowers and believers. However, this individual psychological account needs to be supplemented with a broader social view of the politics of knowledge that can inform efforts to create a healthy information ecology and strengthen the civil institutions that allow us to ground our action in a well-informed picture of the world oriented toward mutual recognition, respect, diversity, and coexistence.
No abstract available
The article herein examines the multifaceted challenges of misinformation and disinformation in the media landscape, with a focus on strategies to enhance media literacy among adults. The primary objective of this study is to examine the prevalence, characteristics, and impact of disinformation and misinformation in media within the internet society, ultimately contributing to developing targeted educational programs and policy recommendations. To achieve this, a qualitative research design was carried out to explore the views and the broader societal experiences in media-related challenges. The research design utilized thematic analysis of data collected from focus groups and expert interviews, ensuring the representation of diverse perspectives. By focusing on the information landscape in Cyprus and Greece, the present article aims to address the unique local challenges and contribute to the literature gap. The findings reveal the critical importance of tailored educational programs and the cultivation of critical thinking skills in fostering media literacy and combating false information in an effort to put together in a unique study the various experiences, opinions, and needs of individuals who seek to navigate successfully in an information-rich world.
With the rise of social media, platforms like TikTok have become key providers of information. However, the low threshold for content generation has contributed to the widespread distribution of misinformation and deception. This study investigates the contrasts between misinformation and disinformation, emphasizing their impact during the COVID-19 pandemic. Through content analysis, the study explores how misinformation about vaccines spreads on TikTok and how the platform's algorithm affects its reach. The data reveal that misinformation often emerges from ignorance, while disinformation is purposefully manufactured to affect public opinion. TikTok has introduced fact-checking and reporting methods; however, issues remain owing to the platform's viral nature. The study concludes that to minimize the consequences of misinformation and disinformation on social media, a multifaceted strategy including platform control, outside monitoring, and better media literacy is required.
The shift of activism from the real world to the digital world has intensified year by year, driven by social phenomena at both national and global levels. The existence of misinformation, malinformation, and disinformation certainly distorts the general public's ability to participate in digital activism activities. This study aims to examine the factors that motivate the educated general public to engage in digital activism activities on Instagram, as well as their discretion regarding the presence of misinformation, malinformation, and disinformation. Through a semi-structured interview process, it was found that the factors motivating individuals to participate in digital activism include the desire to increase public awareness or vigilance, as well as to educate the public about social issues considered important. In responding to non-credible information, the criticality and willingness of individuals to conduct research on social issues they wish to address play a crucial role.
This panel will present and discuss the issues surrounding deception, misinformation, and disinformation using a social informatics perspective. The panel is sponsored by ASIST SIG‐SI.
Abstract In this article, we examine how journalists try to uphold ideals of objectivity, clarity, and epistemic authority when using four overlapping terms: fake news, junk news, misinformation, and disinformation. Drawing on 16 qualitative interviews with journalists in Denmark, our study finds that journalists struggle to convert the ideals of clarity and objectivity into a coherent conceptual practice. Across interviews, journalists disagree on which concepts to use and how to define them, accusing academics of producing too technical definitions, politicians of diluting meaning, and journalistic peers of being insufficiently objective. Drawing on insights from journalism scholarship and rhetorical argumentation theory, we highlight how such disagreements reveal a fundamental tension in journalistic claims to epistemic authority, causing a continuous search for unambiguous terms, which in turn produces the very ambiguity that journalists seek to avoid.
Purpose: This paper aims to enhance the understanding of the distinct origins, mechanisms, growth paths and societal impacts of misinformation and disinformation through the theoretical lens of Niklas Luhmann's social systems theory, particularly focusing on structural coupling and penetration. Design/methodology/approach: This paper is based on a conceptual study that investigates the phenomena of mis-/disinformation based on reviews of the literature on social systems theory, particularly focusing on structural coupling and penetration. Findings: This theoretical analysis has led to the postulations that mis-/disinformation would cause social conflicts through divergent routes and that they do not necessarily have negative consequences in society. That is, conflicts or communication of contradictions serve for the reproduction and change in social systems and, furthermore, serve society as an immune mechanism. We speculate that similarities in the manifestation of mis-/disinformation could stem from the influence of amplifiers, such as moral intervention. Nevertheless, we posit that disinformation stemming from intentional penetration is more likely to cause societal dysfunction than misinformation, leading to conflict overload, polarized information ecosystems and potential system failures. Originality/value: It provides a broader theoretical perspective for a better understanding of the roots and mechanisms of mis-/disinformation and their social consequences. It also engages with unresolved debates over structural couplings and penetration, showing how distinguishing these concepts enhances analytical clarity and explanatory power.
The popular assumption that mis- and disinformation are distinguishable from true information based on easy-to-identify content features is challenged in an online context where multiple claims of truthfulness compete for legitimacy. When conventional and alternative narratives both rely on seemingly objective and fact-based truth claims, it is difficult for citizens to separate false from true information. In this setting, we rely on an inductive qualitative analysis of social media and alternative media platforms to explore how mis- and disinformation refer to expertise and objectivity. Our main findings suggest that expertise and objectivity in mis- and disinformation can be legitimized by (1) quoting or involving message-congruent alternative experts; (2) selectively decontextualizing or quoting established experts; (3) contrasting ‘honest’ alternative experts/critical citizens to ‘dishonest’ established experts; (4) emphasizing people-centric expertise, common sense, and critical thinking as foundations of truth-telling; and (5) referring to visual information and lived experiences as direct reflections of reality. The typology aims to inform empirical research on the detection of mis- and disinformation and can be applied in the design of interventions to raise awareness about how false information signals legitimacy.
No abstract available
In contemporary society, the increased reliance on social media as a vital news source has facilitated the spread of disinformation that has potential polarising effects. Disinformation, false information deliberately crafted to deceive recipients, has escalated to the extent that it is now acknowledged as a significant cybersecurity concern. To proactively tackle this issue, and minimise the risk of negative outcomes associated with disinformation, this research presents a typology of disinformation intentionality and impact (DII) to understand the intentionality and impact of disinformation threats. The typology draws upon information manipulation theory and risk management principles to evaluate the potential impact of disinformation campaigns with respect to their virality and polarising impact. The intentionality of disinformation spread is related to its believability among susceptible consumers, who are likely to propagate the disinformation to others if they assess it to be believable. Based on the dimensions of intentionality and impact, the DII typology can be used to categorise disinformation threats and identify strategies to mitigate its risk. To illustrate its utility for evaluating the risk posed by disinformation campaigns, the DII typology is applied to a case study. We propose risk mitigation strategies as well as recommendations for addressing disinformation campaigns spread through social media platforms.
No abstract available
In the digital era, social media has become a vital platform for information dissemination. However, the prevalence of misinformation has profoundly impacted societal cognition and public opinion. Misinformation is defined as intentionally distributed content that is inaccurate or misleading, taking forms such as rumors, fake news, and pseudoscientific claims. The features of social media, including user-generated content, dynamic network structures, and complex recommendation algorithms, have fueled the rapid spread of misinformation. This study explores the classification and characteristics of misinformation, as well as its propagation mechanisms on social platforms, aiming to reveal how misinformation influences public perception. The research also proposes strategies to mitigate the adverse effects of misinformation, thereby maintaining a healthy information environment.
Introduction. In the modern information environment, characterized by the rapid dissemination of news through online media and social networks, information manipulation has become a common phenomenon. During crisis situations, such as military conflicts or epidemics, the amount of misinformation, fake news, and propaganda increases, negatively impacting public opinion and trust in the media. This research aims to study various manipulations in online media and social networks, their mechanisms of influence, and their consequences for society. Relevance and Purpose. The relevance of the topic is determined by the need to understand the mechanisms of manipulation used in media to shape public opinion. The purpose of the article is to characterize the features of concepts related to manipulation and misinformation in modern media, explain the mechanisms of influence, and the consequences of these phenomena. Methodology. The study employs several methods: descriptive, classification, and content analysis. An analysis of Ukrainian media content for 2024 was conducted regarding publications of a manipulative nature. Additionally, a survey of 250 respondents was carried out to identify their perceptions of unreliable information. Results. The results indicate that manipulation is a targeted psychological influence on the audience’s consciousness. The main types of manipulative information include disinformation, malinformation, misinformation, fake news, propaganda, IPSO (information-psychological operations), and “zhyza” (paid-for news). Disinformation refers to deliberately created false information, while malinformation contains true facts presented in a misleading context. Propaganda is a broader concept that includes both true and distorted information aimed at achieving specific goals. 
The article’s conclusions emphasize the necessity of enhancing media literacy among the population and the importance of critical thinking in perceiving information from online sources. The study also revealed that manipulative materials often employ emotional influence and distortion of facts to shape certain views within society. Important aspects also include the role of memes as a manipulation tool among youth.
Purpose: During the postpandemic era, owing to the widespread integration of technology, a greater abundance of information is circulating among young consumers compared to any previous period. Consequently, there exists a possibility that the disseminated information may not be accurate and ultimately prove to be fake. The purpose of this study is to conceptualize fake news, the definition and drivers of fake news from the perspective of young consumers in the postpandemic period. Design/methodology/approach: A qualitative study was undertaken in the current study. A total of 30 interviews were conducted utilizing semistructured questionnaires. The interviews were audio recorded and subsequently transcribed. The data was analyzed using the Gioia methodology. Findings: The study proposes a definition of fake news from the perspective of young consumers. Further, drawing on attribution theory, the three categories of reasons for sharing fake news were delineated: content related, source related and user related. Practical implications: Drawing on the findings of the study, policymakers and other stakeholders working on the issues of fake news can acquaint themselves with the underlying reasons. Furthermore, they can devise policies to prevent the sharing of fake news. Social implications: It is important for practitioners and society to understand the reasons behind the sharing of fake news among young consumers to combat the spread. Originality/value: The present study will contribute to the literature by understanding the perspective of young consumers who intentionally or unintentionally share fake news. Additionally, attribution theory is used in the context of fake news to understand the dissemination behavior.
Falsification and manipulation of information, using it for image, material or political gain, is a significant phenomenon of contemporary social communication, and no doubt, its scale and significance have made fake news the subject of numerous studies. The purpose of this article is to analyse the attitudes of adult Poles toward fake news, based on the results of a qualitative study conducted as part of the national Infostrateg programme. The study was designed to identify respondents' knowledge and attitudes about fake news, their awareness of the dangers of information manipulation and how they deal with disinformation. A semi-structured individual interview method was used, which made it possible to capture subtle aspects of the respondents' experiences. Data analysis was carried out according to a semi-inductive model, using open coding and comparative analysis. Sampling was based on the criterion of maximum variation, which made it possible to capture a variety of perspectives on fake news. The results indicate that fake news is perceived as an integral part of the modern infosphere, and its presence is widely accepted, although it evokes distrust and caution. Respondents consider them a tool of social disintegration, manipulation of worldviews and network marketing. They show negative emotions toward the phenomenon, while declaring high resistance to information manipulation. The meaning attributed to fake news is reduced to four coherent categories: FN as the creation of a falsified image of reality; as a tool of social disintegration; as a tool for changing or strengthening worldviews; and as a tool of network marketing.
The spread of misinformation during the COVID-19 pandemic raised widespread concerns about public health communication and media reliability. In this study, we focus on these issues as they manifested in Romanian-language media and employ Large Language Models (LLMs) to classify misinformation, with a particular focus on super-narratives—broad thematic categories that capture recurring patterns and ideological framings commonly found in pandemic-related fake news, such as anti-vaccination discourse, conspiracy theories, or geopolitical blame. While some of the categories reflect global trends, others are shaped by the Romanian cultural and political context. We introduce a novel dataset of fake news centered on COVID-19 misinformation in the Romanian geopolitical context, comprising both annotated and unannotated articles. We experimented with multiple LLMs using zero-shot, few-shot, supervised, and semi-supervised learning strategies, achieving the best results with an LLaMA 3.1 8B model and semi-supervised learning, which yielded an F1-score of 78.81%. Experimental evaluations compared this approach to traditional Machine Learning classifiers augmented with morphosyntactic features. Results show that semi-supervised learning substantially improved classification results in both binary and multi-class settings. Our findings highlight the effectiveness of semi-supervised adaptation in low-resource, domain-specific contexts, as well as the necessity of enabling real-time misinformation tracking and enhancing transparency through claim-level explainability and fact-based counterarguments.
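The semi-supervised strategy described in the abstract above (train on a labeled seed, pseudo-label confident predictions on unlabeled articles, then retrain) can be illustrated without any LLM. The sketch below is a toy self-training loop over super-narrative labels using a simple token-overlap scorer; the narrative labels, documents, and the 0.5 confidence threshold are all invented for illustration and have nothing to do with the paper's LLaMA 3.1 setup.

```python
import re

def tokens(text):
    # Crude tokenizer standing in for real preprocessing
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap_score(doc, profile):
    # Fraction of the document's tokens that appear in the class profile
    tk = tokens(doc)
    return len(tk & profile) / len(tk) if tk else 0.0

def train_profiles(labeled):
    # "Training" here is just pooling each class's vocabulary
    profiles = {}
    for doc, label in labeled:
        profiles.setdefault(label, set()).update(tokens(doc))
    return profiles

def self_train(labeled, unlabeled, threshold=0.5, rounds=3):
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        profiles = train_profiles(labeled)
        keep = []
        for doc in pool:
            scores = {c: overlap_score(doc, p) for c, p in profiles.items()}
            best = max(scores, key=scores.get)
            # Pseudo-label only confident predictions, as in self-training
            if scores[best] >= threshold:
                labeled.append((doc, best))
            else:
                keep.append(doc)
        pool = keep  # low-confidence docs wait for a later round
    return train_profiles(labeled), labeled

# Invented seed and pool (hypothetical super-narratives)
seed = [
    ("vaccines cause harm avoid the jab", "anti-vaccination"),
    ("secret elites control the world order", "conspiracy"),
]
pool = [
    "avoid the jab it causes harm",
    "elites control everything in secret",
]
profiles, pseudo_labeled = self_train(seed, pool)
```

The loop grows the labeled set only with predictions above the confidence threshold, which is the essential mechanism that made the semi-supervised setting outperform purely supervised baselines in low-resource settings like the paper's.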
In the current digitized world, digital media forms like blogs, online news media, social media, etc., have replaced conventional news transmission platforms like newspapers and magazines. This research aims to examine the format, structure, and themes of fake news on social media. This research paper employed a qualitative content analysis methodology. The study collected and analyzed data from fifty viral fake news stories. The dataset only extends from January 2025 to June 2025. The goal of this study is to help individuals better understand fake news ecosystems by showing how stories are made and shared on social media. It also shows how important it is to teach people how to use technology responsibly, critically, and proactively in order to lessen the effects of fake information that is spreading in the public domain.
This paper presents an analysis on information disorder in social media platforms. The study employed methods such as Natural Language Processing, Topic Modeling, and Knowledge Graph building to gain new insights into the phenomenon of fake news and its impact on critical thinking and knowledge management. The analysis focused on four research questions: 1) the distribution of misinformation, disinformation, and malinformation across different platforms; 2) recurring themes in fake news and their visibility; 3) the role of artificial intelligence as an authoritative and/or spreader agent; and 4) strategies for combating information disorder. The role of AI was highlighted, both as a tool for fact-checking and building truthiness identification bots, and as a potential amplifier of false narratives. Strategies proposed for combating information disorder include improving digital literacy skills and promoting critical thinking among social media users.
This paper traces the conceptual evolution of “information disorder”, examining how the term has developed from earlier concerns about propaganda and media manipulation to a more complex understanding in the digital age. Reviewing key literature and policy debates, the study explores how disinformation, misinformation, and malinformation have been defined and distinguished across academic, governmental, and civil society contexts. The analysis highlights how evolving technological, political, and cultural forces have shaped the framing and governance of information disorder. Ultimately, the study argues for a historically grounded and multidisciplinary approach to understanding information disorder as a persistent and adaptive phenomenon.
This article offers an outline of the terms propaganda and public relations before addressing the contemporary phenomenon of fake news. We identify commonalities and differences between these three manipulative practices and show that, rather than being exceptions, they constitute regular techniques of governance in both liberal democracies and more authoritarian systems of rule. Developing a set of family resemblances, we then show that propaganda, PR, and fake news belong to the overarching phenomenon of information disorder and are mainly distinguished by their reliance upon different aesthetic conventions, dissemination technologies, and business models.
ABSTRACT This paper joins a growing effort within mis/disinformation research to better address the transnational spread of misinformation and, in particular, the impact of political mis/disinformation on historically marginalized and immigrant communities. While misinformation spreads across cultural, sociolinguistic, and geo-political contexts, it impacts communities differently according to preexisting power structures and information resources. Through focus groups with Vietnamese Americans across two generations and several geographic locations, we explore the complexities of misinformation within one such immigrant community. Findings highlight how a prevalence of intergenerational divides in political information seeking, lasting historical and political traumas of immigration, and language barriers underpin the saliency and impact of misinformation for Vietnamese Americans. Further, we explore how misinformation impacts political engagement, highlighting the consequences of misinformation at a familial and community-level. This research highlights the need for researchers of misinformation to better attend to the inequities of informational access and the vulnerabilities of already marginalized communities as targets of problematic information and information disorder.
The multiplicity of infospheres in a country, especially in countries with a significant proportion of minorities, creates polarization and distrust towards state institutions. This article addresses the problem by exploring the Estonian Russian-speaking minority’s attitudes towards news media content regarding fake news and information disorder. Semi-structured interviews were conducted with Russian native speakers living in Estonia (N=29), using stimulus materials to induce reactions related to elements of trust in the materials. The results show that interviewees have diverse media preferences and a critical eye for the news, place more trust in Estonian Russian-language media, and are largely able to recognize fake news and information disorder. The study challenges the widespread understanding that the Estonian Russian-speaking minority lives in an isolated infosphere of Russia. I argue that more attention should be drawn to the information quality in the news aimed at this audience.
Attention-deficit/hyperactivity disorder is the most frequent mental disorder among school-age children. This condition has given rise to large mediatic coverage, which contributed to the shaping of the lay public’s perceptions. We therefore conducted two studies on the way attention-deficit/hyperactivity disorder was portrayed in TV programs and the lay-public press in France between 1995 and 2015, but the growing part played by the Internet required an additional study to analyze and compare the scientific material which is available to the French lay public depending on the source of information used. We studied the first 50 French websites dedicated to attention-deficit/hyperactivity disorder as indexed by the Google® search engine using a structured quantitative content analysis for the web. We illustrate our results with excerpts derived from the websites. The conceptions of attention-deficit/hyperactivity disorder available on the Internet are essentially biomedical and comprise an important level of scientific distortion. Findings concerning other mass media such as television programs and the press also demonstrate massive and systematic distortions caused by the role of experts and the pharmaceutical industry. Furthermore, the most consulted media present the highest level of scientific distortions.
No abstract available
People are increasingly exposed to conflicting health information and must navigate this information to make numerous decisions, such as which foods to consume, a process many find difficult. Although some consumers attribute these disagreements to aspects related to uncertainty and complexity of research, many use a narrower set of credibility-based explanations. Experts’ views on disagreements are underinvestigated and lack explicit identification and classification of the differences in causes for disagreement. Consequently, there is a gap in existing literature to understand the range of reasons for these contradictions. Combining the findings from a literature study and expert interviews, a taxonomy of disagreements was developed. It identifies 10 types of disagreement classified under three dimensions: informant-, information- and uncertainty-related causes for disagreement. The taxonomy may assist with adoption of more effective strategies to deal with conflicting information and contributes to research and practice of science communication in the context of disagreement.
Devising a framework for assessing the subjectivity and objectivity of information taxonomy projects
No abstract available
ABSTRACT The main purpose of this research was to identify misinformation and other related online falsehoods, i.e. dis-/malinformation, fake news, and rumor spreading in social media in Bangladesh, along with their threat and impact on human lives. Using a qualitative approach, the author randomly selected around 26 cases that happened in Bangladesh over the last 15 years. Both primary and secondary information resources (online and print newspapers, news archives, websites, blogs, TV channels, various social networking tools like Facebook, Twitter, etc.; a series of conversations with journalists) were used to collect the cases that had spread as misinformation. The cases were then categorized into various aspects such as political, health, religious, social, crime, and entertainment. A number of research papers were reviewed via Google, ResearchGate, and Google Scholar, and newspapers were also scanned to find out the associated effects of each case on society. Findings showed that in Bangladesh the most adverse ramifications of such misinformation occurred in the religious and political spheres, with consequences ranging from social insecurity to loss of life. Moreover, health misinformation has led people to decline health measures and adopt unproven treatments, which poses a great threat to public health.
Social media has become a popular means for people to consume and share the news. At the same time, however, it has also enabled the wide dissemination of fake news, that is, news with intentionally false information, causing significant negative effects on society. To mitigate this problem, the research of fake news detection has recently received a lot of attention. Despite several existing computational solutions for the detection of fake news, the lack of comprehensive and community-driven fake news data sets has become one of the major roadblocks. Not only are existing data sets scarce, they do not contain the myriad of features often required in such studies, including news content, social context, and spatiotemporal information. Therefore, in this article, to facilitate fake news-related research, we present a fake news data repository, FakeNewsNet, which contains two comprehensive data sets with diverse features in news content, social context, and spatiotemporal information. We present a comprehensive description of the FakeNewsNet, demonstrate an exploratory analysis of the two data sets from different perspectives, and discuss the benefits of the FakeNewsNet for potential applications in fake news study on social media.
Currently, the issue of the spread of fakes in information flows is more acute than ever – false information that is passed off as truth in order to obtain benefits, attract traffic, destructively influence the public consciousness, etc. In this regard, the relevance of this article is explained, on the one hand, by the spread of fakes in the media space, and on the other hand, by the insufficient theoretical development of this topic and only emerging interest in it. The article analyzes the essence of fakes, their goals and features, types of fake news, technologies that are used to create fake content, and also examines the influence of fakes on public consciousness. The article provides data from a survey of 586 respondents and a statistical analysis of the results. As a result, a conclusion is made about the need for further thorough and in-depth study of this topic in order to reduce the negative impact of fakes on the social and political stability of society in such a difficult time for the country. Nowadays, information is one of the most powerful tools for influencing people's opinions, beliefs and ideas, and for manipulating their consciousness. Modern people live in an information flow, the sources of which are the media, social networks, the Internet, news and, of course, fakes. News fakes are not an innovation of the 21st century; they have existed since time immemorial, but it is in recent years, with the development and spread of information technology and artificial intelligence, that they have acquired truly global proportions. The situation is complicated by the spread and active use of modern information technologies, in particular artificial intelligence, which makes it possible to create fakes so realistic that they are almost impossible to distinguish from the truth.
Social media is becoming increasingly popular for news consumption due to its easy access, fast dissemination, and low cost. However, social media also enables the wide propagation of “fake news,” i.e., news with intentionally false information. Fake news on social media can have significant negative societal effects. Identifying and mitigating fake news also presents unique challenges. To tackle these challenges, many existing research efforts exploit various features of the data, including network features. In essence, a news dissemination ecosystem involves three dimensions on social media, i.e., a content dimension, a social dimension, and a temporal dimension. In this chapter, we will review network properties for studying fake news, introduce popular network types, and propose how these networks can be used to detect and mitigate fake news on social media.
During the COVID‐19 crisis, fake news, conspiracy theories, and backlash against specific groups emerged and were largely diffused via social media. This phenomenon has been described as an “infodemic,” and this study examined the characteristics of the infodemic on Twitter. Typological attributes of the infodemic Twitter network presented the features of “community clusters.” The frequently shared domains and URLs demonstrated coherent characteristics within the network. Top domains and URLs were trustworthy information sources, popular blogs, and public health research institutions. Interestingly, the most shared conversational content of the network was a COVID‐19-related incident that occurred at a church in Korea, based on misinformation and false belief.
The pervasiveness of health information in social media has led to a modern misinformation crisis, also known as a misinfodemic. Misinfodemics have upended public health activities as clearly evident during the COVID-19 pandemic. The objective of this study is to characterize social media content and information sources using theory-driven health behavior and psychology constructs to better understand the motifs of misinformation and their role in the dissemination of health (mis)information in Twitter posts. We analyzed 1,400 randomly selected tweets related to COVID-19 to ascertain four important variables, what is the tweet about (content), how is it structured (linguistic features), who is tweeting (source), and what is the reach of the tweet (dissemination). Results showed there was a significant difference between themes expressed, health beliefs manifested, and observed linguistic patterns in true and false information. Implications for informatics-driven digital health utilities, such as theory-informed knowledge models and context-aware risk communications, are discussed.
Social networks scaffold the diffusion of information on social media. Much attention has been given to the spread of true vs. false content on online social platforms, including the structural differences between their diffusion patterns. However, much less is known about how platform interventions on false content alter the engagement with and diffusion of such content. In this work, we estimate the causal effects of Community Notes, a novel fact-checking feature adopted by X (formerly Twitter) to solicit and vet crowd-sourced fact-checking notes for false content. We gather detailed time series data for 40,074 posts for which notes have been proposed and use synthetic control methods to estimate a range of counterfactual outcomes. We find that attaching fact-checking notes significantly reduces the engagement with and diffusion of false content. We estimate that, on average, the notes resulted in reductions of 45.7% in reposts, 43.5% in likes, 22.9% in replies, and 14.0% in views after being attached. Over the posts' entire lifespans, these reductions amount to 11.4% fewer reposts, 13.0% fewer likes, 7.3% fewer replies, and 5.7% fewer views on average. In reducing reposts, we observe that diffusion cascades for fact-checked content are less deep, but not less broad, than synthetic control estimates for non-fact-checked content with similar reach. This structural difference contrasts notably with differences between false vs. true content diffusion itself, where false information diffuses farther, but with structural patterns that are otherwise indistinguishable from those of true information, conditional on reach.
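The counterfactual logic behind the synthetic control estimates above can be shown in miniature: fit donor weights on the pre-treatment period, then compare post-treatment engagement against the weighted donor trajectory. All numbers below are made up, and the grid search over a single weight stands in for the constrained optimization a real synthetic control analysis would use over many donor posts.

```python
# Hypothetical cumulative engagement series; a fact-checking note
# is attached to the treated post at t = 3 (T0 = pre-period length).
treated = [10, 20, 30, 25, 28, 30]
donors = [
    [11, 21, 31, 45, 60, 76],  # similar non-fact-checked post A
    [9, 19, 29, 42, 55, 70],   # similar non-fact-checked post B
]
T0 = 3

def sse(w):
    # Pre-treatment fit error for donor weights (w, 1 - w)
    return sum((treated[t] - (w * donors[0][t] + (1 - w) * donors[1][t])) ** 2
               for t in range(T0))

# Grid-search the single donor weight on the pre-period only
w_star = min((i / 100 for i in range(101)), key=sse)

# Counterfactual: what engagement "would have been" without the note
synthetic = [w_star * donors[0][t] + (1 - w_star) * donors[1][t]
             for t in range(len(treated))]
effect = [treated[t] - synthetic[t] for t in range(T0, len(treated))]
```

With these toy numbers the fitted counterfactual keeps growing while the treated series stalls after the note, so every post-treatment effect is negative, mirroring the direction of the reductions the study reports.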
Detecting false information on social media is critical in mitigating its negative societal impacts. To reduce the propagation of false information, automated detection provides scalable, unbiased, and cost-effective methods. However, three potential research areas are identified which, once solved, would improve detection. First, current AI-based solutions often provide a uni-dimensional analysis of a complex, multi-dimensional issue, with solutions differing based on the features used. Furthermore, these methods do not account for the temporal and dynamic changes observed within a document's life cycle. Second, there has been little research on the detection of coordinated information campaigns and on understanding the intent of the actors and the campaign. Third, there is a lack of consideration of cross-platform analysis, with existing datasets focusing on a single platform, such as X, and detection models designed for a specific platform. This work aims to develop methods for effective detection of false information and its propagation. To this end, we first propose the creation of an ensemble multi-faceted framework that leverages multiple aspects of false information. Second, we propose a method to identify actors and their intent when they work in coordination to manipulate a narrative. Third, we aim to analyse the impact of cross-platform interactions on the propagation of false information via the creation of a new dataset.
In the digital era, social networks serve as critical platforms for information dissemination but are also plagued by the spread of false information, which can undermine public trust and incite societal discord. This study examines the dynamics of false information dissemination on social networks, including its types, influential factors, and detection and management strategies. We explore various forms of false information, such as impersonation, misleading content, and AI-generated forgeries, and analyze the role of user interactions, network topology, and macro factors in the spread of misinformation. Detection methods are reviewed, highlighting advancements in technologies like deep learning, and management strategies are proposed, including user behavior regulation and dissemination path control. Challenges related to legal, ethical, and privacy issues are discussed, alongside the complexities of user behavior and future research directions. The findings underscore the need for comprehensive, adaptive approaches to safeguard the integrity of online information ecosystems.
With the wide application of generative artificial intelligence (AIGC) technology in the fields of text generation, image synthesis and voice forgery, the efficiency of content production has been significantly improved, but this also brings new social governance challenges such as the proliferation of false information and the intensification of public opinion manipulation. This paper focuses on the problem of false content generation in the context of AIGC, and analyses the types of risks, dissemination paths and public perception effects triggered by the problem in the public opinion arena, drawing on typical cases. It further explores the roles and limitations of the government and platforms in risk identification, information verification, algorithmic traceability and response mechanisms, and constructs a triadic interaction model of "generation risk-public perception-governance mechanism". The study proposes to establish a multifaceted, coordinated algorithmic governance system, including the construction of a technical early-warning mechanism, the implementation of an AI content traceability and labelling system, the enhancement of public digital literacy, and the consolidation of platform responsibility. The study shows that the risks posed by AIGC are highly realistic and spread rapidly, and that traditional means of public opinion governance urgently need to shift toward an algorithm-centred, systematic, synergistic mechanism. This paper is of theoretical significance and practical value for reconstructing the public trust mechanism and promoting the modernisation of national public opinion governance.
As of 2024, precise figures regarding the proportion of false information on Indian social media remain elusive. However, past data sheds light on the prevalence of misinformation. A January 2022 poll found that 22% of Indian social media users admitted to being deceived by fake news online. Another survey revealed that 45% encountered entirely fabricated stories in the Indian media, often with political or economic motives. Additionally, an Oxford University study noted that 54% of Indians relied on social media for truthful information. These figures, though estimates, underscore the significant presence of false information. Moreover, various definitions of “fake information” exist, ranging from dormant accounts to bot-driven spamming, highlighting the complexity of the issue. The dissemination of inaccurate information on social media platforms poses a substantial threat to online discourse integrity. This paper proposes a framework for detecting fake information using common Python libraries such as NumPy, Pandas, Matplotlib, NLTK, and Joblib, together with LDA topic modeling and machine learning techniques. Preprocessing of textual data employs NLTK for natural language processing, followed by topic modeling with LDA to uncover latent themes. Machine learning algorithms integrated with NumPy and Pandas extract features and train models for post classification. Visualization tools like Matplotlib and Seaborn aid in data exploration and result assessment. This interdisciplinary approach demonstrates promising capabilities in identifying false information on social media platforms, contributing to ongoing efforts to combat online disinformation.
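The pipeline this abstract describes (preprocess text, topic-model with LDA, then classify posts) can be sketched roughly as follows. This is a minimal illustration on invented toy posts with invented labels, not the paper's implementation; it substitutes scikit-learn's built-in English stop-word list and `LatentDirichletAllocation` for the NLTK and Joblib tooling the authors name, since those suffice to show the flow.

```python
# Minimal sketch (toy data, hypothetical labels): bag-of-words ->
# LDA topic proportions -> classifier, loosely following the described pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

posts = [
    "miracle cure doctors hate revealed shocking secret",
    "health ministry releases official trial statistics today",
    "shocking secret exposed miracle cure suppressed by elites",
    "government publishes audited vaccine trial results",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = genuine (invented for illustration)

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(posts)

# Each post becomes a distribution over latent topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)

clf = LogisticRegression().fit(topics, labels)
preds = clf.predict(topics)
```

A real system would of course train and evaluate on held-out data and add the NLTK preprocessing steps (tokenization, stop-word removal, lemmatization) that the paper mentions.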
Alarmed by the oversimplifications related to the ‘fake news’ buzzword, researchers have started to unpack the concept, defining diverse types and forms of misleading news. Most of the existing works in the area consider crucial the intent of the content creator in order to differentiate among different types of problematic information. This article argues for a change of perspective that, by leveraging the conceptual framework of sociocybernetics, shifts from exclusive attention to creators of misleading information to a broader approach that focuses on propagators and, as a result, on the dynamics of the propagation processes. The analytical implications of this perspective are discussed at a micro level (criteria to judge the falsehood of news and to decide to spread it), at a meso level (four possible relations between individual judgements and decisions), and at a macro level (global circulation cascades). The authors apply this theoretical gaze to analyse ‘fake news’ stories that challenge existing models.
Building on seven fundamental theoretical approaches to media influence and people’s behaviour toward information developed between the 1950s and 1980s—Two-Step Flow of Communication, Homophily, Encoding/Decoding, Uses and Gratifications, Arts of Poaching, Agenda-Setting, and Spiral of Silence—this paper puts into perspective the circulation of online content and the underlying mechanisms of disinformation in digital spheres. Our approach seeks to highlight the necessity, for theorists as well as practitioners, to consider fact-checking not only as a “false vs. true” verification mechanism, but as a communication process in and of itself.
As the commercial circulation of health content on social media continues to intensify, large volumes of fitness, nutrition, and wellness information lacking scientific grounding are repeatedly pushed to users, heightening the likelihood that psychologically susceptible individuals internalize distorted beliefs and engage in harmful practices. This study examines the mechanism through which exposure to such content influences psychological vulnerability, strengthens false health beliefs, shapes risky behavioral choices, and ultimately affects perceived health status. Using 482 valid survey responses, we conducted confirmatory factor analysis, structural equation modeling, bootstrap mediation tests, multigroup comparisons, and robustness checks to investigate these pathways. The findings show that exposure significantly increases psychological vulnerability, which further promotes endorsement of inaccurate beliefs and encourages risky health behaviors, leading to poorer health outcomes. All indirect effects were statistically supported, and the moderating influences of Government Support and avoidance tendencies revealed that individual and contextual factors can alter the strength of the mechanism. Robustness analyses demonstrated that the belief variable is indispensable, as removing it led to substantial declines in model fit, indicating that adverse outcomes arise not from isolated exposure but from the gradual reinforcement and internalization of misleading claims. These results clarify the psychological and behavioral processes through which misleading health information exerts its influence in digital environments and provide empirical grounding for regulatory strategies that seek to intervene in the formation and consolidation of erroneous health beliefs rather than relying solely on limiting content visibility.
Short video clips or reels have become the most convenient medium for sharing information on social media platforms. However, these can also be misused to circulate content that is false, unverified, or biased. The literature affirms that media consumed in any form has the potential to form or alter mass opinions. This research sought to unravel the intricate dynamics between online content, religious bias, and political perceptions. The study undertook an in-depth examination of the content of 50 reels circulating on social media, focusing on their intended effects on perceptions of religious minorities and political mind-shift among Indian users. Adhering to ethical considerations and transparency in reporting, a rigorous content analysis methodology was employed. The findings suggest that visual elements in the reels are intended to provoke religious division toward minorities, often exaggerating false information as true and associating Indian political figures with religious ideologies, and thus have tremendous potential to drastically shape public opinion. This study not only aligns with the current zeitgeist of digital communication but also underscores the broader implications of content dissemination on social media platforms.
Fake health news content, or health misinformation, on social networking sites (SNS) is a serious risk that has led to harmful consequences for individuals, communities, and broader society. The objective of this study is to understand how bogus health news spreads via social media and its impacts on public behaviour in Pakistan. The theoretical framework combines several media theories, including Agenda-Setting, Framing, and Uses and Gratification (U&G), with concepts like confirmation bias, filter bubbles, and echo chambers. The qualitative approach adopted encompasses in-depth semi-structured interviews with twenty individuals, comprising 10 professionals and 10 university students. The study findings indicate that emotional responses, confirmation bias, and algorithmic amplification all contribute to the rapid spread and acceptance of false information. It suggests that a multifaceted approach, combining reliable communication, digital literacy education, and community engagement, is urgently needed to foster critical thinking and resilience against the spread of health-related misinformation on social media platforms. Keywords: Fake health news, social media, confirmation bias, filter bubbles, echo chambers, algorithmic amplification
The article examines the phenomenon of disinformation in the digital age and the role of fact-checking in combating the spread of false information. Particular attention is paid to the analysis of user content using the example of the Uzbek fact-checking website Faktchecker.uz. The main methods of information verification used on the platform, as well as the impact of disinformation on public opinion and digital literacy of users are examined. The author analyzes examples of exposed fakes and their distribution in the media space of Uzbekistan. The algorithms of fact-checkers, their interaction with users and the level of audience trust in verified information are considered. The article emphasizes the importance of media literacy and critical thinking in the context of information overload. The findings of the study highlight the need for institutional support for fact-checking initiatives and the development of effective strategies to counter disinformation in Uzbekistan.
Fake news is a global media security challenge. In China, the scale of this problem is growing with the development of social networks as an element of information resources: fake materials are actively distributed in parallel with the introduction and popularization of digital technologies in the public Internet infrastructure. The competitiveness of the People's Republic of China in the international market and its political power lead hostile organizations and countries to try to destabilize the harmonious development of the country through such information warfare tools as fake news and psychological pressure on the audience, by exaggerating the danger of certain incidents or suppressing their positive aspects, as well as by hoaxes about events. The publication of false data on healthcare, politics, history, economics, international relations and other important areas of human activity results in mass riots, popular unrest, sabotage and the intimidation of users. The fake news phenomenon is complicated by the fact that artificial intelligence makes it possible to create and distribute fake content, including deepfakes, among thousands of users. Moreover, the material generated by neural networks disorients the audience, as most people are unable to determine the reliability and relevance of the news. The results of this study suggest that the specificity of fake news representation lies in the generation of textual and audiovisual false material through artificial intelligence, the product of which is almost indistinguishable in its characteristics and parameters from a reliable source (the website of a reputable professional publishing house) or person (a renowned journalist or expert) covering media content. The article used such methods as theoretical analysis of the scientific literature on the identified issues, the descriptive method, data aggregation (tabular method), component analysis, as well as systematization and interpretation of the obtained material.
Abstract Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health was actively blocking disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or data misinterpreted) with harmful intent. This study answers the opening question by reflecting on what is needed for us to honor public reason: reasonableness, a willingness to engage properly in public discourse, and trust in the institutions of liberal democracy.
The proliferation of misinformation presents a significant challenge in today’s information landscape, impacting various aspects of society. While misinformation is often confused with terms like disinformation and fake news, it is crucial to distinguish that misinformation involves, in most cases, inaccurate information without the intent to cause harm. In some instances, individuals unwittingly share misinformation, driven by a desire to assist others without thorough research. However, there are also situations where misinformation involves negligence, or even intentional manipulation, with the aim of shaping the opinions and decisions of the target audience. Another key factor contributing to misinformation is its alignment with individual beliefs and emotions. This alignment magnifies the impact and influence of misinformation, as people tend to seek information that reinforces their existing beliefs. As a starting point, some 56 papers containing ‘misinformation detection’ in the title, abstract, or keywords, marked as “articles”, written in English, published between 2016 and 2022, were extracted from the Web of Science platform and further analyzed using Biblioshiny. This bibliometric study aims to offer a comprehensive perspective on the field of misinformation detection by examining its evolution and identifying emerging trends, influential authors, collaborative networks, highly cited articles, key terms, institutional affiliations, themes, and other relevant factors. Additionally, the study reviews the most cited papers and provides an overview of all selected papers in the dataset, shedding light on methods employed to counter misinformation and the primary research areas where misinformation detection has been explored, including sources such as online social networks, communities, and news platforms.
Recent events related to health issues stemming from the COVID-19 pandemic have heightened interest within the research community in misinformation detection, an interest also reflected in the fact that half of the ten most-cited papers in the dataset address this subject. The insights derived from this analysis contribute valuable knowledge to address the issue, enhancing our understanding of the field’s dynamics and aiding the development of effective strategies to detect and mitigate the impact of misinformation. The results show that IEEE Access occupies the first position in the current analysis by number of published papers, that King Saud University is the top contributing institution for misinformation detection, and that the top five countries by contribution to this area are the USA, India, China, Spain, and the UK. Moreover, the study supports the promotion of verified and reliable sources of data, fostering a more informed and trustworthy information environment.
In recent years, government agencies, information institutions, educators and researchers have paid increasing attention to issues of misinformation, disinformation and conspiracy theorizing. This has prompted a seemingly endless supply of guides, frameworks and approaches to ‘combating’ the problem. In studies of mis- and disinformation, a constellation of analogous concepts are defined in multiple ways across multidisciplinary literatures and institutional contexts. Misinformation, disinformation and conspiracy theory are often conflated, lacking specific, portable definitions across fields of study. Linguistic metaphors are often leveraged in place of this definitional work. The larger conceptual metaphors that they connote contain normative assumptions that often impose values and moral imperatives, imply deficiencies, assume intent, and foreground individual agency or lack thereof. Metaphors are as restrictive as they are illuminating; once used, a metaphor also applies constraints to the way in which a phenomenon can be understood. Metaphors not only shape the ways in which science is communicated to the public, but also the kinds of questions that are asked, the theories and methods used, and the parameters of the research design. By analyzing instances of linguistic metaphor, this exploratory study identifies and develops two conceptual metaphors that are frequently evoked to discuss mis- and disinformation: embodied health metaphors and environmental health metaphors. The former includes linguistic metaphors like viral/virality, infodemic, infobesity, information hygiene, information dysfunction, and information pathology. The latter includes linguistic metaphors like information pollution, infollution, and digital wildfires. 
Uncritically invoking such metaphors adopts tacit arguments deriving from the original field of study (e.g., public health’s tendency to equate individual embodied health with virtue), or the image of the metaphor itself (digital wildfires implies quick spread and immediate danger), or both. Widespread and uncritical use of such metaphors, we argue, rewards speed and epistemic homogeneity in mis- and disinformation research – ultimately discouraging in-depth inquiry.
There is a constantly growing rate of information being shared online as a result of new technologies, social media, and the way the public interacts with these tools. According to the Pew Research Center, one consequence of the increased sharing of online information is the propagation and spread of misinformation, ranging from COVID-19 to politics to many other aspects of life and work (Mitchell et al., 2020). The rise of social media in disseminating information has led to curated content that may not have the same journalistic standard as traditional media and therefore can spread inaccurate, false, malicious information, or propaganda. According to UNESCO (2018), misinformation is information that is misleading but not created with mal-intent. This differs from disinformation, which is false information created to purposefully create harm (UNESCO, 2018). Misinformation regarding COVID-19 has been prevalent and appeared in various forms of media (Brennen et al., 2020). The majority of misinformation about COVID-19 appeared on social media (88%), followed by television (9%), news outlets (8%), and other websites (7%). Oftentimes, facts were misconstrued (59%) instead of fabricated (38%). Exposure to COVID-19–related misinformation has reduced people’s willingness to seek additional (often counter) information and ability to process it (Kim et al., 2020). Only about 30% of Americans have expressed confidence in their ability to check the accuracy of information regarding COVID-19 (Gottfried, 2020). Misinformation regarding the U.S. presidential election was similarly widespread. Misinformation may have consequences beyond COVID-19 and elections, affecting areas such as HIV. This commentary will focus on the impact of misinformation on the use of HIV services, including misinformation related to the safety, efficacy, and use of preexposure prophylaxis (PrEP); use of supplements; and HIV-related stigma.
Those at risk for or living with HIV may be susceptible to misinformation for a variety of reasons. For instance, one study examined the perception of individuals living with HIV with limited income resources and history of substance use about their health seeking behavior online (Nokes et al., 2018). Findings showed that study participants had low electronic health literacy and, although they were interested in seeking information online, low confidence in their ability to distinguish a credible source, with some preferring to speak with health providers instead. Misinformation and stigma continue to marginalize vulnerable populations, such as African American men who have sex with men (further negatively affecting their health outcomes; Nokes et al., 2018). For example, among African American men who have sex with men, perceived stigma and medical mistrust about medication side effects were barriers to PrEP uptake (Cahill et al., 2017). A survey of millennials and Generation Z found that stigma surrounding HIV affected the emotional, mental, and sexual health among those living with HIV (Salman et al., 2016). Misinformation about treatment adherence was also high among this group, with approximately one third of participants believing that they could stop taking medications if they felt better.
Synthetic realities are digital creations or augmentations that are contextually generated through the use of Artificial Intelligence (AI) methods, leveraging extensive amounts of data to construct new narratives or realities, regardless of the intent to deceive. In this paper, we delve into the concept of synthetic realities and their implications for Digital Forensics and society at large within the rapidly advancing field of AI. We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality. This is especially important in scenarios involving the creation and dissemination of fake news, disinformation, and misinformation. Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations. Additionally, we shed light on the key research challenges that lie ahead in this area. This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science.
This research examines the typology of rumors and fact-checking mechanisms in Chinese social media, focusing on the WeChat platform. The study analyzes 300 cases of disinformation extracted from the "Rumor Refutation Assistant" application in WeChat between 2023 and 2025 using Python-based tools. The author investigates the structural and content characteristics of rumors, their thematic classification across various categories (healthcare, public safety, and others), and both institutional and user-driven verification strategies. Special attention is given to the relationship between rumor types and the effectiveness of fact-checking mechanisms within the Chinese context. The methodology includes content analysis for fake typology, text mining techniques (TF-IDF, LDA), and social network analysis to examine information dissemination patterns. Findings reveal significant patterns in the distribution of fakes, where algorithmic and institutional factors substantially influence information perception. Healthcare-related messages (39.67%), technology information (23.00%), and public safety content (21.33%) dominate the fakes landscape. The author's contribution lies in analyzing information verification mechanisms within Chinese social media and identifying correlations between fake typologies and the effectiveness of refutation strategies. The research novelty stems from examining rumor typology and fact-checking in the Chinese context, emphasizing WeChat's role in information dissemination. The study demonstrates that mitigating disinformation requires AI integration, active user participation in fact-checking, and effective legal regulation of the information space.
In light of the intense information disorder that has ensued since the outbreak of the COVID-19 pandemic, the aim of this study is to analyze the similarities and differences between the disinformation circulating in three countries, based on the posts of their pioneering fact-checking organizations: Agência Lupa (Brazil), Newtral (Spain), and Jornal Polígrafo (Portugal). A quantitative and qualitative content analysis (Bardin, 2011) was run on the fact checks (n = 87) performed by the three organizations in March 2021, 12 months after the pandemic had been declared by the World Health Organization, using the analytical categories “classification”, “medium”, “format”, “source”, and “topic”. The disinformation identified in the three countries shared three similarities, namely, a predominance of false content, the primary use of text formats, and the dissemination of disinformation on social media platforms. As to the sources cited and subject matter, differences were found in the strategies employed to validate the disinformation and in the topics covered. It can be concluded that while the pandemic was a global phenomenon, the disinformation circulating about it was influenced by the political, social, and cultural particularities of each country.
Background The word “infodemic” refers to the deluge of false information about an event, and it is a global challenge for today’s society. The sheer volume of misinformation circulating during the COVID-19 pandemic has been harmful to people around the world. Therefore, it is important to study different aspects of misinformation related to the pandemic. Objective This paper aimed to identify the main subthemes related to COVID-19 misinformation on various platforms, from traditional outlets to social media. This paper aimed to place these subthemes into categories, track the changes, and explore patterns in prevalence, over time, across different platforms and contexts. Methods From a theoretical perspective, this research was rooted in framing theory; it also employed thematic analysis to identify the main themes and subthemes related to COVID-19 misinformation. The data were collected from 8 fact-checking websites that formed a sample of 127 pieces of false COVID-19 news published from January 1, 2020 to March 30, 2020. Results The findings revealed 4 main themes (attribution, impact, protection and solutions, and politics) and 19 unique subthemes within those themes related to COVID-19 misinformation. Governmental and political organizations (institutional level) and administrators and politicians (individual level) were the 2 most frequent subthemes, followed by origination and source, home remedies, fake statistics, treatments, drugs, and pseudoscience, among others. Results indicate that the prevalence of misinformation subthemes had altered over time between January 2020 and March 2020. For instance, false stories about the origin and source of the virus were frequent initially (January). Misinformation regarding home remedies became a prominent subtheme in the middle (February), while false information related to government organizations and politicians became popular later (March). 
Although conspiracy theory web pages and social media outlets were the primary sources of misinformation, surprisingly, results revealed trusted platforms such as official government outlets and news organizations were also avenues for creating COVID-19 misinformation. Conclusions The identified themes in this study reflect some of the information attitudes and behaviors, such as denial, uncertainty, consequences, and solution-seeking, that provided rich information grounds to create different types of misinformation during the COVID-19 pandemic. Some themes also indicate that the application of effective communication strategies and the creation of timely content were used to persuade human minds with false stories in different phases of the crisis. The findings of this study can be beneficial for communication officers, information professionals, and policy makers to combat misinformation in future global health crises or related events.
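The month-by-month shift in subtheme prevalence that the study reports can be tallied with a simple grouping, sketched here on hypothetical items (the labels mirror subthemes named above, but the counts are invented, not the study's 127-piece dataset):

```python
# Tally subtheme frequency per month and pick each month's most
# prevalent subtheme, mirroring the temporal analysis described above.
from collections import Counter, defaultdict

items = [  # (month, subtheme) pairs, hypothetical
    ("2020-01", "origin and source"),
    ("2020-01", "origin and source"),
    ("2020-01", "home remedies"),
    ("2020-02", "home remedies"),
    ("2020-02", "home remedies"),
    ("2020-03", "politicians"),
    ("2020-03", "politicians"),
]

by_month = defaultdict(Counter)
for month, subtheme in items:
    by_month[month][subtheme] += 1

# Map each month to its dominant subtheme
peaks = {m: c.most_common(1)[0][0] for m, c in by_month.items()}
```

The same grouping generalizes directly to the study's four themes and 19 subthemes once each fact-checked item is coded with a month and a label.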
No abstract available
The public health crisis created by COVID-19 represents a challenge for journalists and the media. Specialised information on healthcare and science has become a necessity for dealing with the current situation as well as for meeting society's demand for information. In this context of increased uncertainty, the circulation of fake news on social networks and messaging applications has proliferated, producing what has become known as the ‘infodemic’. This paper focuses on the fact-checking of journalistic content using a combined methodology: content analysis of information debunked by the main Spanish fact-checking platforms (Maldita and Newtral) and an in-depth questionnaire administered to these stakeholders. The results confirm the quantitative and qualitative evolution of disinformation. Quantitatively, more fact-checking is performed during the state of alarm. Qualitatively, hoaxes increase in complexity as the pandemic evolves, in such a way that disinformation engineering takes place, and it is expected to continue until the development of a vaccine.
The thematic articles in this issue collectively examine the growing challenge of AI-generated misinformation, namely its creation, dissemination via social media, and impact on political discourse and public trust, alongside the struggle of fact-checking and detection methods. This rapidly evolving landscape, where AI-generated misinformation blurs truth and falsehood, erodes traditional sources of epistemic authority (Shin et al., 2025), making trustworthy information difficult to discern and underscoring the urgency of effective countermeasures. For instance, Cazzamatta and Sarisakaloğlu (2025) provide empirical data on AI-generated misinformation trends across different countries, noting country-specific variations in topics and intentionality.
In the contemporary media landscape, marked by the rapid advancement of digital technologies, the issue of the credibility of media content and platforms has emerged as one of the principal societal concerns. This paper examines the phenomenon of disinformation, fake news, and related concepts such as misinformation and malinformation, which constitute both a major challenge and a tangible as well as potential threat to media users. Particular emphasis is placed on fact-checking projects and platforms, their societal roles, and prominent examples from Croatia and abroad. The objective of the study was to investigate public perceptions of the credibility of fact-checking platforms and to underscore the importance of media literacy, critical evaluation of information, transparency, and the reliability of digital sources. The empirical segment of the paper is grounded in a survey conducted among citizens of the Republic of Croatia, encompassing diverse age and educational cohorts. The research sought to determine the extent to which citizens, as media consumers, recognize and utilize fact-checking platforms, the degree of trust they place in them, and the influence of various factors on their attitudes. The results indicate a moderate level of trust and limited recognition of fact-checking platforms, highlighting the pivotal role of media literacy in enhancing societal resilience to fake news and propaganda.
Democratic societies inherently depend on an informed citizenry. By shaping citizens’ voting behavior, fostering political cynicism, and reducing trust in institutions, misinformation can pose significant challenges to individuals and societies. Against this backdrop, fact-checking initiatives aimed at verifying the accuracy of publicly disseminated (mis)information have flourished worldwide. However, existing research is disproportionately oriented toward the Global North, with a focus on the United States and the most influential organizations. Equally scarce are comparative studies. To address these shortcomings, this study introduces a context-sensitive framework for analyzing fact-checking cultures and illustrates its application in a cross-national comparative design by contrasting two countries from the Global South and North: Brazil and Germany. Using a mixed-methods design, we integrate computational, qualitative, and quantitative content analysis of 11 fact-checking organizations and 13,498 fact-checking articles over 11 years (2013–2023), alongside qualitative semistructured interviews with fact-checkers (N = 10). Our findings reveal several areas of divergence and convergence, suggesting that fact-checking cultures transcend organizational and national boundaries.
Natural disasters linked to contexts of unpredictability and surprise generate a climate of uncertainty in the population, resulting in an exponential increase in disinformation. These are crisis situations that cause the management of public and governmental institutions to be questioned, diminish citizens’ trust in the media, and reinforce anonymity in social networks. New digital algorithms create a scenario plagued by fake news and levels of viralization of rumors never before contemplated. Our objective is to analyze the verification capacity of fact-checking agencies on X during periods of information disorder, such as the Turkey–Syria earthquakes in 2023. We apply a mixed methodology of comparative content analysis to government, news agency, and IFCN accounts, generating a general sample (n = 46,747) that is then subjected to thematic categorization to create a specific sample (n = 564). The results indicate a low commitment to fact-checking on the part of official bodies and news agencies, as opposed to fact-checking agencies’ accurate handling of the facts. The lack of debate and engagement generated by digital audiences in the face of the discursive intentionality of disinformation is significant.
ABSTRACT This study is based on a content analysis of 238 forwarded messages sent to a public fact-checking group on WhatsApp in Singapore during the first six months of the COVID-19 pandemic to understand what types of information people would submit for fact-checking, allowing insights into possible motivations behind the use of fact-checking services. Focusing on content characteristics, we examined the range of topics, valence, and facticity of the messages forwarded to the WhatsApp group to be fact-checked. The most common topic was public policy and action; most of the messages focused on negative aspects; and nearly half of the messages were either partly or entirely inaccurate. Comparing the distribution of messages across a six-month period, we found that content characteristics varied over time. As the situation worsened in Singapore, with the number of cases increasing and more regulations implemented by the government, the messages shared to be authenticated focused more on public policy, became more negative, and contained more inaccuracies. These findings indicate that the types of information people seek to authenticate are those that have utility; are important and consequential; are likely to inform their actions and decisions; and can aid them in sense-making.
This study explores how fact-checkers understand information disorder in Ibero-America, in particular COVID-19 disinformation. We conducted a quantitative content analysis of the LatamChequea Coronavirus alliance database and in-depth interviews with journalists from the network. The evidence showed that one of the most prevalent disinformation topics was the government’s restrictive measures, threatening to jeopardize the effectiveness of public health campaigns. This, added to disinformation that eroded trust in institutions and the press, and the opacity of governments, constituted a political crisis in Ibero-America. Under this scenario, fact-checkers created relevant journalistic collaborations and strategies to fight disinformation in the region.
Abstract Epistemologies of journalism differ across genres, and fact-checking, as an independent operation or feature within existing news media organizations, can be considered a genre of journalism with its own epistemology. This paper explores the epistemology of fact-checking as expressed by fact-checkers from 40 fact-checking organizations serving more than 50 countries on six continents. Fact-checkers operate in various political, social, economic, and informational contexts, and yet reveal isomorphic norms, practices, and structures, particularly in the form of knowledge, production of knowledge, and defense of their knowledge. What emerged in interviews with fact-checkers is a shared belief in the ability to determine the objective truth of claims, which is validated by evidence and a transparent process of reproducibility or modeling the fact-checking process. This process is also seen as a way to convince the reader of the accuracy and trustworthiness of fact-checks. Fact-checkers define their role and work as a public service, and frequently offer media literacy and fact-checking training to the public and journalists to instill a culture of factuality. Overall, the cross-national findings suggest that as fact-checking is becoming increasingly taken-for-granted or institutionalized, fact-checkers share a common epistemology, promoting confidence in factually verifiable truth.
ABSTRACT Misinformation is a complex and global problem of social and technical dimensions. It is a problem that diverse technologies both exacerbate and are enlisted to solve. It is also a problem that flourishes on platforms and can lead to partnerships with platform companies. These sociotechnical dimensions of misinformation as a problem involve different actors. Some actors create or contribute to the problem, while others perceive it as their problem to solve and work to address it. Identifying the problem of misinformation is at the heart of the issue of problem-solving in fact-checking, as different actors have interests in how problems are discursively presented. This article draws on an international interview study conducted throughout 2020–2022 with 46 fact-checking actors (21 fact-checkers, 14 journalists, and 11 newsroom managers). This article analyzes how these actors reflect on “misinformation problems,” and how these problems become “fact-checking problems” for the actors to work with and solve. Ultimately, the article argues that fact-checking must be approached as a sociotechnical and problem-solving-oriented practice. Doing so highlights specific obstacles in information distribution and platform affordances.
During the onset of the COVID-19 pandemic, various officials flagged the critical threat of false information. In this study, we explore how three major social media platforms (Facebook, Twitter, and YouTube) responded to this “infodemic” during early stages of the pandemic via emergent fact-checking policies and practices, and consider what this means for ensuring a well-informed public. We accomplish this through a thematic analysis of documents published by the three platforms that address fact-checking, particularly those that focus on COVID-19. In addition to examining what the platforms said they did, we also examined what the platforms actually did in practice via a retrospective case study drawing on secondary data about the viral conspiracy video, Plandemic. We demonstrate that the platforms focused their energies primarily on the visibility of COVID-19 mis/disinformation on their sites via (often vaguely described) policies and practices rife with subjectivity. Moreover, the platforms communicated the expectation that users should ultimately be the ones to hash out what they believe is true. We argue that this approach does not necessarily serve the goal of ensuring a well-informed public, as has been the goal of fact-checking historically, and does little to address the underlying conditions and structures that permit the circulation and amplification of false information online.
Like other disease outbreaks, the COVID-19 pandemic has led to the rapid generation and dissemination of misinformation and fake news. We investigated whether subscribers to a fact checking newsletter (n = 1397) were willing to share possible misinformation, and whether predictors of possible misinformation sharing are the same as for general samples. We also investigated predictors of willingness to have a COVID-19 vaccine and found that although vaccine acceptance was high on average, it decreased as a function of lower belief in science and higher conspiracy mentality. We found that 24% of participants had shared possible misinformation and that this was predicted by a lower belief in science. Like general samples, our participants were typically motivated to share possible misinformation due to interest in the information, or to seek a second opinion about claim veracity. However, even if information is shared in good faith and not for the purpose of deceiving or misleading others, the spread of misinformation is nevertheless highly problematic. Exposure to misinformation engenders faulty beliefs in others and undermines efforts to curtail the spread of COVID-19 by reducing adherence to social distancing measures and increasing vaccine hesitancy.
This paper investigates the dissemination, situated fact-checking processes, and social effects of COVID-19 related online and offline misinformation in rural Bangladeshi life. A six-month-long ethnographic study in three villages found villagers perceived a lack of knowledge and experience among local medical professionals and often fell for flashy promotions of unreliable and unconfirmed cures. Villagers built on their local beliefs and myths, religious faiths, and social justice sensibilities while fact-checking suspicious information. They often reported being misled by misinformation that caters to these values, and they further spread this information through conversations with friends and family. Based on our findings, we argue that CSCW and HCI researchers should study misinformation and situated fact-checking together as a communal practice to design appropriate wellbeing technologies and social media for given communities.
Since the World Health Organization (WHO, February 2, 2020) reported that the spread of coronavirus disease has been accompanied by a “massive infodemic,” the COVID-19 outbreak has become a national and international battleground of a struggle against misinformation. Fact-checking outlets around the world have been actively counteracting false and misleading information surrounding the pandemic. In this article, we conceptualize fact checkers in terms of the “interpretative power” that journalism holds in processes of political performances (Alexander, Sociological Theory 22(4): 527–573, 2004; The Performance of Politics: Obama’s Victory and the Struggle for Democratic Power, Oxford University Press, Oxford/New York, 2010). Drawing on virus-related fact checks from Poynter’s International Fact-Checking Network (IFCN) database, we make two arguments. First, we argue that the new phenomenon of specialized “fact checking” might be considered as a further explicitly differentiated element of Alexander’s model of cultural performance, which fulfills a double duty: trying to contribute to further “de-fusion” (separating audiences from actors when the latter lack authenticity and credibility) on the one hand, and working to overcome it on the other. Second, we explain how new fact-checking practices have become a reflexive supplement to the news media of the civil sphere that might be able to help the civil sphere’s communicative institutions to defend truthfulness in a manner that contributes to democracy.
This exploratory study investigates how the global COVID-19 pandemic spotlighted fact-checking to combat misinformation and disinformation in Canadian journalism. Specifically, this work investigates how Canadian journalists and journalism educators may be approaching fact-checking (both ante hoc, or editorial, and post hoc) to respond to more forms of misinformation and disinformation. Through expert in-depth interviews (n = 14) with Canadian journalism educators, reporters, and newsroom leaders, this analysis sketches an initial understanding of the place of fact-checking in Canadian journalism practice and pedagogy. This initial study offers five tentative findings from our expert interviews: (1) while the COVID-19 pandemic highlighted the need for more fact-checking, Canadian journalists and journalism educators believe the worldwide health crisis was not the sole trigger for an increased focus on fact-checking in Canadian journalism and journalism education; (2) over the last decade, Canadian journalism schools may have increased their focus on fact-checking and verification teaching; (3) while Canadian newsroom leaders want their journalists to have solid fact-checking and verification skills to combat concerns about information integrity, they are concerned about the skills new graduates bring to the job; (4) Canadian journalists and journalism educators believe ante hoc, or editorial, and post hoc fact-checking should play a more significant role in Canadian journalism; and (5) while there is concern about the efficacy of post hoc fact-checking (whether it corrects misconceptions), Canadian journalists and journalism educators appear committed to the practice because of normative and democratic ideals surrounding truth and information integrity.
This paper investigates the transformative role of Artificial Intelligence (AI) in global communication, with a view to minimising misinformation and disinformation, promoting cultural diversity, and fostering global understanding. The article examines whether the integration of AI in the communication process offers solutions to bridge cultural and linguistic gaps, mitigate perception bottlenecks, and foster global understanding. The study adopts a conceptual review method, which involves a systematic examination of existing literature, research studies, and relevant information in the communication field. The study reveals that AI technologies, via content moderation, fact-checking algorithms, language translation tools, and cultural sensitivity enhancements, have shown significant potential in combating misinformation and disinformation, thereby fostering a more informed global community. Furthermore, AI applications have been found to promote cultural diversity by enabling more accurate and inclusive communication across various languages and cultural contexts. In addition, the paper finds that AI-driven communication strategies have been instrumental in enhancing global understanding by facilitating cross-cultural exchanges and mitigating biases in information dissemination. Finally, it is observed that AI technologies still have some limitations in global communication. Therefore, the study recommends that policymakers, researchers, and practitioners continue to explore and harness the transformative potential of AI in enhancing global communication processes by leveraging AI technologies in a responsible and ethical manner, paving the way for a more inclusive, informed, and interconnected global society.
As a research tradition, participatory design (PD) tends to focus on power dynamics where researchers hold greater power than participants. This paper uses design fiction to consider what this tendency overlooks by examining settings where participants may exist in multiple power relationships simultaneously implicated by the research, specifically focusing on the contexts of misinformation, disinformation, and online hate (M/D/OH). Drawing from existing literature in M/D/OH, we present a series of imaginary method abstracts that prompt questions for researchers to reflect on as they adapt PD techniques for new, different contexts. We highlight three value tensions—authenticity, reciprocity, and impact—integral to sustaining a concern for responsibility in PD scholarship. We end with reflections and potential considerations for responsibly applying PD and design fiction methods in M/D/OH settings.
Recently, the viral propagation of mis/disinformation has raised significant concerns from both academia and industry. This problem is particularly difficult because, on the one hand, rapidly evolving technology makes it much cheaper and easier to manipulate and propagate social media information. On the other hand, the complexity of human psychology and sociology makes the understanding, prediction, and prevention of users' involvement in mis/disinformation propagation very difficult. This themed series on "Multi-Disciplinary Dis/Misinformation Analysis and Countermeasures" aims to bring together the attention and efforts of researchers in relevant disciplines to tackle this challenging problem. In addition, on October 20th, 2021, and March 7th, 2022, some of the guest editorial team members organized two panel discussions, on "Social Media Disinformation and its Impact on Public Health During the COVID-19 Pandemic" and on "Dis/Misinformation Analysis and Countermeasures - A Computational Viewpoint." This article summarizes the key discussion items at these two panels and hopes to shed light on future directions.
In today's digital age, conspiracies and information campaigns can emerge rapidly and erode social and democratic cohesion. While recent deep learning approaches have made progress in modeling engagement through language and propagation models, they struggle with irregularly sampled data and early trajectory assessment. We present IC-Mamba, a novel state space model that forecasts social media engagement by modeling interval-censored data with integrated temporal embeddings. Our model excels at predicting engagement patterns within the crucial first 15-30 minutes of posting (RMSE 0.118-0.143), enabling rapid assessment of content reach. By incorporating interval-censored modeling into the state space framework, IC-Mamba captures fine-grained temporal dynamics of engagement growth, achieving a 4.72% improvement over state-of-the-art across multiple engagement metrics (likes, shares, comments, and emojis). Our experiments demonstrate IC-Mamba's effectiveness in forecasting both post-level dynamics and broader narrative patterns (F1 0.508-0.751 for narrative-level predictions). The model maintains strong predictive performance across extended time horizons, successfully forecasting opinion-level engagement up to 28 days ahead using observation windows of 3-10 days. These capabilities enable earlier identification of potentially problematic content, providing crucial lead time for designing and implementing countermeasures. Code is available at: https://github.com/ltian678/ic-mamba. An interactive dashboard demonstrating our results is available at: https://ic-mamba.behavioral-ds.science/.
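The interval-censored, irregularly sampled setting that the IC-Mamba abstract describes can be pictured with a minimal sketch: sinusoidal embeddings of irregular observation times feed a simple linear state-space update, so a hidden state summarizes the crucial early minutes of an engagement trajectory. This is not the paper's model (that is in the linked repository); the dimensions, dynamics matrices, and observation values below are invented purely for illustration.

```python
import numpy as np

def temporal_embedding(t_minutes, dim=8):
    """Sinusoidal embedding of an (irregular) observation time, so a model
    can condition on *when* an engagement count was observed."""
    freqs = 1.0 / (10.0 ** (np.arange(dim // 2) * 2.0 / dim))
    angles = t_minutes * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def ssm_step(h, x, A, B):
    """One discrete state-space update: h' = A h + B x."""
    return A @ h + B @ x

# Hypothetical irregularly sampled observations in a post's first 30 minutes:
# (minutes since posting, cumulative engagement count).
obs = [(2.0, 15), (9.0, 120), (27.0, 480)]

rng = np.random.default_rng(0)
dim = 8
A = np.eye(dim) * 0.95                          # decaying recurrent dynamics
B = rng.normal(scale=0.1, size=(dim, dim + 1))  # input map (untrained, illustrative)

h = np.zeros(dim)
for t, count in obs:
    x = np.concatenate([temporal_embedding(t, dim), [np.log1p(count)]])
    h = ssm_step(h, x, A, B)   # state now summarizes the early trajectory

print(h.shape)
```

In a trained model, `h` would be fed to a head that forecasts later engagement; the point here is only how time embeddings let a recurrence absorb observations at arbitrary instants rather than on a fixed grid.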
Misinformation and disinformation are receiving momentous global attention, largely because of the risks they pose to almost every sector. They are also deepening hatred among ethnic groups, particularly in Ghana and Nigeria. Most critical of late are the persistent manufactured lies dressed in the semblance of news, which have further threatened the fragile ethno-religious fabric of these two West African nations. In view of this, the study explores the intricate interconnection between misinformation, disinformation, and their impact on intensifying ethno-religious conflicts in Ghana and Nigeria. The propagation of inaccurate or deceptive information across various mediums has been observed to play a substantial role in exacerbating tensions, deepening divisions, and magnifying animosity among diverse ethnic communities. The primary objective of this research is to establish a wide-ranging comprehension of how misinformation and disinformation contribute to the escalation of ethno-religious conflicts, thereby shedding light on potential strategies to mitigate their detrimental consequences. Employing a qualitative approach of in-depth interviews, the study uncovered the mechanisms through which misinformation and disinformation disseminate, shape perceptions, and contribute to the fragmentation of communities in Nigeria and Ghana. By highlighting these dynamics, the study seeks to offer valuable insights to policymakers, media professionals, and community leaders, enabling them to confront the predicament of misinformation and disinformation and ultimately cultivate more unified and harmonious Nigerian and Ghanaian societies.
This article explores the use of screenshots as a form of visual evidence on social media platforms. It considers their role in YouTube videos that spread misinformation and disinformation about the Notre Dame Cathedral Fire and an internet hoax, the Momo Challenge. The article draws on two social semiotic frameworks, legitimation (Van Leeuwen, ‘Legitimation in Discourse and Communication’, 2007) and affiliation (Knight, ‘Evaluating Experience in Funny Ways’, 2013, and Zappavigna, ‘Searchable Talk and Social Media Metadiscourse’, 2018), to analyse how screenshots and accompanying voiceovers construe technological authority and propagate social values. Seven key forms of screenshots are identified in the dataset, alongside the key social bonds that are made visually salient in the screenshots. Overall, this research contributes to how we understand the role of screenshots in instances of misinformation and disinformation, highlighting the importance of identifying the affiliation potential of the screenshot in order to determine its veracity.
Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as easy, unlimited communication and instant access to news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. On the other hand, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard for AI alone to determine its veracity without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions in tackling the challenges.
No abstract available
Abstract The Internet and social media have become a widespread, large-scale, and easy-to-use platform for real-time information dissemination. They have become an open stage for discussion, ideology expression, knowledge dissemination, and the sharing of emotions and sentiment. This platform is gaining tremendous attraction and a huge user base from all sections and age groups of society. The matter of concern is to what extent the content circulating on these platforms every second, which is changing the mindset, perceptions, and lives of billions of people, is verified, authenticated, and up to standard. This paper puts forward a holistic view of how information is being weaponized to fulfil malicious motives and forcibly create a biased user perception of a person, event, or firm. Further, a taxonomy is provided for the classification of malicious information content at different stages, along with the prevalent technologies for coping with this issue at the origin, propagation, detection, and containment stages. We also put forward research gaps and possible future research directions so that web information content can be more reliable and safer to use for decision making as well as for knowledge sharing.
Along with the COVID-19 pandemic, an "infodemic" of false and misleading information has emerged and has complicated the COVID-19 response efforts. Social networking sites such as Facebook and Twitter have contributed largely to the spread of rumors, conspiracy theories, hate, xenophobia, racism, and prejudice. To combat the spread of fake news, researchers around the world have made and are still making considerable efforts to build and share COVID-19 related research articles, models, and datasets. This paper releases "AraCOVID19-MFH", a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset. Our dataset contains 10,828 Arabic tweets annotated with 10 different labels. The labels have been designed to consider some aspects relevant to the fact-checking task, such as the tweet's check worthiness, positivity/negativity, and factuality. To confirm our annotated dataset's practical utility, we used it to train and evaluate several classification models and reported the obtained results. Though the dataset is mainly designed for fake news detection, it can also be used for hate speech detection, opinion/news classification, dialect identification, and many other tasks.
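A multi-label setup like AraCOVID19-MFH's, where one tweet carries several labels at once (check-worthiness, factuality, polarity, ...), can be sketched with a generic TF-IDF plus one-vs-rest logistic regression baseline. This is not one of the models the paper evaluated; the English texts and label names below are invented stand-ins, and `scikit-learn` availability is assumed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for annotated tweets (the real dataset holds 10,828
# Arabic tweets with 10 labels); label names here are hypothetical.
texts = [
    "new cure for the virus discovered, share now",
    "ministry of health publishes official case counts",
    "they are hiding the truth about the vaccine",
    "hospital opens extra testing centres this week",
] * 5
labels = [
    {"check_worthy", "factuality_doubtful"},
    {"factual"},
    {"check_worthy", "factuality_doubtful"},
    {"factual"},
] * 5

# Turn label sets into a binary indicator matrix (one column per label).
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One independent binary classifier per label over shared TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, Y)

pred = clf.predict(["official case counts released by the ministry"])
print(mlb.inverse_transform(pred))
```

One-vs-rest treats each label independently, which is the simplest way to let a tweet be, say, both check-worthy and of doubtful factuality at the same time.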
In recent years, the world has witnessed a global outbreak of fake news, propaganda and disinformation (FNPD) flows on online social networks (OSN). In the context of information warfare and the capabilities of generative AI, FNPDs have proliferated. They have become a powerful and quite effective tool for influencing people’s social identities, attitudes, opinions and even behavior. Ad hoc malicious social media accounts and organized networks of trolls and bots target countries, societies, social groups, political campaigns and individuals. As a result, conspiracy theories, echo chambers, filter bubbles and other processes of fragmentation and marginalization are polarizing, radicalizing, and disintegrating society in terms of coherent politics, governance, and social networks of trust and cooperation. This systematic review aims to explore advances in using machine and deep learning to detect FNPD in OSNs effectively. We present the results of a combined PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) review in three analysis domains: 1) propagators (authors, trolls, and bots), 2) textual content, 3) social impact. This systemic research framework integrates meta-analyses of three research domains, providing an overview of the wider research field and revealing important relationships between these research domains. It not only addresses the most promising ML/DL research methodologies and hybrid approaches in each domain, but also provides perspectives and insights on future research directions.
Introduction The novel coronavirus (COVID-19) pandemic is characterised by loads of fake news and misinformation, which can influence vaccine acceptance. Implementing a harmonized public health strategy during an outbreak necessitates effective community engagement and communication, which facilitates public trust and decision-making. This study explored the role of community engagement in the acceptance of the COVID-19 vaccine amid fake news and misinformation in two municipalities in Ghana. Method A case study design was employed using in-depth interviews with government officials from the Ghana Health Service, Municipal Assembly, Information Services Department and the National Commission on Civic Education, and with community gatekeepers. Additionally, focus group discussions were conducted with a cross-section of women, men, and migrant community members to understand the role of community engagement in vaccine acceptance. The qualitative analysis software NVivo 12 was used to support thematic coding and analysis. All ethical procedures and COVID-19 preventive protocols were observed. Results Study participants reported that the sources of fake news and misinformation about the COVID-19 vaccines included interpersonal communication, the radio, and a popular anti-vaccine song. Some of the factors contributing to vaccine hesitancy were community members’ belief in the fake news and misinformation, low trust in the government and public institutions, and the lack of extensive education on COVID-19 vaccines. The Ghana Health Service was the most successful in engaging communities to promote vaccine acceptance amid fake news and misinformation. It leveraged its existing community-based health planning and services (CHPS) programme, which engaged the communities frequently through routine programmes such as durbars, antenatal clinics, child welfare clinics, and other community programmes.
Conclusion Misinformation and fake news about COVID-19 vaccines were widespread in the study communities, with significant implications for vaccine hesitancy. The sources of misinformation ranged from social media platforms and radio broadcasts to personal interactions within communities. While government efforts at community engagement were noted, these efforts were often inadequate to counteract the deeply ingrained fears and misconceptions.
In the digital age, where information is a cornerstone for decision-making, social media's not-so-regulated environment has intensified the prevalence of fake news, with significant implications for both individuals and societies. This study employs a bibliometric analysis of a large corpus of 9678 publications spanning 2013–2022 to scrutinize the evolution of fake news research, identifying leading authors, institutions, and nations. Three thematic clusters emerge: Disinformation in social media, COVID-19-induced infodemics, and techno-scientific advancements in auto-detection. This work introduces three novel contributions: 1) a pioneering mapping of fake news research to Sustainable Development Goals (SDGs), indicating its influence on areas like health (SDG 3), peace (SDG 16), and industry (SDG 9); 2) the utilization of Prominence percentile metrics to discern critical and economically prioritized research areas, such as misinformation and object detection in deep learning; and 3) an evaluation of generative AI's role in the propagation and realism of fake news, raising pressing ethical concerns. These contributions collectively provide a comprehensive overview of the current state and future trajectories of fake news research, offering valuable insights for academia, policymakers, and industry.
No abstract available
Fake news on social media spreads faster and has become a major societal concern, prompting numerous publications and knowledge sharing among researchers. This research aims to understand the shifting nature of fake news by investigating the citation relationships between significant publications using key route main path analysis (MPA). The process involves generating keywords, collecting and selecting relevant data, and conducting MPA on fake news in social media. The study analyzes 4,057 publications from 2010 to 2023, identifying 27 influential works shaping the knowledge diffusion in fake news research. Findings reveal two main phases: understanding fake news consumption patterns and analyzing its dissemination and detection mechanisms. Through multiple-global MPA, five research trends are identified: health misinformation, fact-checking, sharing behavior, fake news recognition, and physiological interventions. The study shows a continuous rise in publications and citations, with current trends focusing on health-related misinformation. This analysis offers insights into the development and diffusion of fake news topics on social media, emphasizing the importance of historical development in guiding future research by uncovering current trends. Highlighting the historical progression of research provides valuable context, enabling a more nuanced understanding of the field.
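Main path analysis works by weighting each citation link by how much knowledge "flow" passes through it; a common weight is the Search Path Count (SPC), the number of source-to-sink paths traversing an edge. A minimal sketch on an invented toy citation DAG, assuming `networkx` is available (the key-route variant used in the paper additionally extends the top-weighted edge into a full path):

```python
import networkx as nx

# Toy citation network: edges point from cited (earlier) to citing (later) work.
G = nx.DiGraph([
    ("A", "B"), ("A", "C"), ("B", "D"),
    ("C", "D"), ("C", "F"), ("D", "E"), ("F", "E"),
])

order = list(nx.topological_sort(G))

# n_from_source[v]: number of paths from any source (uncited origin) to v.
n_from_source = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
for v in order:
    for u in G.predecessors(v):
        n_from_source[v] += n_from_source[u]

# n_to_sink[v]: number of paths from v to any sink (most recent work).
n_to_sink = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
for v in reversed(order):
    for w in G.successors(v):
        n_to_sink[v] += n_to_sink[w]

# SPC of an edge = source-to-sink paths passing through it.
spc = {(u, v): n_from_source[u] * n_to_sink[v] for u, v in G.edges}
key_route = max(spc, key=spc.get)   # edge where knowledge flow concentrates
print(key_route, spc[key_route])
```

On real data the nodes would be the 4,057 publications and the heaviest edges would trace the main diffusion path of fake news research.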
Fake news is an explosive subject, being undoubtedly among the most controversial and difficult challenges facing society in the present-day environment of technology and information, which greatly affects the individuals who are vulnerable and easily influenced, shaping their decisions, actions, and even beliefs. In the course of discussing the gravity and dissemination of the fake news phenomenon, this article aims to clarify the distinctions between fake news, misinformation, and disinformation, along with conducting a thorough analysis of the most widely read academic papers that have tackled the topic of fake news research using various machine learning techniques. Utilizing specific keywords for dataset extraction from Clarivate Analytics’ Web of Science Core Collection, the bibliometric analysis spans six years, offering valuable insights aimed at identifying key trends, methodologies, and notable strategies within this multidisciplinary field. The analysis encompasses the examination of prolific authors, prominent journals, collaborative efforts, prior publications, covered subjects, keywords, bigrams, trigrams, theme maps, co-occurrence networks, and various other relevant topics. One noteworthy aspect related to the extracted dataset is the remarkable growth rate observed in association with the analyzed subject, indicating an impressive increase of 179.31%. The growth rate value, coupled with the relatively short timeframe, further emphasizes the research community’s keen interest in this subject. In light of these findings, the paper draws attention to key contributions and gaps in the existing literature, providing researchers and decision-makers innovative viewpoints and perspectives on the ongoing battle against the spread of fake news in the age of information.
INTRODUCTION The increasing number of people who use drugs (PWUDs) can be attributed to the rising online sales of drugs and other related substances. Information on drugs and drug markets has also become easily accessible in web-search engines and social media. Aside from providing direct care, nurses have essential roles in preventing substance use disorder. These roles include health education, liaison, and researcher. Thus, nurses must examine and utilize the Internet, where information and transactions related to these substances are increasing. DESIGN/METHODS This study utilized an infodemiological design in exploring the worldwide information utilization for substance use disorder. Data were gathered from Google Trends and Wikimedia Pageview. The data included relative search volumes (RSV), top and rising related queries and topics, and Wikipedia page views between 2004 and 2022. After describing the data, autoregressive integrated moving average (ARIMA) models were used to predict future utilization of online information from Google and Wikipedia. RESULTS Google Trends ranked 37 countries based on the search volumes for substance use disorder. Ethiopia, Finland, the United States, Kenya, and Canada have the highest RSVs, while the lowest-ranked country is Turkey, followed by Mexico, Spain, Japan, and Indonesia. Google searches for substance use disorder-related information increased by more than 900% between 2004 and 2022. In addition, Wikipedia page views for substance use disorder-related information increased by almost 200% between 2015 and 2022. Based on the ARIMA models, RSVs and page views are predicted to increase by about 150% and 120% by December 2025. Top and rising search-related topics and queries revealed that the public increasingly utilized online information to understand specific substances and the possible mental health comorbidities related to substance use disorders.
Their recent concerns revolved around diagnostics, specific substances, and specific disorders. CONCLUSION The Internet can be of paradoxical use in substance use disorder. It has been previously reported to be increasingly used in drug trades, contributing to the increasing prevalence of substance use disorder. Likewise, the present study's findings revealed that it is increasingly utilized for substance use disorder-related information. Thus, nurses and other healthcare professionals should ensure that online information regarding substance use disorders is accurate and up-to-date. CLINICAL RELEVANCE Nurse informaticists can form and lead Internet- and social-media-based health teams that perform national infodemiological investigations to assess online information. In doing so, they can inform, expand, and contextualize ehealth substance use education and strengthen the accessibility and delivery of substance use healthcare. In addition, public health nurses can collaborate to engage patients and communities in identifying harmful substance use disorder information online and creating culturally-appropriate messages that will correct misinformation and improve ehealth literacy, specifically in substance use disorder.
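The ARIMA forecasting step used above can be illustrated with a stripped-down ARIMA(1,1,0) with drift, fitted by least squares in plain Python. This is a didactic sketch, not the study's model: a real analysis would select the (p, d, q) order by information criteria and report prediction intervals.

```python
def arima_110_forecast(series, steps):
    """Toy ARIMA(1,1,0) with drift: difference once, fit AR(1) on the
    mean-centred differences by least squares, integrate forecasts back."""
    d = [b - a for a, b in zip(series, series[1:])]
    mu = sum(d) / len(d)                     # drift (mean change per step)
    c = [x - mu for x in d]                  # centred differences
    den = sum(x * x for x in c[:-1])
    phi = sum(x * y for x, y in zip(c, c[1:])) / den if den else 0.0
    out, level, dev = [], series[-1], c[-1]
    for _ in range(steps):
        dev *= phi                           # AR(1) decay of the deviation
        level += mu + dev                    # re-integrate to the level
        out.append(level)
    return out

# A steadily rising search-volume series simply extrapolates its trend.
print(arima_110_forecast([10, 12, 14, 16, 18], 3))  # [20.0, 22.0, 24.0]
```

Differencing once (the "I" in ARIMA) turns a trending series into a roughly stationary one, which is why a perfectly linear input yields a perfectly linear forecast here.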
Autism spectrum disorder and co-occurring symptoms often require lifelong services. However, access to autism spectrum disorder services is hindered by a lack of available autism spectrum disorder providers. We utilized geographic information systems methods to map autism spectrum disorder provider locations in Michigan. We hypothesized that (1) fewer providers would be located in less versus more populated areas; (2) neighborhoods with low versus high socioeconomic status would have fewer autism spectrum disorder providers; and (3) an interaction would be found between population and socioeconomic status such that neighborhoods with low socioeconomic status and high population would have few available autism spectrum disorder providers. We compiled a list of autism spectrum disorder providers in Michigan, geocoded the location of providers, and used network analysis to assess autism spectrum disorder service availability in relation to population distribution, socioeconomic disadvantage, urbanicity, and immobility. Hypotheses were supported. Individuals in rural neighborhoods had fewer available autism spectrum disorder providers than individuals in suburban and urban neighborhoods. In addition, neighborhoods with greater socioeconomic status disadvantage had fewer autism spectrum disorder providers available. Finally, statistically significant spatial disparities were found; wealthier suburbs had good provider availability while few providers were available in poorer, urban neighborhoods. Knowing autism spectrum disorder providers’ availability, and neighborhoods that are service deserts, presents the opportunity to utilize evidence-based dissemination and implementation strategies that promote increased autism spectrum disorder providers for underserved individuals.
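The availability analysis above pairs geocoded provider locations with population points. A minimal distance-based sketch is shown below; the coordinates are arbitrary illustrations, and the paper's network analysis would use road-network travel rather than the straight-line distance computed here.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = a, b
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    h = (sin(dphi / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(h))

def providers_within(home, providers, radius_km):
    """Count geocoded provider sites within a straight-line radius."""
    return sum(haversine_km(home, p) <= radius_km for p in providers)

# Hypothetical provider sites (roughly Detroit, Lansing, Marquette).
providers = [(42.33, -83.05), (42.73, -84.55), (46.59, -87.39)]
home = (42.35, -83.06)                        # an urban neighbourhood
print(providers_within(home, providers, 50))  # 1
```

Sweeping `radius_km` over neighbourhood centroids is one simple way to flag "service deserts": areas where the count stays at zero even for generous radii.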
The enduring question of whether grief can ever be pathological (and, if so, when) has hung over mental health and psychiatric care in recent years. While this discussion extends beyond the confines of psychiatry to encompass contributions from diverse disciplines such as Anthropology, Sociology, and Philosophy, scrutiny has been directed mainly at psychiatry for its purported inclination to pathologize grief, an unavoidable facet of the human experience. This critique has gained particular salience following the formal inclusion of prolonged grief disorder (PGD) in the 11th edition of the International Classification of Diseases (ICD-11) and the subsequent Text Revision of the 5th Edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR). This study contends that the inclusion of prolonged grief disorder as a diagnostic entity may be excessively rooted in Western cultural perspectives and empirical data, neglecting the nuanced variations in the expression and interpretation of grief across different cultural contexts. The formalization of this disorder not only raises questions about its universality and validity but also poses challenges to transcultural psychiatry, owing to poor representation in empirical research and an increased risk of misdiagnosis. Additionally, it exacerbates ongoing concerns about normativism and the lack of genuine cultural relativism within the DSM. Furthermore, the passionate debate over whether disordered forms of grief exist may actually impede effective care for individuals genuinely grappling with pathological grief. In light of these considerations, this study proposes that prolonged grief disorder should be approached as a diagnostic category with potential Western cultural bias until comprehensive cross-cultural studies, conducted in diverse settings, can either substantiate or refute its broader applicability. This recalibration is imperative for advancing a more inclusive and culturally sensitive understanding of grief within the field of psychiatry.
OBJECTIVE Recent public awareness of racial and ethnic disparities has again brought to light issues of diversity, equity, and inclusion in the eating disorders field. However, empirical information on racial and ethnic representation in eating disorders research is limited, making it difficult to understand where improvements are needed. METHOD This study reviewed all studies including human participants published in the International Journal of Eating Disorders in 2000, 2010, and 2020. Differences in likelihood of reporting race and ethnicity were calculated based on study year, location, and diagnostic categories. RESULTS Out of 377 manuscripts, 45.2% reported information on the race and ethnicity of study participants. Studies conducted in the United States were more likely to report (128/173), and those conducted in Europe were less likely to report (5/61) on race and ethnicity than those conducted outside of those regions. Rates of reporting increased from 2000 to 2020. White participants made up approximately 70% of the samples that reported race and ethnicity data. Hispanic participants made up approximately 10% of samples reporting race and ethnicity. Participants from all other races and ethnicities made up less than 5% each. DISCUSSION Although rates of reporting race and ethnicity increased over time, most participants were White. Rates of reporting also differed by the geographical region, which may reflect variability in how information on race and ethnicity is collected across countries. More attention toward capturing the cultural background of research participants and more inclusivity in research are needed in the eating disorders field.
Aims There is currently little nationally representative diagnostic data available to quantify how many Aboriginal and Torres Strait Islander people may need a mental health service in any given year. Without such information, health service planners must rely on less direct indicators of need such as service utilisation. The aim of this paper is to provide a starting point by estimating the prevalence ratio of 12-month common mental disorders (i.e. mood and anxiety disorders) for Indigenous peoples compared to the general Australian population. Methods Analysis of the four most recent Australian Indigenous and corresponding general population surveys was undertaken. Kessler-5 summary scores by 10-year age group were computed as weighted percentages with corresponding 95% confidence intervals. A series of meta-analyses were conducted to pool prevalence ratios of Indigenous to general population significant psychological distress by 10-year age groups. The proportion of respondents with self-reported clinician diagnoses of mental disorders was also extracted from the most recent survey iterations. Results Indigenous Australians are estimated to have between 1.6 and 3.3 times the national prevalence of anxiety and mood disorders. Sensitivity analyses found that the prevalence ratios did not vary across age group or survey wave. Conclusions To combat the current landscape of inequitable mental health in Australia, priority should be given to populations in need, such as Indigenous Australians. Having a clear idea of the current level of need for mental health services will allow planners to make informed decisions to ensure adequate services are available.
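Pooling ratios across surveys, as in the meta-analyses above, is typically done on the log scale with inverse-variance weights. The sketch below is a fixed-effect simplification with invented input triples; the paper's pooling may well use random-effects models, which add a between-study variance term.

```python
from math import exp, log, sqrt

def pool_ratios(studies):
    """Fixed-effect inverse-variance pooling of (ratio, lo95, hi95)
    triples on the log scale; returns (pooled, lo95, hi95)."""
    num = den = 0.0
    for r, lo, hi in studies:
        se = (log(hi) - log(lo)) / (2 * 1.96)  # back out SE from the CI
        w = 1.0 / se ** 2
        num += w * log(r)
        den += w
    centre, half = num / den, 1.96 / sqrt(den)
    return exp(centre), exp(centre - half), exp(centre + half)

# Two invented prevalence-ratio estimates; pooling tightens the interval.
pooled, lo, hi = pool_ratios([(2.0, 1.6, 2.5), (2.4, 1.8, 3.2)])
print(round(pooled, 2), round(lo, 2), round(hi, 2))
```

With a single study the function simply reproduces that study's ratio and interval, which makes it easy to sanity-check.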
This study explored parents’ strategies for home educating their children with Autism Spectrum Disorder (ASD) during the COVID-19 period in Harare Urban District, Zimbabwe. Embedded within international research findings on the subject, this qualitative study drew on a purposive sample of eight parents. Telephonic individual interviews, information sheets, and field notes were used to collect data. A constant comparative approach to data organization, with continuous adjustment throughout the analysis, was used to ensure that codes captured the full range of parents’ ideas. Parents were committed to home educating their children with ASD, working in collaboration and dialogue with family members and peer parents. The complementary and supplementary roles of parents and family members in home education facilitated these children’s transition from school routines to home routines. Parents also fostered in their children an awareness of the new social reality of the COVID-19 period and its safety precautions. This study offers insights into parents’ strategies for home educating their children with ASD during the COVID-19 period.
The purpose was to assess prevalence of suicidality, depression, post-traumatic stress disorder (PTSD), and anxiety among female sex workers (FSW). A systematic review and meta-analysis was performed. Search strategy was performed in MEDLINE, Scopus, Web of Science, EMBASE, Ovid and Cochrane Central Database from inception until March 2020. Considered for inclusion were cross-sectional studies performed on FSW that assessed prevalence of any of the following: suicide attempt or suicidal ideation, depression, PTSD, or anxiety. Five reviewers, independently and in duplicate, selected all eligible articles in an abstract and full-text screening phase and, moreover, extracted information from each study. A binomial-normal generalized linear mixed model was employed to estimate prevalence of the conditions. From 8035 studies yielded in the search strategy, 55 were included for analysis. The overall prevalence of suicidal ideation and attempt was 27% (95% C.I. 18–39%) and 20% (95% C.I. 13–28%), respectively. Furthermore, overall prevalence of depression and PTSD was 44% (95% C.I. 35–54%) and 29% (95% C.I. 18–44%), respectively. Eleven studies were classified as high quality. Findings indicate that there is an overall high prevalence of suicidality, depression, and PTSD among FSW. Development of accessible large-scale interventions that assess mental health among this population remains critical.
The wars in the Democratic Republic of Congo have left indelible marks on the mental health and functioning of the Congolese civilians who sought refuge in Uganda. Although it is clear that civilians exposed to potentially traumatizing events in war and conflict areas develop trauma-related mental health problems, scholarly information on gender differences in exposure to different war-related traumatic events, the conditional risk each event carries for developing PTSD, and whether cumulative exposure to traumatic events affects men and women differently remains scarce. In total, 325 (n = 143 males, n = 182 females) Congolese refugees living in Nakivale, a refugee settlement in southwestern Uganda, were interviewed within a year of their arrival. Assessment included exposure to war-related traumatic events and DSM-IV PTSD symptom severity. Our main findings were that refugees were highly exposed to war-related traumatic events, with a dangerous flight being the most common event for both men (97%) and women (97%). The overall prevalence of PTSD was high and differed between women (94%) and men (84%). The highest conditional prevalence of PTSD in women was associated with experiencing rape. The dose-response effect differed significantly between men and women, with women showing higher PTSD symptom severity at low and moderate levels of exposure to potentially traumatizing event types. In conflict areas, civilians are highly exposed to different types of war-related traumatic events that leave them with high levels of PTSD symptoms, particularly women. Interventions aimed at reducing mental health problems resulting from war should take gender into consideration.
Link prediction is a technique for forecasting future or missing relationships between entities based on current network information. Graph theory and network science are the theoretical foundations that have shaped link prediction research. Although previous reviews clearly outlined link prediction research, they focused only on describing prediction approaches. However, analysis of related studies identified other components that influence link prediction. This review presents a continued survey and introduces a taxonomy of link prediction built on three main components: prediction approaches, prediction features, and prediction measurements. Each component is detailed with its own taxonomy in the present review. Furthermore, this review compares the prediction approaches and prediction features, as well as the benchmark algorithms and measurement methods, of previous link prediction studies. In conclusion, previous studies mostly focused on structural features and similarity-based approaches, while measuring the proposed methods using the Area Under the Curve (AUC) score. The proposed link prediction taxonomy can guide researchers toward new ideas and innovations that contribute to link prediction research.
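The similarity-based approaches and AUC measurement that dominate the reviewed studies can be illustrated in a few lines. The sketch below uses the common-neighbours index, one of many similarity indices such reviews cover, on an invented toy graph with one held-out link.

```python
def common_neighbours(adj, pairs):
    """Score each candidate pair by the number of shared neighbours."""
    return {(u, v): len(adj[u] & adj[v]) for u, v in pairs}

def auc(scores, positives):
    """AUC by pairwise comparison: the probability that a held-out true
    link outscores a non-link, counting ties as 0.5."""
    pos = [s for p, s in scores.items() if p in positives]
    neg = [s for p, s in scores.items() if p not in positives]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

# Observed graph as adjacency sets; the link A-D is held out for testing.
adj = {"A": {"B", "C"}, "B": {"A", "C", "D"},
       "C": {"A", "B", "D"}, "D": {"B", "C", "E"}, "E": {"D"}}
pairs = [("A", "D"), ("A", "E"), ("B", "E"), ("C", "E")]
scores = common_neighbours(adj, pairs)
print(scores, auc(scores, {("A", "D")}))
```

Here the held-out pair (A, D) shares two neighbours while every non-link shares at most one, so the index ranks the true link first and the AUC is 1.0.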
BACKGROUND Medications for opioid use disorder, including methadone, combined with comprehensive wraparound services, are the gold standard for treatment in pregnancy. Higher methadone doses are associated with treatment retention in pregnancy and relapse prevention. Given known inequities where individuals of color tend to be prescribed lower doses of opioids for other conditions, the purpose of this study was to determine whether there is racial inequity in methadone dose at delivery in pregnant women with opioid use disorder. METHODS Retrospective review of medical charts identified pregnant women (N = 339) treated with methadone for opioid use disorder during pregnancy at one center from 2012 to 2017. Variables extracted from medical records included race, demographic and relevant clinical information (e.g., methadone dose at delivery, height, weight, etc.). Analyses used simple and multiple linear regressions to determine associations between these characteristics and methadone dose at delivery. RESULTS The mean methadone doses at delivery among women of color and white women were 105.8 mg and 144.9 mg, respectively (p < .0001). After adjusting for maternal age, gestational age at delivery, body mass index, type of opioid used, and parity, race was significantly and independently associated with methadone dose at delivery, with women of color receiving 36.2 mg less than white women (p = .0003). CONCLUSIONS Pregnant women of color with opioid use disorder received 67% of the dose of methadone at delivery that white women received. Antiracist responses to prevent provider bias in evaluating dose needs are needed to correct this inequity and prevent undertreatment of opioid use disorder among women of color.
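Adjusted associations like the one reported above come from multiple linear regression. The sketch below uses simulated data whose parameters merely mirror the reported effect size (it is not the study's data or code), and recovers the adjusted coefficient via the Frisch-Waugh residual-on-residual trick rather than a full matrix solve.

```python
import random

random.seed(7)
n = 339                                   # same sample size as the study
race = [random.randint(0, 1) for _ in range(n)]   # 1 = hypothesized group
age = [random.gauss(29, 5) for _ in range(n)]     # arbitrary covariate
dose = [144.9 - 36.2 * r + 0.4 * (a - 29) + random.gauss(0, 12)
        for r, a in zip(race, age)]

def slope_and_resid(y, x):
    """Simple least-squares slope of y on x (with intercept), plus residuals."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, [yi - my - b * (xi - mx) for xi, yi in zip(x, y)]

# Frisch-Waugh: the multiple-regression race coefficient equals the slope
# of (dose residualised on age) on (race residualised on age).
_, dose_r = slope_and_resid(dose, age)
_, race_r = slope_and_resid(race, age)
b_race, _ = slope_and_resid(dose_r, race_r)
print(f"adjusted race effect: {b_race:.1f} mg")
```

Because the data are simulated with a true effect of -36.2 mg, the recovered coefficient lands near that value, noisy only through the residual error term.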
We examined the content of tweets on the social media site Twitter to better understand the contemporary discourse about medications for opioid use disorder (MOUD), how this discourse reinforces the pervasive stigma surrounding drug addiction and chronic pain, and the impact it has on the demand for and availability of treatment. A retrospective review of tweets over 3 months containing the keywords buprenorphine, naltrexone, methadone, or bupe was conducted, yielding 5,068 tweets. A content analysis was carried out on a subset of these tweets. Themes that emerged included suspicion and conspiracy theories about MOUD, and frustration and lack of control over treatment options. Other tweets shared stigmatizing language and attitudes related to OUD/MOUD (e.g., “junkies”). Twitter is a rich source of data reflecting thoughts, opinions, and sentiments regarding MOUD. However, this information can contain malicious comments that perpetuate stigma for people with OUD and result in avoidance of treatment.
BACKGROUND Low and middle income countries (LMICs) not only have the majority of the world's population but also the largest proportion of youth. Poverty, civil conflict and environmental stressors tend to be endemic in these countries and contribute to significant psychiatric morbidity, including depression, anxiety and post-traumatic stress disorder (PTSD). However, mental health data from LMICs is scarce, particularly data on youth. Evaluation of such information is crucial for planning services and reducing the burden of disability. This paper reviews the published data on the prevalence and randomized trials of interventions for depression, anxiety and PTSD in youth in LMICs. METHODS PubMed and Google Scholar were searched for articles published in English up to January 2017, using the keywords: Low/middle income country, depression, anxiety, post-traumatic stress disorder, child, youth, adolescent, prevalence, treatment, intervention, and outcomes. RESULTS The few prevalence studies in LMICs reported rates of up to 28% for significant symptoms of depression or anxiety among youth, and up to 87% for symptoms of PTSD among youth exposed to traumatic experiences, though these rates varied widely depending on several factors, including the assessment tools used. Most rigorous interventions employed some form or variation of CBT, with mixed results. Studies using other forms of psychosocial interventions appear to be heterogeneous and less rigorous. CONCLUSIONS The mental health burden due to depression and anxiety disorders in youth is substantial in LMICs, with high needs but inadequate services. Youth specific services for early detection and cost-effective interventions are needed.
This report synthesizes several core dimensions of disinformation research. First, at the theoretical level, it establishes the conceptual definitions and taxonomic systems under the "information disorder" framework. Second, it examines the new security challenges posed by generative AI and deepfake technologies. Third, it surveys the technical frontier of automated detection based on machine learning and network analysis. Fourth, it reveals the social harms of disinformation through empirical studies in the medical and public health domains. Finally, from a governance perspective, it evaluates the effectiveness of fact-checking mechanisms and analyzes users' sharing motivations and group susceptibility across cultural contexts. Overall, the research exhibits an interdisciplinary evolution from theoretical clarification toward an equal emphasis on technical governance and socio-psychological intervention.