Disinformation
Automated Detection Techniques Based on Machine Learning and Large Language Models
This group of studies focuses on developing automated detection systems using natural language processing (NLP), deep learning (e.g., BERT, RoBERTa, LSTM), and graph neural networks. Key research themes include building multilingual datasets (German, Arabic, Indonesian, etc.), feature engineering (e.g., TF-IDF, word embeddings), explainable AI, and evaluating how accurately large language models (GPT-4 and others) identify political propaganda and fake news.
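To ground the feature-engineering baseline these papers repeatedly benchmark against, here is a minimal Python sketch of a TF-IDF plus logistic-regression veracity classifier. The texts, labels, and pipeline settings are illustrative assumptions, not drawn from any of the cited datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for a labeled news dataset (0 = reliable, 1 = disinformation).
texts = [
    "Official report confirms the new vaccine passed all safety trials.",
    "SHOCKING: secret lab admits the vaccine rewrites your DNA!",
    "Election commission publishes certified vote tallies for all regions.",
    "Leaked memo PROVES millions of ballots were printed abroad!",
]
labels = [0, 1, 0, 1]

# Word and bigram TF-IDF features feeding a linear classifier: the classical
# baseline that the BERT/RoBERTa models in this cluster are compared against.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["BREAKING: insiders reveal the truth THEY are hiding!"]))
```

In the surveyed work this pipeline is the point of departure: transformer fine-tuning replaces the vectorizer-plus-linear-model pair, and explainability methods are layered on top of whichever classifier wins.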
- Disinformation Detection: A review of linguistic feature selection and classification models in news veracity assessments(Jill Tompkins, 2019, ArXiv)
- Unmasking Disinformation: Advanced Techniques for Fake News Detection and Mitigation(S. Srivastava, Nitasha, Akansha, Mudit Surana, Hardik Singh, Harsh Sangtani, 2023, International Journal for Research in Applied Science and Engineering Technology)
- GerDISDETECT: A German Multilabel Dataset for Disinformation Detection(Mina Schütz, Daniela Pisoiu, Daria Liakhovets, Alexander Schindler, Melanie Siegel, 2024, No journal)
- Detection of Disinformation on Social Platforms: A Review of Computational Approaches and Challenges(Duman Telman, A. Yerimbetova, E. Daiyrbayeva, M. Sambetbayeva, Bayangali Abdygalym, Almas Turganbayev, 2025, 2025 10th International Conference on Computer Science and Engineering (UBMK))
- Detection of False Information on Social Media(Vanshika Dureja, Sarvesh Tanwar, 2024, 2024 11th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO))
- Debunking Disinformation: Revolutionizing Truth with NLP in Fake News Detection(Li He, Siyi Hu, Ailun Pei, 2023, ArXiv)
- Knowledge Graphs and Machine Learning in Fake News and Disinformation Detection(Anastasios Manos, D. E. Filippidou, Nikolaos Pavlidis, Georgios Karanasios, Georgios Vachtanidis, Arianna D’Ulizia, Alessia D’Andrea, 2024, 2024 International Conference on Engineering and Emerging Technologies (ICEET))
- Smart Tool for Text Content Analysis to Identify Key Propaganda Narratives and Disinformation in News Based on NLP and Machine Learning(Maryna Nyzova, Victoria Vysotska, Lyubomyr Chyrun, Zhengbing Hu, Yuriy Ushenko, Dmytro Uhryn, 2025, International Journal of Computer Network and Information Security)
- Genesis Point Detection: Identifying and Neutralizing Disinformation at its Origin(Srushti Deshmukh, 2026, International Journal for Research in Applied Science and Engineering Technology)
- Enhancing Society-Undermining Disinformation Detection through Fine-Grained Sentiment Analysis Pre-Finetuning(T. Pan, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen, 2024, No journal)
- Russo-Ukrainian war disinformation detection in suspicious Telegram channels(Anton Bazdyrev, 2025, ArXiv)
- INFORMATION TECHNOLOGY FOR RECOGNIZING PROPAGANDA, FAKES AND DISINFORMATION IN TEXTUAL CONTENT BASED ON NLP AND MACHINE LEARNING METHODS(V. Vysotska, 2024, Radio Electronics, Computer Science, Control)
- Automating disinformation detection: the challenges from a social science perspective(Kirsty Park, 2021, No journal)
- Survey on Deep Learning for Misinformation Detection: Adapting to Recent Events, Multilingual Challenges, and Future Visions(Ansam Khraisat, L. Chang, J. Abawajy, 2025, Social Science Computer Review)
- Research on false propaganda detection technology based on LLM and BERT(Lidong Xing, Nannan Hou, Zhiqin Zhang, Ke Li, Fangxu Meng, 2025, No journal)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else?(Alexander Michael Daniel, 2021, ArXiv)
- ProST: spotting propaganda span and technique classification in news articles(Pir Noman Ahmad, Adnan Muhammad Shah, Jiequn Guo, Yuanchao Liu, 2025, Aslib Journal of Information Management)
- Disinformation Detection on 2024 Indonesia Presidential Election using IndoBERT(Andhika Putra, Yuliant Sibaroni, 2023, 2023 International Conference on Data Science and Its Applications (ICoDSA))
- A Unified Graph-Based Approach to Disinformation Detection using Contextual and Semantic Relations(Marius Paraschiv, Nikos Salamanos, Costas Iordanou, Nikolaos Laoutaris, Michael Sirivianos, 2021, No journal)
- Are Large Language Models Good at Detecting Propaganda?(Julia Jose, Rachel Greenstadt, 2025, ArXiv)
- Can AI Outsmart Fake News? Detecting Misinformation with AI Models in Real-Time(Gregory Gondwe, 2025, Emerging Media)
- Detecting Propaganda in News Articles Using Large Language Models(2024, Engineering: Open Access)
- Can GPT-4 Identify Propaganda? Annotation and Detection of Propaganda Spans in News Articles(Maram Hasanain, Fatema Ahmed, Firoj Alam, 2024, No journal)
- Hybrid Annotation for Propaganda Detection: Integrating LLM Pre-Annotations with Human Intelligence(Ariana Sahitaj, Premtim Sahitaj, Veronika Solopova, Jiaao Li, Sebastian Möller, Vera Schmitt, 2025, ArXiv)
- Method for neural network detecting propaganda techniques by markers with visual analytic(I. Krak, O. Zalutska, Maryna Molchanova, O. Mazurets, E. Manziuk, O. Barmak, 2024, No journal)
- Fake news detection: a systematic literature review of machine learning algorithms and datasets(Humberto Fernandes Villela, Fábio Corrêa, Jurema Suely de Araújo Nery Ribeiro, Air Rabelo, D. B. F. Carvalho, 2023, J. Interact. Syst.)
- The Impact of Stopwords Removal on Disinformation Detection in Ukrainian language during Russian-Ukrainian war(Halyna Padalko, Vasyl Chomko, Dmytro Chumachenko, 2024, No journal)
- The silence of the LLMs: Cross-lingual analysis of guardrail-related political bias and false information prevalence in ChatGPT, Google Bard (Gemini), and Bing Chat(Aleksandra Urman, M. Makhortykh, 2024, Telematics Informatics)
- There are N Impostors Among Us: Understanding the Effect of State-Sponsored Troll Accounts on Reddit Discussions(Mohammad Hammas Saeed, Jeremy Blackburn, G. Stringhini, 2022, No journal)
- Toward Mitigating Misinformation and Social Media Manipulation in LLM Era(Yizhou Zhang, Karishma Sharma, Lun Du, Yan Liu, 2024, Companion Proceedings of the ACM Web Conference 2024)
- Enhanced Propaganda Detection in Public Social Media Discussions Using a Fine-Tuned Deep Learning Model: A Diffusion of Innovation Perspective(Pir Noman Ahmad, Adnan Muhammad Shah, Kangyoon Lee, 2025, Future Internet)
- DISCO: Comprehensive and Explainable Disinformation Detection(Dongqi Fu, Yikun Ban, Hanghang Tong, R. Maciejewski, Jingrui He, 2022, Proceedings of the 31st ACM International Conference on Information & Knowledge Management)
- Comparative study of predictive models for hoax and disinformation detection in indonesian news(Nadia Paramita Retno Adiati, Dimas Febriyan Priambodo, Girinoto Girinoto, S. Indarjani, Akhmad Rizal, Arga Prayoga, Yehezikha Beatrix, 2024, International Journal of Advances in Intelligent Informatics)
- HyperGraphDis: Leveraging Hypergraphs for Contextual and Social-Based Disinformation Detection(Nikos Salamanos, Pantelitsa Leonidou, Nikolaos Laoutaris, Michael Sirivianos, M. Aspri, Marius Paraschiv, 2023, ArXiv)
- MultiProSE: A Multi-label Arabic Dataset for Propaganda, Sentiment, and Emotion Detection(Lubna Al-Henaki, H. Al-Khalifa, A. Al-Salman, Hajar Alqubayshi, Hind M. Al-Otaibi, Gheeda Alghamdi, Hawra Aljasim, 2025, No journal)
- An Explainable XGBoost-based Approach on Assessing Detection of Deception and Disinformation(Alex V. Mbaziira, M. Sabir, 2024, ArXiv)
- DISINFORMATION DETECTION IN THE MEDICAL DOMAIN: CURRENT APPROACHES, LIMITATIONS, AND FUTURE DIRECTIONS(Vagif Mammadaliyev, Vusal Shahbazov, 2026, Problems of Information Society)
- Multilingual Propaganda Detection: Exploring Transformer-Based Models mBERT, XLM-RoBERTa, and mT5(Mohamed Ibrahim Ragab, Ensaf Hussein Mohamed, Walaa Medhat, 2025, No journal)
- Nexus at ArAIEval Shared Task: Fine-Tuning Arabic Language Models for Propaganda and Disinformation Detection(Yunze Xiao, Firoj Alam, 2023, No journal)
- Ensemble-based fake news and disinformation detection using crowdsourced dataset(G. Kątek, Marta Gackowska-Katek, R. Kozik, Aleksandra Pawlicka, M. Pawlicki, M. Choraś, Ryszard S. Choras, 2026, Log. J. IGPL)
- True or False? Detecting False Information on Social Media Using Graph Neural Networks(Samyo Rode-Hasinger, Anna M. Kruspe, X. Zhu, 2022, No journal)
- Check-It: A plugin for detecting fake news on the web(Demetris Paschalides, Chrysovalantis Christodoulou, Kalia Orphanou, R. Andreou, Alexandros Kornilakis, G. Pallis, M. Dikaiakos, E. Markatos, 2021, Online Soc. Networks Media)
- Data Augmentation for Hoax Detection through the Method of Convolutional Neural Network in Indonesian News(Atik Zilziana Muflihati Noor, Rahmat Gernowo, O. Nurhayati, 2023, Jurnal Penelitian Pendidikan IPA)
- DeFaktS: A German Dataset for Fine-Grained Disinformation Detection through Social Media Framing(Shaina Ashraf, Isabel Bezzaoui, Ionut Andone, Alexander Markowetz, Jonas Fegert, Lucie Flek, 2024, No journal)
- Detection of Propaganda and Bias in Social Media: A Case Study of the Israel-Gaza War (2023)(Tasneem Duridi, Lour Atwe, Areej Jaber, Eman Daraghmi, Paloma Martínez, 2025, 2025 International Conference on New Trends in Computing Sciences (ICTCS))
- AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks(A. K. Pakina, M. Pujari, D. Kejriwal, Ashwin Sharma, 2025, International Journal Science and Technology)
- Sentiment and Objectivity in Iranian State-Sponsored Propaganda on Twitter(Michael Barrows, E. Haig, Dara Conduit, 2024, IEEE Transactions on Computational Social Systems)
- Fake or not? Automated detection of COVID-19 misinformation and disinformation in social networks and digital media(I. Alsmadi, N. Rice, Michael J. O'Brien, 2022, Computational and Mathematical Organization Theory)
- Advanced Machine Learning Techniques for Fake News (Online Disinformation) Detection: A Systematic Mapping Study(M. Choraś, K. Demestichas, Agata Giełczyk, Álvaro Herrero, Pawel Ksieniewicz, K. Remoundou, Daniel Urda, M. Woźniak, 2020, ArXiv)
- Fake News in Social Media – Classification and Case Studies as an Important Guide for Media Education(Gabriel Radko, 2025, Journal of Education, Technology and Computer Science)
- PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent(Jiateng Liu, Lin Ai, Zizhou Liu, Payam Karisani, Zheng Hui, May Fung, Preslav Nakov, Julia Hirschberg, Heng Ji, 2024, No journal)
Generative AI and Deepfakes: Weaponized Threats and Their Legal and Ethical Challenges
These papers examine how generative AI, especially audio-visual deepfakes, is being weaponized to produce highly persuasive false content. Topics include the persuasiveness of AI-generated propaganda, the use of deepfakes in election interference and war, the legal frameworks regulating these synthetic threats (India, Indonesia, the EU, and elsewhere), and the lag of forensic defense technologies behind generation capabilities.
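One concrete forensic signal discussed in this strand is that fluent LLM-generated text tends to score lower perplexity under a reference language model than typical human prose. The sketch below, assuming the Hugging Face transformers API and GPT-2 as the reference model, is illustrative only; no single statistic is a reliable detector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; suspiciously low values are one
    (weak) indicator of machine generation."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean token cross-entropy.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

print(f"perplexity = {perplexity('The committee said the policy takes effect next year.'):.1f}")
```

Production detectors in the cited work combine many such statistics with trained classifiers; the legal and policy papers in this group address what happens when even those lag behind generation quality.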
- Technologia deepfake w kampanii parlamentarnej w Polsce w 2023 roku(Ilona Dąbrowska, 2024, Studia Politologiczne)
- Immunizing the Public Against AI-Generated Disinformation: Testing the Effects of Inoculation Mode and Issue Attitude on Inoculation Likelihood of Political Deepfakes(Bingbing Zhang, Sang Jung Kim, Alex Scott, 2025, Journalism & Mass Communication Quarterly)
- Rosyjska dezinformacja i wykorzystanie obrazów generowanych przez sztuczną inteligencję (deepfake) w pierwszym roku inwazji na Ukrainę(A. Majchrzak, 2023, Media Biznes Kultura)
- False failures, real distrust: the impact of an infrastructure failure deepfake on government trust(Saifuddin Ahmed, Muhammad Masood, Adeline Wei Ting Bee, Kei Ichikawa, 2025, Frontiers in Psychology)
- How persuasive is AI-generated propaganda?(Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz, 2024, PNAS Nexus)
- Disinformation in the digital era: The role of deepfakes, artificial intelligence, and open-source intelligence in shaping public trust and policy responses(A. Y. Balogun, Adegbenga Ismaila Alao, O. Olaniyi, 2025, Computer Science & IT Research Journal)
- Deepfake Label Recall: Combating Disinformation with Labels is Especially Effective for Those Who Dislike the Speaker(William I. MacKenzie, Ryan Weber, Hannah M. Barr, Candice L. Lanius, N. Tenhundfeld, 2025, International Journal of Human–Computer Interaction)
- The future of online trust (and why Deepfake is advancing it)(H. Etienne, 2021, Ai and Ethics)
- Deepfake Technology and the Rise of Misinformation(Vamsi Koneti, 2025, XRDS: Crossroads, The ACM Magazine for Students)
- DeepFake Deception: A Comprehensive Analysis of DeepFake Technology and its Effects on Ethics, Politics and Society(Tanvi S Achyut, 2023, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- AI Propaganda factories with language models(Lukasz Olejnik, 2025, ArXiv)
- A Step Towards Modern Disinformation Detection: Novel Methods for Detecting LLM-Generated Text(Samuel Nathanson, Yungjun Yoo, David Na, Yinzhi Cao, Lanier A. Watkins, 2024, MILCOM 2024 - 2024 IEEE Military Communications Conference (MILCOM))
- Beyond the deepfake problem: Benefits, risks and regulation of generative AI screen technologies(Anna Broinowski, Fiona R. Martin, 2024, Media International Australia)
- DEEPFAKE, PROPAGANDA, DISINFORMATION: IS THERE A DIFFERENCE, AND HOW LAW ENFORCEMENT CAN DEAL WITH IT(2022, Public Security and Public Order)
- Combatting Disinformation and Deepfake: Interdisciplinary Insights and Global Strategies(A. Hooda, Mehul Kumar, 2024, TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON))
- AI-generated Images of Ukrainian Soldiers as a Tool for Media Manipulation in the Context of the Russo-Ukrainian War(Maryna Poliakova, I. Savchuk, Ihor Shalinskyi, S. Berdynskykh, I. Yatsyk, 2025, Journal of Cultural Analysis and Social Change)
- Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News(Cristian Vaccari, A. Chadwick, 2020, Social Media + Society)
- Deepfake como estrategia para desinformar en las redes sociales durante las campañas electorales en Ecuador(M. Romero, Analía Elizabeth Mera Cedeño, Manuel José Illicachi Guzñay, Rómulo Arteño Ramos, 2025, Revista de Ciencias Sociales)
- Generative Propaganda(Madeleine I. G. Daepp, Alejandro Cuevas, Robert Osazuwa Ness, Victoria Wang, Bharat Nayak, Dibyendu Mishra, Ti-Chung Cheng, Shaily Desai, Joyojeet Pal, 2025, ArXiv)
- The Social Harms of AI-Generated Fake News: Addressing Deepfake and AI Political Manipulation(L. Sophia, 2025, Digital Society & Virtual Governance)
- Deepfake as an innovative tool of political manipulation in the early XXI century(Konstantin V. Starostenko, A. S. Konovalov, 2025, Izvestiya of Saratov University. Sociology. Politology)
- From Open-Source to Primetime: The Making of an AI News Anchor and its Role in the New Landscape of Disinformation(Matyáš Boháček, 2024, Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation)
- Deepfake Technology: Emerging Threats and Security Implications(William Easttom, 2025, International Conference on Cyber Warfare and Security)
- Deepfake as an Advanced Manipulative Technique for Spreading Propaganda(M. Havlík, 2023, Vojenské rozhledy)
- Media coverage of DeepFake disinformation: An analysis of three South-Asian countries(Ahmed Shafkat Sunvy, R. Reza, Abdullah Al Imran, 2024, Informasi)
- Slopaganda: The interaction between propaganda and generative AI(Michal Klincewicz, Mark Alfano, A. E. Fard, 2025, ArXiv)
- AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices Across Brazil, Germany, and the United Kingdom(Regina Cazzamatta, Aynur Sarısakaloğlu, 2025, Emerging Media)
- Forensic countering of deepfake disinformation(N. F. Bodrov, A. K. Lebedeva, 2024, Союз криминалистов и криминологов)
- From Disinformation to Manipulation: Tackling Deepfakes Through Law and Technology(Unmesh Mandal, S. Setua, Sayan Sen Sarma, 2025, 2025 Conference on Building a Secure & Empowered Cyberspace (BuildSEC))
- Deepfake Laws in India : A Critical Analysis(Vishakha Periwal, 2025, International Journal For Multidisciplinary Research)
- Legal Implications of the Use of Deepfake in Politics and National Security in Indonesia(Nestia Lianingsih, Alim Jaizul, 2025, International Journal of Humanities, Law, and Politics)
- Deepfake content in political communication(L. Szabo, Simona Bader, 2026, Convergence: The International Journal of Research into New Media Technologies)
- Artificial Intelligence and Political Deepfakes: Shaping Citizen Perceptions Through Misinformation(Mina Momeni, 2024, Journal of Creative Communications)
- Legal Aspects of Using Deepfake in Political Campaigns: A Threat to Democracy?(Mugi Lestari, Riza A Ibrahim, 2025, International Journal of Humanities, Law, and Politics)
- Deepfake Technology: A Innovation and Threat(Dr Pushparani MK, Shrilaxmi Bhat, Shreedhanya B, Pavithra, Maibam Yoihenba Meitei, 2025, International Research Journal on Advanced Engineering Hub (IRJAEH))
- Beyond Credibility: The Effects of Different Forms of Visual Disinformation(Teresa Weikmann, J. Egelhofer, Sophie Lecheler, 2025, Journalism & Mass Communication Quarterly)
- AI Threats to Politics, Elections, and Democracy: A Blockchain-Based Deepfake Authenticity Verification Framework(M. B. E. Islam, Muhammad Haseeb, Hina Batool, Nasir Ahtasham, Zia Muhammad, 2024, Blockchains)
- Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy(Kokil Jaidka, Tsuhan Chen, Simon Chesterman, W. Hsu, Min-Yen Kan, Mohan Kankanhalli, Mong Li Lee, Gyula Seres, Terence Sim, Araz Taeihagh, Anthony K. H. Tung, Xiaokui Xiao, Audrey Yue, 2024, Digital Government: Research and Practice)
- Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign(Morgan Wack, Carl Ehrett, Darren Linvill, Patrick Warren, 2025, PNAS Nexus)
- LLMs Solution to Fake News, Disinformation, and Hoaxes: Llama 3 [70B]-based Hoax Detection and Counteraction System(Adi Jufriansah, Y. Pramudya, Azmi Khusnani, Edwin Ariesto Umbu Malahina, 2025, Journal of Novel Engineering Science and Technology)
- Generative AI and misinformation: a scoping review of the role of generative AI in the generation, detection, mitigation, and impact of misinformation(Seyeon Park, Xiaoli Nan, 2025, AI & SOCIETY)
State-Sponsored Cognitive Warfare and Coordinated Inauthentic Behavior (CIB)
This literature analyzes highly organized, cross-platform information operations launched by state actors (Russia, China, Iran, and others). The research covers bot networks, troll accounts, attribution of coordinated inauthentic behavior (CIB), and how these operations manipulate social-platform agendas to interfere in foreign elections and geopolitics.
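A recurring detection primitive in this group is co-sharing analysis: accounts that post near-identical sets of links or messages within a window are candidates for coordination. A minimal sketch follows; the account names, URL sets, and the 0.7 similarity threshold are invented for illustration.

```python
from itertools import combinations

# account -> set of URLs it shared within some observation window (toy data)
shares = {
    "acct_a": {"u1", "u2", "u3", "u4"},
    "acct_b": {"u1", "u2", "u6", "u7"},
    "acct_c": {"u8", "u9"},
    "acct_d": {"u1", "u2", "u3", "u4"},
}

def jaccard(s1: set, s2: set) -> float:
    return len(s1 & s2) / len(s1 | s2)

THRESHOLD = 0.7  # illustrative; real studies calibrate against known operations
flagged = [
    (a, b, jaccard(shares[a], shares[b]))
    for a, b in combinations(shares, 2)
    if jaccard(shares[a], shares[b]) >= THRESHOLD
]
print(flagged)  # -> [('acct_a', 'acct_d', 1.0)]
```

The network-analysis papers above extend this idea by building user-user graphs from such similarities, then clustering or thresholding them to expose whole coordinated communities across platforms.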
- Multifaceted online coordinated behavior in the 2020 US presidential election(S. Tardelli, Leonardo Nizzoli, M. Avvenuti, S. Cresci, Maurizio Tesconi, 2024, EPJ Data Science)
- A Consumer Vulnerability Perspective on State-Sponsored Propaganda(Shawn Enriques, Mark Peterson, 2024, Journal of Macromarketing)
- Unraveling the Web of Disinformation: Exploring the Larger Context of State-Sponsored Influence Campaigns on Twitter(Mohammad Hammas Saeed, Shiza Ali, Pujan Paudel, Jeremy Blackburn, Gianluca Stringhini, 2024, Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses)
- Behavior-Based Machine Learning Approaches to Identify State-Sponsored Trolls on Twitter(S. Alhazbi, 2020, IEEE Access)
- The web of Big Lies: state-sponsored disinformation in Iran(Shahram Akbarzadeh, Amin Naeni, Galib Bashirov, Ihsan Yilmaz, 2024, Contemporary Politics)
- Who Let The Trolls Out?: Towards Understanding State-Sponsored Trolls(Savvas Zannettou, T. Caulfield, William Setzer, Michael Sirivianos, G. Stringhini, Jeremy Blackburn, 2018, Proceedings of the 10th ACM Conference on Web Science)
- New Naval Strategy, Not Cyberwar: China’s State-Sponsored Maritime Cyber Operations(F. Ferazza, Konstantinos Mersinas, 2026, International Conference on Cyber Warfare and Security)
- Coordinated Behavior on Social Media in 2019 UK General Election(Leonardo Nizzoli, S. Tardelli, M. Avvenuti, S. Cresci, Maurizio Tesconi, 2020, ArXiv)
- Coordinated Activity Modulates the Behavior and Emotions of Organic Users: A Case Study on Tweets about the Gaza Conflict(Priyanka Dey, Luca Luceri, Emilio Ferrara, 2024, Companion Proceedings of the ACM Web Conference 2024)
- Automatic detection of influential actors in disinformation networks(S. Smith, E. Kao, Erika Mackin, Danelle C. Shah, O. Simek, D. Rubin, 2020, Proceedings of the National Academy of Sciences)
- Use of Facebook Accounts with Inauthentic Behavior in Elections: The Romanian Presidential Election Case(Bogdan Oprea, 2024, Romanian Journal of Communication and Public Relations)
- ‘Telling China’s Story Well’ as propaganda campaign slogan: International, domestic and the pandemic(Jian Xu, Q. Gong, 2024, Media, Culture & Society)
- Decentralized Propaganda in the Era of Digital Media: The Massive Presence of the Chinese State on Douyin(Yingda Lu, Jennifer Pan, Xu Xu, Yiqing Xu, 2025, SSRN Electronic Journal)
- Participatory propaganda and the intentional (re)production of disinformation around international conflict(Dmitry Chernobrov, 2025, Critical Studies in Media Communication)
- Mapping state-sponsored information operations with multi-view modularity clustering(Joshua Uyheng, Iain J. Cruickshank, K. Carley, 2022, Epj Data Science)
- Towards Norms for State Responsibilities regarding Online Disinformation and Influence Operations(Brett van Niekerk, Trishana Ramluckan, 2023, European Conference on Cyber Warfare and Security)
- Explaining Russian state-sponsored disinformation campaigns: who is targeted and why?(Brandon Stewart, Shelby Jackson, John Ishiyama, Michael C Marshall, 2024, East European Politics)
- Working Together (to Undermine Democratic Institutions): Challenging the Social Bot Paradigm in SSIO Research(Cole Polychronis, Marina Kogan, 2023, Proceedings of the ACM on Human-Computer Interaction)
- Beyond detection: How Serbia's SNS party mimics authentic support through coordinated inauthentic behaviour(Ana Jovanovic-Harrington, Alessio Cornia, 2026, European Journal of Communication)
- Uncovering coordinated cross-platform information operations: Threatening the integrity of the 2024 U.S. presidential election(Marco Minici, Federico Cinus, Luca Luceri, Emilio Ferrara, 2024, First Monday)
- Did State-sponsored Trolls Shape the US Presidential Election Discourse? Quantifying Influence on Twitter(Nikos Salamanos, Michael J. Jensen, Xinlei He, Yang Chen, Costas Iordanou, Michael Sirivianos, 2020, ArXiv)
- Unmasking Coordination: How Inauthentic Behavior Emerged and Diffusion During the Russia–Ukraine War on Twitter(Yanhong Wu, Jianqiang Yu, 2026, Social Science Computer Review)
- On the Detection of Disinformation Campaign Activity with Network Analysis(Luis Vargas, Patrick Emami, Patrick Traynor, 2020, Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop)
- Analyzing digital propaganda and conflict rhetoric: a study on Russia’s bot-driven campaigns and counter-narratives during the Ukraine crisis(Rebecca Marigliano, L. Ng, K. Carley, 2024, Social Network Analysis and Mining)
- Do Bots Do It Better? Analyzing the Effectiveness of Automated Agents in State-Sponsored Information Operations(Cole Polychronis, Marina Kogan, 2025, No journal)
- Uncovering Coordinated Cross-Platform Information Operations Threatening the Integrity of the 2024 U.S. Presidential Election Online Discussion(Marco Minici, Luca Luceri, Federico Cinus, Emilio Ferrara, 2024, ArXiv)
- Coordinated Behavior in Information Operations on Twitter(Lorenzo Cima, Lorenzo Mannocci, M. Avvenuti, Maurizio Tesconi, S. Cresci, 2024, IEEE Access)
- Disinformation Warfare: Understanding State-Sponsored Trolls on Twitter and Their Influence on the Web(Savvas Zannettou, T. Caulfield, Emiliano De Cristofaro, Michael Sirivianos, G. Stringhini, Jeremy Blackburn, 2018, Companion Proceedings of The 2019 World Wide Web Conference)
- TrollSleuth: Behavioral and Linguistic Fingerprinting of State-Sponsored Trolls(H. A. Noughabi, Fattane Zarrinkalam, Abbas Yazdinejad, Ali Dehghantanha, 2025, 2025 22nd Annual International Conference on Privacy, Security, and Trust (PST))
- Exposing influence campaigns in the age of LLMs: a behavioral-based AI approach to detecting state-sponsored trolls(Fatima Ezzeddine, Luca Luceri, Omran Ayoub, Ihab Sbeity, Gianluca Nogara, Emilio Ferrara, Silvia Giordano, 2022, Epj Data Science)
- Talking to Trolls - How Users Respond to a Coordinated Information Operation and Why They're So Supportive(Darren L. Linvill, Patrick L. Warren, A. Moore, 2021, J. Comput. Mediat. Commun.)
- Coordinated inauthentic behavior: An innovative manipulation tactic to amplify COVID-19 anti-vaccine communication outreach via social media(M. Murero, 2023, Frontiers in Sociology)
- An Analysis of Social Bot Activity on X in Modern Japan(Shuhei Ippa, Takao Okubo, Masaki Hashimoto, 2023, IEEE Access)
- Exposing Cross-Platform Coordinated Inauthentic Activity in the Run-Up to the 2024 U.S. Election(Federico Cinus, Marco Minici, Luca Luceri, Emilio Ferrara, 2024, Proceedings of the ACM on Web Conference 2025)
- Toxicity in State Sponsored Information Operations(Ashfaq Ali Shafin, Khandaker Mamun Ahmed, 2025, Proceedings of the 36th ACM Conference on Hypertext and Social Media)
- SoK: False Information, Bots and Malicious Campaigns: Demystifying Elements of Social Media Manipulations(Mohammad Majid Akhtar, Rahat Masood, M. Ikram, S. Kanhere, 2023, Proceedings of the 19th ACM Asia Conference on Computer and Communications Security)
- Disinformation Echo-Chambers on Facebook(Mathias-Felipe de-Lima-Santos, Wilson Ceron, 2023, ArXiv)
- The Anatomy of Disinformation Networks: A Hybrid Graph-Based Framework for Echo Chamber Detection in the Italian Twitter/X Sphere(D. Pasquini, P. Vocca, Gianni Amati, 2025, 2025 12th International Conference on Social Networks Analysis, Management and Security (SNAMS))
Social Media Dynamics, Visual Communication, and Meme Politics
This group examines the propagation mechanics of non-textual and multimodal information, including visual memes, short video (TikTok), visual framing on Instagram, and the amplification of moral emotion and polarization by social media algorithms. The studies show how visual symbols bypass audiences' rational defenses by triggering emotional mobilization and capturing attention.
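The moral-contagion findings in this group follow a simple analytical pattern: count moral-emotional words in a post and relate that count to its sharing. A toy illustration follows, with a fabricated lexicon and share counts; the published studies use validated dictionaries and count-regression models fitted on millions of posts.

```python
# Fabricated moral-emotional lexicon and posts, for illustration only.
MORAL_EMOTIONAL = {"shameful", "evil", "fight", "betray", "hate", "destroy"}

posts = [
    ("new budget figures released today", 12),
    ("this shameful vote must not stand", 85),
    ("they want to destroy everything we fight for", 140),
    ("committee schedules hearing for march", 9),
]

def moral_emotional_count(text: str) -> int:
    return sum(word in MORAL_EMOTIONAL for word in text.split())

for text, share_count in posts:
    print(f"moral-emotional words: {moral_emotional_count(text)}  shares: {share_count}")
# A real analysis regresses share counts on this feature with controls
# (follower count, topic, account type) rather than eyeballing four posts.
```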
- Visual political communication on Instagram: a comparative study of Brazilian presidential elections(Mathias-Felipe de-Lima-Santos, Isabella Gonçalves, M. Quiles, Lucia Mesquita, Wilson Ceron, Maria Clara Couto Lorena, 2024, EPJ Data Science)
- Pics or it didn’t happen! EU institutions’ visual communication and user engagement on Facebook and Instagram(Olga Eisele, Tobias Heidenreich, Phoebe Maares, 2026, European Journal of Political Research)
- Self-Presentation Through Visual Framing in Political Communication of Candidates in Elections on Social Media(Muhammad Fauzi Fitri Andika, Eriyanto Eriyanto, Kusariani Adinda Saraswati, Tutik Wijayanti, 2025, South Sight: Journal of Media and Society Inquiry)
- Populist Visual Communication: A State-of-the-Art Review(Francesco Melito, Mattia Zulianello, 2024, Political Studies Review)
- Far‐right boundary construction towards the “other”: Visual communication of Danish People’s Party on social media(Sarah Awad, Nicole Doerr, Anita Nissen, 2022, The British Journal of Sociology)
- Women Under Hindutva: Misogynist Memes, Mock-Auction and Doxing, Deepfake-Pornification and Rape Threats in Digital Space(Rishiraj Sen, S. Jha, 2024, Journal of Asian and African Studies)
- Visual aspects of online political communication in Belarus during the political crisis of 2020(K. Zuykina, 2023, RUDN Journal of Studies in Literature and Journalism)
- Visual Political Communication in a Polarized Society: A Longitudinal Study of Brazilian Presidential Elections on Instagram(Mathias-Felipe de-Lima-Santos, Isabella Gonçalves, M. G. Quiles, Lucia Mesquita, Wilson Ceron, 2023, ArXiv)
- Visual Gender Stereotyping in Campaign Communication: Evidence on Female and Male Candidate Imagery in 28 Countries(Marc Jungblut, Mario Haim, 2021, Communication Research)
- Visual Political Communication of Competing Leadership: Italy’s 2024 European Election Campaign on Social Media(E. Novelli, Christian Ruggiero, Marco Solaroli, 2025, Media and Communication)
- Political Propaganda Posters and Their Influence on Public Opinion: Visual Rhetoric and Political Communication(Islam Ghandi Almomani, S. Alkhateeb, 2025, Journal of Cultural Analysis and Social Change)
- Attentional capture helps explain why moral and emotional content go viral.(W. Brady, Ana P. Gantman, Jay J. Van Bavel, 2019, Journal of experimental psychology. General)
- Visual communication has always been political(G. Aiello, 2023, Journal of Visual Political Communication)
- Propaganda to Hate: A Multimodal Analysis of Arabic Memes with Multi-Agent LLMs(Firoj Alam, Md. Rafiul Biswas, Uzair Shah, W. Zaghouani, Georgios Mikros, 2024, ArXiv)
- Visual representations of wealth inequality in political communication(Michael Vaughan, S. Kerr, 2025, Visual Communication)
- Visual Communication Strategy in Parody Content: Lessons from the GUSDURian Network(Muhamad Lutfi Habibi, Aida Husna Rahmadani, Putri Damayanti, 2025, VCD)
- Reifying subaltern voices: a visual communication and figurative discourse of headloading practices in Nigeria(T. Morgan, 2023, Humanities and Social Sciences Communications)
- Computational Visual Analysis in Political Communication(Yilang Peng, Yingdan Lu, 2023, SSRN Electronic Journal)
- Visual political communication research: A literature review from 2012 to 2022(Xénia Farkas, 2023, Journal of Visual Political Communication)
- Six Years of European Visual Climate Activism: A Longitudinal Analysis of Fridays for Future and Extinction Rebellion’s Online Visual Communication(Azzuppardi Costanza, Doerr Nicole, M. Langa, Matteo Magnani, Oross Dániel, Luca Rossi, Alexandra Segerberg, Katrin Uba, Luigi Arminio, 2026, AoIR Selected Papers of Internet Research)
- “Picturing” Xenophobia: Visual Framing of Masks During COVID-19 and Its Implications for Advocacy in Technical Communication(T. Batova, 2021, Journal of Business and Technical Communication)
- Banana Populism: Exploring the Emotionally Engaging, Authentic, and Memeable Rhetoric of Populist Visual Communication(Zea Szebeni, Ilana Hartikainen, Sophie Schmalenberger, Michael Cole, 2025, Social Media + Society)
- Visual Cues to the Hidden Agenda: Investigating the Effects of Ideology-Related Visual Subtle Backdrop Cues in Political Communication(Viorela Dan, F. Arendt, 2020, The International Journal of Press/Politics)
- Visual Culture, Personalization, and Politics: A Comparative Analysis of Political Leaders’ Instagram-Based Image-Making and Communication in Spain and India(C. Navarro, Deepti Ganapathy, Vincent Raynauld, 2023, International Journal of Strategic Communication)
- The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine(Marcus Bösch, Tom Divon, 2024, New Media & Society)
- Video killed the Instagram star: The future of political communication is audio-visual(Franziska Marquart, 2023, Journal of Visual Political Communication)
- Crowds and Smiles: Visual Opportunity Structures and the Communication of European Political Leaders During the COVID-19 Pandemic(Moreno Mancosu, Gaetano Scaduto, 2024, Mass Communication and Society)
- Utilization of Visual Communication ‘Nice Photos’ as Political Communication Media in Increasing Public Participation (Case Study of West Java DPD Candidate Komeng)(Nanda Dwi Rizkia, Euis Komalawati, 2024, Indonesian Journal of Contemporary Multidisciplinary Research)
- CONSTRUCTING POLITICAL IMAGES THROUGH SOCIAL MEDIA: A VISUAL COMMUNICATION PERSPECTIVE(Anshul Garg, Chanchal Sachdeva Suri, 2026, ShodhKosh: Journal of Visual and Performing Arts)
- Measuring populist style in visual communication:(Xénia Farkas, M. Bene, 2025, Intersections)
- Exploring viewers’ visual attention and emotional responses to populist communication: A laboratory study of the Finns Party leader’s strategies on Instagram and TikTok(Jenny Lindholm, Jesper Eklund, Tom Carlson, Kim Strandberg, Joachim Högväg, 2026, Journal of Visual Political Communication)
- Jihadist visual communication strategy: ISIL’s hostage executions video production(Alexandra Herfroy-Mischler, A. Barr, 2018, Visual Communication)
- Visual warfare and strategic communication: Case studies from Ukraine, Israel, and India(Kumar Panda Jayanta, 2025, Journal of Media and Communication Studies)
- Viral Justice: TikTok Activism, Misinformation, and the Fight for Social Change in Southeast Asia(Nuurrianti Jalli, 2025, Social Media + Society)
- Estimating the effect size of moral contagion in online networks: A pre-registered replication and meta-analysis(William J. Brady, Steve Rathje, Laura K. Globig, Jay J. Van Bavel, 2025, PNAS Nexus)
- Diffusion of disinformation: How social media users respond to fake news and why(Edson C. Tandoc, D. Lim, Rich Ling, 2020, Journalism)
- Fighting False Information from Propagation Process: A Survey(Ling Sun, Y. Rao, Lianwei Wu, Xiangbo Zhang, Yuqian Lan, Ambreen Nazir, 2022, ACM Computing Surveys)
- Topology comparison of Twitter diffusion networks reliably reveals disinformation news(Francesco Pierri, C. Piccardi, S. Ceri, 2019, ArXiv)
Audience Psychology, Cognitive Bias, and Media Literacy Education
This literature probes the internal mechanisms behind why people believe and share false information, covering dual-process models of thinking (intuitive vs. deliberative), identity, belief bias, and congruence with political stance. It also evaluates the effectiveness of inoculation theory, fact-checking, corrective interventions, and media literacy training aimed at susceptible groups such as older adults.
- Social Media Misinformation and Voting Intentions: Older Adults' Experiences with Manipulative Narratives(Filipo Sharevski, Jennifer Vander Loop, Sanchari Das, 2025, Proceedings of the ACM on Human-Computer Interaction)
- Overcoming the Age Barrier: Improving Older Adults’ Detection of Political Disinformation With Media Literacy(Charo Sádaba, Ramón Salaverría, X. Bringué, 2023, Media and Communication)
- Updating the identity-based model of belief: From false belief to the spread of misinformation.(Jay J. Van Bavel, Steve Rathje, Madalina Vlasceanu, Clara Pretus, 2024, Current opinion in psychology)
- Rethinking education and training to counter AI-enhanced disinformation and information manipulations in Europe: a Delphi study(Cristina M. Arribas, Rubén Arcos, Manuel Gértrudix, 2025, Cogent Social Sciences)
- Synthetic disinformation detection among German information elites – Strategies in politics, administration, journalism, and business(Nils Vief, Marcus Bösch, Saïd Unger, Johanna Klapproth, Svenja Boberg, Thorsten Quandt, Christian Stöcker, 2025, Studies in Communication and Media)
- Building Resilience Against Hostile Information Influence Activities: How a New Media Literacy Learning Platform Was Developed for the Estonian Defense Forces(A. Ventsel, Sten Hansson, Merit Rickberg, Mari-Liis Madisson, 2023, Armed Forces & Society)
- Prominent misinformation interventions reduce misperceptions but increase scepticism(Emma Hoes, Brian Aitken, Jingwen Zhang, Tomasz Gackowski, Magdalena Wojcieszak, 2024, Nature Human Behaviour)
- How Do Individual and Societal Factors Shape News Authentication? Comparing Misinformation Resilience Across Hong Kong, the Netherlands, and the United States(Qinfeng Zhu, Tai-Quan Peng, Xinzhi Zhang, 2025, The International Journal of Press/Politics)
- Investigating the use of belief-bias to measure acceptance of false information(Robert Thomson, William Frangia, 2024, Computational and Mathematical Organization Theory)
- What causes reticence in publicly correcting false information online? A case study from the Philippines(Jessica Asprer, Eleanor Marie Escalante, Jeremaiah M. Opiniano, 2025, Romanian Journal of Communication and Public Relations)
- A Warning from Above: How Authoritarian Anti-Protest Propaganda Works(Minh Trinh, Mai Truong, 2025, World Politics)
- Correcting False Information: Journalistic Coverage During the 2016 and 2020 US Elections(Clara Juarez Miro, Jonathan Anderson, 2023, Journalism Studies)
- Understanding and combatting misinformation across 16 countries on six continents(A. Arechar, Jennifer Allen, Adam J. Berinsky, R. Cole, Ziv Epstein, Kiran Garimella, Andrew Gully, Jackson G. Lu, R. Ross, M. Stagnaro, Yunhao Zhang, Gordon Pennycook, David G. Rand, 2023, Nature Human Behaviour)
- Think Fast, Think Slow, Think Critical: Designing an Automated Propaganda Detection Tool(Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones, Gerhard Schwabe, 2024, Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems)
- Who Falls for Misinformation and Why?(Tyler J Hubeny, Lea S. Nahon, Nyx L. Ng, Bertram Gawronski, 2025, Personality & social psychology bulletin)
- Believing the Untrue: How Social Media, Sexism, and Structural Gender Inequality Influence Misinformation About Women Politicians(Saifuddin Ahmed, Muhammad Masood, Adeline Bee Wei Ting, 2025, Journalism & Mass Communication Quarterly)
- “It’s All Fake News!”: How Perceptions of Misinformation and Disinformation Influence News Consumption Across Traditional Media, Social Media, and AI(M. E. Rasul, Christopher Calabrese, Yoo Jung Oh, Hee Jung Cho, Moonsun Jeon, M. Boukes, 2025, Journalism & Mass Communication Quarterly)
- 4 Reasons Why Social Media Make Us Vulnerable to Manipulation(F. Menczer, 2020, Proceedings of the 14th ACM Conference on Recommender Systems)
- From Podcasts to Protests: Examining the Influence of Podcasts and Misinformation on Contentious Political Participation(M. E. Rasul, Saifuddin Ahmed, Jaeho Cho, T. Gil-López, 2025, Journal of Broadcasting & Electronic Media)
- Why misinformation must not be ignored.(Ullrich K. H. Ecker, L. Tay, J. Roozenbeek, S. van der Linden, J. Cook, Naomi Oreskes, Stephan Lewandowsky, 2024, The American psychologist)
- Misinformation poses a bigger threat to democracy than you might think(Ullrich K. H. Ecker, J. Roozenbeek, Sander van der Linden, L. Tay, John Cook, Naomi Oreskes, Stephan Lewandowsky, 2024, Nature)
- Doctors for the Truth: Echo Chambers of Disinformation, Hate Speech, and Authority Bias on Social Media(Joana Milhazes-Cunha, Luciana Oliveira, 2023, Societies)
Integrated Governance Frameworks, Fact-Checking, and Building Societal Resilience
This group takes a macro view of defense and response mechanisms, examining game-theoretic control models, blockchain-based provenance, crowdsourced fact-checking (e.g., Twitter Community Notes), national security strategies, and legal regulation and platform-accountability mechanisms that avoid infringing on freedom of expression.
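The game-theoretic control papers in this group typically model spread with compartmental dynamics and treat refutation spending as a control variable. Below is a minimal sketch under assumed, illustrative parameters: an SIR-style model in which a refutation rate u moves believers into a corrected state.

```python
# Susceptible / believing / corrected population fractions (assumed values).
S, I, R = 0.99, 0.01, 0.0
beta = 0.35   # transmission rate of the false narrative (illustrative)
u = 0.10      # refutation (debunking) effort, the control variable
dt, steps = 0.1, 600

for _ in range(steps):  # forward-Euler integration of the ODEs
    new_believers = beta * S * I
    corrected = u * I
    S -= dt * new_believers
    I += dt * (new_believers - corrected)
    R += dt * corrected

print(f"final believers: {I:.3f}, corrected: {R:.3f}")
```

Raising u suppresses the believer peak at a cost; the cited differential-game models choose u(t) to minimize a combined spread-plus-budget objective, sometimes with several players (government, opinion leaders, platforms) optimizing simultaneously.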
- Preventing online disinformation propagation: Cost-effective dynamic budget allocation of refutation, media censorship, and social bot detection.(Yi Wang, Shicheng Zhong, Guo Wang, 2023, Mathematical biosciences and engineering : MBE)
- Optimal Control of False Information Clarification System under Major Emergencies Based on Differential Game Theory(Bowen Li, Hua Li, Qiubai Sun, Rongjian Lv, 2022, Computational Intelligence and Neuroscience)
- Fake News, Disinformation, and Deepfakes: Leveraging Distributed Ledger Technologies and Blockchain to Combat Digital Deception and Counterfeit Reality(Paula Fraga-Lamas, T. Fernández-Caramés, 2019, IT Professional)
- Policy Review: Countering Disinformation in the Digital Age - Policies and Initiatives to Safeguard Democracy in Europe(Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia, 2025, Information Polity)
- Reversing the Privatisation of the Public Sphere: Democratic Alternatives to the EU’s Regulation of Disinformation(Á. Oleart, J. Rone, 2025, Media and Communication)
- Countering AI-powered disinformation through national regulation: learning from the case of Ukraine(Anatolii Marushchak, S. Petrov, Anayit Khoperiya, 2025, Frontiers in Artificial Intelligence)
- Artificial Intelligence Regulation in the Protection of Democracy: A Legal Analysis of Political Deepfakes and Disinformation in the 2024 Election(Andi Nur Azizah Ardan Paliwang, Ni Luh Putu Erika Swandiani, 2025, Hakim: Jurnal Ilmu Hukum dan Sosial)
- A Study on the Risks and Countermeasures of False Information Caused by AIGC(Taoye Wang, Li Li, Xiang Chen, Kun Li, 2024, Journal of Electrical Systems)
- Disinformation Dynamics and Regulation in Portugal: Insights from a Qualitative Study(Inês Sousa Guedes, Beatriz Vigo, Margarida A. Santos, S. Moreira, Carla Sofia Cardoso, J. Castro, 2025, European Journal on Criminal Policy and Research)
- JOINT EFFORTS OF THE MEDIA, CIVIL SOCIETY AND THE STATE TO COUNTER RUSSIAN DISINFORMATION(G. Piskorska, D. Ryzhova, Anatoly Yakovets, 2023, International Journal of Innovative Technologies in Social Science)
- Digital literacy and technopolitics, core enablers in a disintermediated digital political communication age(Ana Pérez-Escoda, M. Freire, 2023, El Profesional de la información)
- Fighting misinformation on social media using crowdsourced judgments of news source quality(Gordon Pennycook, David G. Rand, 2019, Proceedings of the National Academy of Sciences of the United States of America)
- Beyond online disinformation: assessing national information resilience in four European countries(Marius Dragomir, J. Rúas-Araújo, M. Horowitz, 2024, Humanities and Social Sciences Communications)
- Enhancing Self-Perceived Disinformation Identification in Democracy: The Impact of Fact-Checking Integration into Daily News Consumption Practices(M. Goyanes, Sangwon Lee, Susana Salgado, Homero Gil de Zúñiga, 2025, Journal of Broadcasting & Electronic Media)
- Can Crowdchecking Curb Misinformation? Evidence from Community Notes(Yang Gao, M. Zhang, Huaxia Rui, 2025, Information Systems Research)
- Effective Yet Ephemeral Propaganda Defense: There Needs to Be More than One-Shot Inoculation to Enhance Critical Thinking(Nicolas Hoferer, Kilian Sprenkamp, Dorian Quelle, Daniel Gordon Jones, Zoya Katashinskaya, Alexandre Bovet, Liudmila Zavolokina, 2025, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- Leveraging Blockchain for Disinformation Mitigation: A Comprehensive Approach to Enhancing Content Authenticity in Social Media(Alparslan Sari, Sena B Ceylan, Omer K Vural, Adnan Ozsoy, 2025, 2025 IEEE International Conference on Consumer Electronics (ICCE))
- Resilience to Online Disinformation: A Framework for Cross-National Comparative Research(Edda Humprecht, F. Esser, Peter van Aelst, 2020, The International Journal of Press/Politics)
- Research on false information clarification mechanism among government, opinion leaders, and Internet users — Based on differential game theory(Bowen Li, Hua Li, Qiubai Sun, Rongjian Lv, 2022, Frontiers in Psychology)
- Impulse Strategies for Suppressing Cyber Propaganda With Awareness(Xiaojuan Cheng, Lu-Xing Yang, Qingyi Zhu, Chenquan Gan, Gang Li, 2025, IEEE Transactions on Computational Social Systems)
- Sustaining Exposure to Fact-Checks: Misinformation Discernment, Media Consumption, and Its Political Implications(Jeremy Bowles, K. Croke, Horacio Larreguy, Shelley Liu, J. Marshall, 2025, American Political Science Review)
- Social media trust: Fighting misinformation in the time of crisis(M. Shahbazi, Deborah Bunker, 2024, Int. J. Inf. Manag.)
- Confucian free expression and the threat of disinformation(D. Elstein, 2022, Philosophy & Social Criticism)
Domain-Specific Evidence: Health, Climate, and Regional Political Crises
These papers provide empirical studies of how false information manifests in specific, socially sensitive domains, with emphasis on COVID-19 information flows, climate change denial, electoral conflicts in particular countries (Romania, Brazil, Nigeria, Pakistan, and others), and the information confrontation in the Russo-Ukrainian war.
- Public health disinformation, conflict, and disease outbreaks: a global narrative integrative review to guide new directions for health diplomacy(Laura E. R. Peters, G. E. Charnley, Stephen Roberts, Ilan Kelman, 2025, Global Health Action)
- COVID vaccine misinformation: Toward an integrated approach for predicting the cascade of disinformation(Dylan Swinford, Amir Zadeh, 2025, Information Services and Use)
- Climate Change Disinformation on Social Media: A Meta-Synthesis on Epistemic Welfare in the Post-Truth Era(E. Essien, 2025, Social Sciences)
- Unravelling the infodemic: a systematic review of misinformation dynamics during the COVID-19 pandemic(Sudip Bhattacharya, Alok Singh, 2025, Frontiers in Communication)
- Information Pandemic: A Critical Review of Disinformation Spread on Social Media and Its Implications for State Resilience(Dwi Surjatmodjo, A. Unde, Hafied Cangara, Alem Febri Sonni, 2024, Social Sciences)
- The COVID-19 Social Media Infodemic Reflects Uncertainty and State-Sponsored Propaganda(David A. Broniatowski, Daniel Kerchner, Fouzia Farooq, Xiaolei Huang, Amelia M. Jamison, M. Dredze, S. Quinn, 2020, ArXiv)
- Climate and energy misinformation in Taiwan(J. C. Liu, Chia-Fen Lee, 2025, Frontiers in Communication)
- RUSSIAN DISINFORMATION CAMPAIGN IN ROMANIA: QUO VADIS, KREMLIN?(I. Maksymenko, 2023, International and Political Studies)
- Manipulation During the French Presidential Campaign: Coordinated Inauthentic Behaviors and Astroturfing Analysis on Text and Images(Victor Chomel, Maziyar Panahi, David Chavalarias, 2022, No journal)
- #EndSARS and the Digital Public Sphere: Investigating the Intersection of User-Generated Content and Social Media Disinformation(T. Francis, 2025, Media i Społeczeństwo)
- Propaganda and Ideology in the Russian–Ukrainian War(Edith Manalachioaei, 2025, Europe-Asia Studies)
- How Disinformation Can Influence a Nation: The Case of Romania(Stefano Lovi, 2025, Studia i Analizy Nauk o Polityce)
- How False Information Is Used Against Sexual and Gender Minorities and What We Can Do About It(S. Shuster, Grant Bunn, Kenneth Joseph, Celeste Campos-Castillo, 2025, Sex & Sexualities)
- Digital disinformation and democratic destabilization: mechanisms, risks and prospects for counteracting it(Stefano Lovi, 2025, Zoon Politikon)
- Minimal Effects, Maximum Panic: Social Media and Democracy in Latin America(Eugenia Mitchelstein, Mora Matassi, P. Boczkowski, 2020, Social Media + Society)
- How Disinformation on WhatsApp Went From Campaign Weapon to Governmental Propaganda in Brazil(J. V. S. Ozawa, S. Woolley, Joseph D. Straubhaar, M. J. Riedl, Katie Joseff, Jacob Gursky, 2023, Social Media + Society)
- Information Disorder in the Chilean Constitutional Process: When Disinformation Originates with the Political Authorities Themselves(John Charney, Laura Mayer, Pedro Santander, 2025, European Journal on Criminal Policy and Research)
- A State-of-the-art of Scientific Research on Disinformation(Gazmend Huskaj, Stefan Axelsson, 2023, European Conference on Cyber Warfare and Security)
- Systematic Review of Fake News, Propaganda, and Disinformation: Examining Authors, Content, and Social Impact Through Machine Learning(D. Plikynas, Ieva Rizgelienė, Gražina Korvel, 2025, IEEE Access)
- Misinformation, disinformation, and fake news: lessons from an interdisciplinary, systematic literature review(Elena Broda, Jesper Strömbäck, 2024, Annals of the International Communication Association)
The final grouping builds a complete body of knowledge running from the technical level (NLP detection, generative-AI analysis) through the behavioral level (state actors, coordinated inauthentic behavior) to the socio-psychological and governance level (cognitive bias, legal regulation, societal resilience). It gives particular weight to the distinctiveness of visual political communication and of domain-specific evidence (health, climate). The classification scheme integrates multilingual and multimodal research trends and covers the full disinformation-governance loop, from foundational theory to macro-level policy.
Total: 356 related publications.
Can AI bolster state-backed propaganda campaigns, in practice? Growing use of AI and large language models has drawn attention to the potential for accompanying tools to be used by malevolent actors. Though recent laboratory and experimental evidence has substantiated these concerns in principle, the usefulness of AI tools in the production of propaganda campaigns has remained difficult to ascertain. Drawing on the adoption of generative-AI techniques by a state-affiliated propaganda site with ties to Russia, we test whether AI adoption enabled the website to amplify and enhance its production of disinformation. First, we find that the use of generative-AI tools facilitated the outlet’s generation of larger quantities of disinformation. Second, we find that use of generative-AI coincided with shifts in the volume and breadth of published content. Finally, drawing on a survey experiment comparing perceptions of articles produced prior to and following the adoption of AI tools, we show that the AI-assisted articles maintained their persuasiveness in the postadoption period. Our results illustrate how generative-AI tools have already begun to alter the size and scope of state-backed propaganda campaigns.
This article addresses the critical issue of societal resilience in the face of disinformation, particularly in highly digitized democratic societies. Recognizing the escalating impact of disinformation as a significant threat to societal security, the study conducts a scoping review of the literature from 2018 to 2022 to explore the current understanding and approaches to countering this challenge. The core contribution of the article is the development of a preliminary typological framework that addresses key elements and issue areas relevant to societal resilience to disinformation. This framework spans multiple dimensions, including legal/regulatory, educational, political/governance, psychological/social-psychological, and technological domains. By synthesizing existing knowledge and filling identified gaps, the framework aims to serve as a foundational tool for empirical analyses and the enhancement of resilience strategies. One of the innovative aspects of the proposed framework is its potential to be transformed into a computable and customizable tool. This tool would measure the maturity level of various countermeasures against disinformation, thereby providing a practical methodology for planning and implementing effective democratic responses to disinformation. The article emphasizes the importance of this framework as both a conceptual and practical guide. It offers valuable insights for a wide range of civil society actors, including policymakers, educators, and technologists, in their efforts to protect information integrity and bolster societal resilience. By laying the groundwork for a more comprehensive understanding of societal resilience to disinformation, the article contributes to the broader discourse on information protection and provides actionable guidance for addressing the evolving challenges posed by disinformation in democratic societies.
Artificial Intelligence (AI) now enables the mass creation of what have become known as “deepfakes”: synthetic videos that closely resemble real videos. Integrating theories about the power of visual communication and the role played by uncertainty in undermining trust in public discourse, we explain the likely contribution of deepfakes to online disinformation. Administering novel experimental treatments to a large representative sample of the United Kingdom population allowed us to compare people’s evaluations of deepfakes. We find that people are more likely to feel uncertain than to be misled by deepfakes, but this resulting uncertainty, in turn, reduces trust in news on social media. We conclude that deepfakes may contribute toward generalized indeterminacy and cynicism, further intensifying recent challenges to online civic culture in democratic societies.
TikTok has emerged as a powerful platform for the dissemination of mis- and disinformation about the war in Ukraine. During the initial three months after the Russian invasion in February 2022, videos under the hashtag #Ukraine garnered 36.9 billion views, with individual videos scaling up to 88 million views. Beyond the traditional methods of spreading misleading information through images and text, the medium of sound has emerged as a novel, platform-specific audiovisual technique. Our analysis distinguishes various war-related sounds utilized by both Ukraine and Russia and classifies them into a mis- and disinformation typology. We use computational propaganda features—automation, scalability, and anonymity—to explore how TikTok’s auditory practices are exploited to exacerbate information disorders in the context of ongoing war events. These practices include reusing sounds for coordinated campaigns, creating audio meme templates for rapid amplification and distribution, and deleting the original sounds to conceal the orchestrators’ identities. We conclude that TikTok’s recommendation system (the “for you” page) acts as a sound space where exposure is strategically navigated through users’ intervention, enabling semi-automated “soft” propaganda to thrive by leveraging its audio features.
Climate change disinformation has emerged as a substantial issue in the internet age, affecting public perceptions, policy response, and climate actions. This study, grounded on the theoretical frameworks of social epistemology, Habermas’s theory of communicative action, post-truth, and Foucault’s theory of power-knowledge, examines the effect of digital infrastructures, ideological forces, and epistemic power dynamics on climate change disinformation. The meta-synthesis approach in the study reveals the mechanics of climate change disinformation on social media, the erosion of epistemic welfare influenced by post-truth dynamics, and the ideological and algorithmic amplification of disinformation, shedding light on climate change misinformation as well. The findings show that climate change disinformation represents not only a collection of false claims but also a broader epistemic issue sustained by digital environments, power structures, and fossil corporations. Right-wing populist movements, corporate interests, and algorithmic recommendation systems substantially enhance climate skepticism, intensifying political differences and public distrust in scientific authority. The study highlights the necessity of addressing climate change disinformation through improved scientific communication, algorithmic openness, and digital literacy initiatives. Resolving this conundrum requires systemic activities that go beyond fact-checking, emphasizing epistemic justice and legal reforms.
The project envisages the creation of a complex system that integrates advanced technologies of machine learning and natural language processing for media content analysis. The main goal is to provide means for quick and accurate verification of information, reduce the impact of disinformation campaigns and increase media literacy of the population. Research tasks included the development of algorithms for the analysis of textual information, the creation of a database of fakes, and the development of an interface for convenient access to analytical tools. The object of the study was the process of spreading information in the media space, and the subject was methods and means for identifying disinformation. The scientific novelty of the project consists of the development of algorithms adapted to the peculiarities of the Ukrainian language, which allows for more effective work with local content and ensures higher accuracy in identifying fake news. Also, the significance of the project is enhanced by its practical value, as the developed tools can be used by government structures, media organizations, educational institutions and the public to increase the level of information security. Thus, the development of this project is of great importance for increasing Ukraine's resilience to information threats and forming an open, transparent information society.
The emergence of social media companies, and the spread of disinformation as a result of their “surveillance capitalist” business model, has opened wide political and regulatory debates across the globe. The EU has often positioned itself as a normative leader and standard-setter, and has increasingly attempted to assert its sovereignty in relation to social media platforms. In the first part of this article, we argue that the EU has achieved neither sovereignty nor normative leadership: Existing regulations on disinformation in fact have missed the mark since they fail to challenge social media companies’ business models and address the underlying causes of disinformation. This has been the result of the EU increasingly “outsourcing” regulation of disinformation to corporate platforms. If disinformation is not simply a “bug” in the system, but a feature of profit-driven platforms, public–private cooperation emerges as part of the problem rather than a solution. In the second part, we outline a set of priorities to imagine alternatives to current social media monopolies and discuss what could be the EU’s role in fostering them. We argue that alternatives ought to be built decolonially and across the stack, and that the democratisation of technology cannot operate in isolation from a wider socialist political transformation of the EU and beyond.
Advances in the use of AI have led to the emergence of a greater variety of forms disinformation can take and channels for its proliferation. In this context, the future of legal mechanisms to address AI-powered disinformation remains to be determined. Additional complexity for legislators working in the field arises from the need to harmonize national legal frameworks of democratic states with the need for regulation of potentially dangerous digital content. In this paper, we review and analyze some of the recent discussions concerning the use of legal regulation in addressing AI-powered disinformation and present the national case of Ukraine as an example of developments in the field. We develop the discussion through an analysis of the existing counter-disinformation ecosystems, the EU and US legislation, and the emerging regulations of AI systems. We show how the Ukrainian Law on Counter Disinformation, developed as an emergency response to internationally recognized Russian military aggression and hybrid warfare tactics, underscores the crucial need to align even emergency measures with international law and principles of free speech. Exemplifying the Ukrainian case, we argue that the effective actions necessary for countering AI-powered disinformation are prevention, detection, and implementation of a set of response actions. The latter are identified and listed in this review. The paper argues that there is still a need for scaling legal mechanisms that might enhance top-level challenges in countering AI-powered disinformation.
Electoral campaigns are one of the key moments of democracy. In recent times, the circulation of disinformation has increased during these periods. This phenomenon has serious consequences for democratic health since it can alter the behaviour and decisions of voters. This research aims to analyse the features of this phenomenon during the 2024 European Parliament elections in a comparative way. The applied methodology is based on quantitative content analysis. The sample (N = 278) comprises false information verified by 52 European fact-checking agencies about the campaign for the European elections in 20 EU countries. The analysis model includes variables such as time-period, country, propagator platform, topic, and the type of disinformation. The results show that the life cycle of electoral disinformation goes beyond the closing of the polls assuming a permanent nature. In addition, national environments condition the profiles of this question, which is more intense in Southern and Eastern Europe. Furthermore, although multiple channels are involved, digital platforms with weak ties are predominant in disseminating hoaxes. Finally, migration and electoral integrity are the predominant topics. This favours the circulation of an issue central to the far-right agenda and aims to discredit elections and their mechanisms to undermine democracy. These findings establish the profiles of this problem and generate knowledge to design public policies that combat electoral false content more effectively.
As social media is a key conduit for the distribution of disinformation, much of the literature on disinformation in elections has been focused on the internet and global social media platforms. Literature on societal and media trust has also grown in recent years. Yet disinformation is not limited to global platforms or the internet; traditional media outlets in many European countries act as vehicles of disinformation, often under the direction of the government. Moreover, the connection between trust and resilience to disinformation has been less discussed. This article is aimed at tackling the question of what makes a country vulnerable to or resilient against online disinformation. It argues that a society’s information resilience can be viewed as a combination of structural characteristics, features of its knowledge-distribution institutions including its media system, and the activities and capabilities of its citizens. The article makes this argument by describing these dimensions in four European case countries, based on comparable statistics and document analyses. The results indicate that European-wide strategies do not uniformly strengthen national resilience against disinformation and that anti-disinformation strategies need to be anchored in targeted assessments of the state of information resilience at the national level to be more effective. Such assessments are central, particularly to understanding citizens’ information needs in key democratic events such as elections.
The emergence of generative artificial intelligence (GenAI) has exacerbated the challenges of misinformation, disinformation, and mal-information (MDM) within digital ecosystems. These multi-faceted challenges demand a re-evaluation of the digital information lifecycle and a deep understanding of its social impact. An interdisciplinary strategy integrating insights from technology, social sciences, and policy analysis is crucial to address these issues effectively. This article introduces a three-tiered framework to scrutinize the lifecycle of GenAI-driven content from creation to consumption, emphasizing the consumer perspective. We examine the dynamics of consumer behavior that drive interactions with MDM, pinpoint vulnerabilities in the information dissemination process, and advocate for adaptive, evidence-based policies. Our interdisciplinary methodology aims to bolster information integrity and fortify public trust, equipping digital societies to manage the complexities of GenAI and proactively address the evolving challenges of digital misinformation. We conclude by discussing how GenAI can be leveraged to combat MDM, thereby creating a reflective cycle of technological advancement and mitigation.
No abstract available
This article unpacks the politics of disinformation attribution as deterrence. Research and policy on disinformation deterrence commonly draw on frameworks inspired by cyber deterrence to address the ‘attribution problem’, thereby overlooking the political aspects underpinning attribution strategies in liberal democracies. Addressing this gap and bringing together disinformation studies and the fourth wave of deterrence theory, the article examines how acts of attribution serve liberal states' attempts at deterring foreign influence operations. In liberal states, disinformation as an external threat intersects with essential processes of public deliberation, and acts of attribution are charged with political risk. Introducing the concept of the ‘uncertainty loop’, the article demonstrates how the flow of uncertainty charges the decision-making situation in disinformation attribution. Drawing on three contemporary empirical cases (interference in the US presidential election of 2016, the Bundestag election in Germany in 2021, and the EU response to the COVID-19 ‘infodemic’ which erupted in 2020), the article illustrates how diverse strategies of attribution, non-attribution and diffused attribution have been navigated by governments. By laying bare the politics of disinformation attribution and advancing a conceptual apparatus for understanding its variations, the article expands current knowledge on disinformation deterrence and speaks to a broader International Relations literature on how deterrence strategies are mediated through political contexts.
Purpose: Using a multidisciplinary approach, this study aims to trace the path of disinformation campaigns from their detection through linguistic cues of credibility, to their furtherance through dissemination mechanisms, and, lastly, to the assessment of their impact on the socio-political context. Design/methodology/approach: This study provides an in-depth overview of four fundamental aspects of disinformation: the linguistic features that distinguish content designed to deceive and manipulate public opinion; the media mechanisms that facilitate its dissemination by exploiting the cognitive processes of its audience; the threats posed by the increasing use of generative artificial intelligence to spread disinformation; and the broader consequences these disinformation dynamics have on public opinion and, consequently, on political decision-making processes. Findings: The paper provides an interdisciplinary and holistic examination of the phenomenon, referring to its pluralized elements to highlight the importance of platform responsibility, media literacy campaigns among citizens, and interactive cooperation between the private and public sectors as measures to enhance resilience against the threat of disinformation. Originality/value: The study highlights the need to increase platform accountability, promote media literacy among individuals, and develop cooperation between the public and private sectors. Strengthening resilience to disinformation and ensuring the EU’s adaptability in the face of changing digital threats are the goals of this integrated strategy. Ultimately, the paper advocates a fair and open strategy that protects freedom of expression and strengthens democratic institutions at a time when digital disinformation is on the rise.
This study examines journalists’ perceptions of the impact of artificial intelligence (AI) on disinformation, a growing concern in journalism due to the rapid expansion of generative AI and its influence on news production and media organizations. Using a quantitative approach, a structured survey was administered to 504 journalists in the Basque Country, identified through official media directories and with the support of the Basque Association of Journalists. This survey, conducted online and via telephone between May and June 2024, included questions on sociodemographic and professional variables, as well as attitudes toward AI’s impact on journalism. The results indicate that a large majority of journalists (89.88%) believe AI will considerably or significantly increase the risks of disinformation, and this perception is consistent across genders and media types, but more pronounced among those with greater professional experience. Statistical analyses reveal a significant association between years of experience and perceived risk, and between AI use and risk perception. The main risks identified are the difficulty in detecting false content and deepfakes, and the risk of obtaining inaccurate or erroneous data. Co-occurrence analysis shows that these risks are often perceived as interconnected. These findings highlight the complex and multifaceted concerns of journalists regarding AI’s role in the information ecosystem.
No abstract available
In an era where disinformation spreads at unprecedented speeds, strategic communication emerges as a critical tool in countering false narratives and building resilience within organizations. This study delves into the effectiveness of communication strategies in combating disinformation and enhancing organizational resilience. Using case studies and content analysis of five multinational corporations and five political organizations, the study is guided by Situational Crisis Communication Theory (SCCT) and Resilience Theory (RT). The findings uncovered varying effectiveness in communication strategies, including immediate clarifications, social media engagement, and collaboration with fact-checkers. Multinational corporations demonstrated success with proactive measures such as real-time monitoring and transparency, reducing the spread of disinformation by up to 40%. In contrast, political organizations, while facing unique challenges due to polarized environments, achieved notable gains through grassroots engagement and rapid responses. The findings also revealed the critical role of trust-building and tailored communication strategies in fostering stakeholder confidence. The study proposes a six-pronged framework integrating proactive monitoring, transparent communication, and stakeholder collaboration to mitigate disinformation and enhance resilience. These findings contribute theoretical insights and practical strategies to the fields of crisis communication and organizational studies.
This study explored the significant impact of disinformation spread through social media, focusing on the documentary "The Social Dilemma" by Jeff Orlowski. The film provided a critical lens to examine how social media algorithms amplified false narratives. Using a qualitative content analysis approach, the research highlighted key themes related to disinformation, such as political polarization in the U.S., the flat earth theory, the #Pizzagate conspiracy, Covid-19 misinformation, and the incitement of hate speech in Myanmar. The findings revealed that algorithms designed to maximize user engagement often prioritized sensational and misleading content, exacerbating the spread of false information. This fueled social tensions and undermined public health and democratic processes. The study emphasized the urgent need for increased public awareness of disinformation's effects and called on social media platforms to take responsibility for reducing its spread.
No abstract available
This essay explores the changing role of the public in persuasion. I focus on participatory propaganda—which I define as the involvement of publics in the (re)production of persuasive, manipulative, or false content through social networks. Specifically, I draw attention to two underexplored areas: participatory propaganda in international rather than domestic politics, and motivations for publics to knowingly, rather than unwittingly, share propagandistic content. The discussion is illustrated with brief insights from a large-scale study of online narrative battles between the Armenian and Azerbaijani diasporas during the 2020 Karabakh war.
Concerns about the risk of disinformation in the 2024 European Parliament elections were widespread in European capitals. Simultaneously, the far-right – often linked to the spread of disinformation – gained significant vote shares in these elections. This article therefore asks: to what extent and how does far-right election performance impact the disinformation concerns of European voters in the aftermath of the elections? We posit that the election context moderates the extent to which motivated reasoning-related factors (i.e., election winning and not holding populist attitudes) explain variation in voters’ disinformation concerns, and we analyse survey data from the 2024 European Election Study conducted in all 27 EU countries to test this. Our results suggest that the electoral context may not only directly structure the magnitude of disinformation concerns, but that the heightened presence of far-right parties in the electoral arena adds insecurity, in such a way that winners and non-populist voters are no longer less concerned about disinformation. Our findings thereby have important implications for disinformation research and European democracy.
No abstract available
The aim of this article is to answer the following question: are there particular areas that should be taken into consideration when discussing the problem of cognitive warfare? The author presents the Countering Disinformation Concept, which indicates particular areas that may serve as a potential direction for building and developing social resilience in times of cognitive warfare. The author analyzed research demonstrating low social awareness of disinformation and pointing to possible sources of false content, and reviewed the professional literature to examine the current state of practical solutions in the field. The conclusions from this analysis formed the basis for proposing the Countering Disinformation Concept. The author also uses a case study of Russian hostile informational influence as evidence of the destructive actions of global actors and the possible harmful influence of information. The research led to the conclusion that there is a lack of holistic, practical solutions for building social resilience against disinformation. The proposed Countering Disinformation Concept is a comprehensive approach that should be considered for building social resilience against hostile information operations in times of cognitive warfare. Societies are not aware of the hostile informational influence that some actors strive to exert. Awareness of disinformation processes is low, as is the level of practical solutions implemented in the information sphere. There is a serious need to build and develop social resilience against disinformation, especially in times of cognitive warfare waged by hostile global players.
This study investigates the role of deepfakes and open-source intelligence (OSINT) in enabling disinformation campaigns and their societal consequences. Using the Deepfake Detection Challenge (DFDC) dataset for technical evaluation, social media datasets for OSINT network and sentiment analysis, and public opinion data from the Global Disinformation Index, the study applied machine learning classification, network analysis, sentiment analysis, and interrupted time series (ITS) analysis. The technical assessment achieved a detection accuracy of 0.73, precision of 0.75, and recall of 0.70, identifying areas for improvement in detecting synthetic media. OSINT analysis revealed pivotal amplifiers of disinformation, with User1 having a degree centrality of 0.263 and a betweenness centrality of 0.135. Sentiment analysis showed an average sentiment score of -0.085, while ITS analysis documented a significant 9.76-point decline in public trust following disinformation events. Recommendations include developing adaptive AI detection systems, implementing global regulatory measures, fostering public media literacy, and encouraging ethical OSINT practices. Keywords: Deepfakes, Artificial Intelligence, Disinformation Campaigns, Open-Source Intelligence, Public Trust.
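The network measures cited above (degree and betweenness centrality) are standard graph statistics. A minimal sketch of computing them with networkx, using a toy edge list rather than the study's OSINT data:

```python
# A minimal sketch of the centrality measures reported above; the edge list
# is a toy assumption, not the study's OSINT network.
import networkx as nx

G = nx.Graph([("User1", "User2"), ("User1", "User3"),
              ("User1", "User4"), ("User2", "User5")])

deg = nx.degree_centrality(G)        # fraction of possible neighbors a node has
btw = nx.betweenness_centrality(G)   # fraction of shortest paths passing through a node
print(deg["User1"], btw["User1"])    # amplifier accounts score high on both
```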
The methodological basis of the study is a set of techniques, principles, and general theoretical, special, and interdisciplinary methods of scientific research. To achieve the set goal, the dialectical method of scientific knowledge was used to study disinformation under martial law and to determine the role of artificial intelligence (AI) in its detection and neutralization. A systemic approach made it possible to determine how disinformation spreads through social networks, traditional media, and automated bot farms to manipulate public opinion. The operations research method was used to determine the advantages and disadvantages of AI tools aimed at detecting disinformation, and the methods of analogy and comparison were used to survey modern methods of combating fake news, including machine learning algorithms, natural language processing, and image analysis. It was established that the main problem in increasing the effectiveness of combating disinformation is implementing European experience in the use of AI. Systemic and critical analysis made it possible to explore the international experience of using AI tools in information security and their effectiveness in detecting deepfakes and other forms of false content. A comprehensive strategy for countering disinformation in Ukraine is proposed. Unlike the existing strategy, it takes into account the use of artificial intelligence technologies to identify fake content in social networks and news channels, the formation of a special body to analyze digital content, and the development of a digital society. The comprehensive strategy also includes the expanded use of AI to monitor the information space, combining automated analysis with human control, as well as state initiatives to regulate fake content and raise the media literacy of the population. The research results will be useful for scientists, information security experts, journalists, and state bodies involved in combating disinformation. The proposed approaches will contribute to strengthening Ukraine's information protection and reducing the impact of fake news on society.
Online disinformation is considered a major challenge for modern democracies. It is widely understood as misleading content produced to generate profits, pursue political goals, or maliciously deceive. Our starting point is the assumption that some countries are more resilient to online disinformation than others. To understand what conditions influence this resilience, we choose a comparative cross-national approach. In the first step, we develop a theoretical framework that presents these country conditions as theoretical dimensions. In the second step, we translate the dimensions into quantifiable indicators that allow us to measure their significance on a comparative cross-country basis. In the third part of the study, we empirically examine eighteen Western democracies. A cluster analysis yields three country groups: one group with high resilience to online disinformation (including the Northern European systems, for instance) and two country groups with low resilience (including the polarized Southern European countries and the United States). In the final part, we discuss the heuristic value of the framework for comparative political communication research in the age of information pollution.
No abstract available
The concept of hoax or fake news refers to the intentional spread of false information on social media that aims to confuse and mislead readers in order to achieve an economic or political agenda. The increasingly diverse and numerous actors involved in writing and disseminating news have led to the creation of news articles whose credibility needs to be assessed, and hoaxes can harm the social and political life of Indonesian society. Central Connecticut State University released a study entitled The World's Most Literate Nations in 2016, in which Indonesia ranked 60th out of 61 countries, indicating that Indonesian media literacy still needs to improve in critically evaluating information and distinguishing fake news from valid news. Against this background, this research develops synonym-based data augmentation for hoax detection using a Convolutional Neural Network (CNN) and Easy Data Augmentation (EDA). The research resulted in an accuracy of 8,.81, indicating that the approach can be considered accurate in detecting hoax news.
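For illustration, a minimal sketch of the synonym-replacement step at the core of EDA; it uses English WordNet via NLTK as a stand-in (the study would presumably use an Indonesian lexical resource), and the number of swaps is an assumption:

```python
# A minimal sketch of EDA-style synonym replacement; English WordNet is a
# stand-in here, and the swap count is an assumption.
import random
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def synonym_replace(sentence: str, n_swaps: int = 2) -> str:
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w)]
    random.shuffle(candidates)
    for i in candidates[:n_swaps]:
        # Collect synonyms of the word across all its synsets.
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(words[i]) for l in s.lemmas()}
        lemmas.discard(words[i])
        if lemmas:
            words[i] = random.choice(sorted(lemmas))
    return " ".join(words)

print(synonym_replace("the government denied the false report"))
```

Each augmented variant keeps the original label, so a small labeled hoax corpus can be expanded several-fold before CNN training.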
At present, there is a wide divergence in attitudes toward free speech in countries strongly influenced by Confucianism. Japan, Korea, and Taiwan have fairly robust rights of free expression. Mainland China does not, strongly restricting speech that the government judges threatens State interests. I argue that although traditional Confucian scholars supported many restrictions on expression, Confucian philosophers actually have good reason to want to protect expression about values. Subsequently, I consider how to address the problem of disinformation while preserving this Confucian right to free expression. I focus on the case of Taiwan, as the Confucian state facing the most serious disinformation campaigns from China. The goal of government and civil society actors has been to focus on correcting disinformation while preserving free access to information, though laws do provide for civil and even criminal penalties for intentional spread of false information. As many democratic societies are facing the problem of concerted disinformation campaigns that aim to sow confusion and increase discord among the populace, Taiwan’s successes here are worth studying. Yet the Confucian cultural background means people accept more government involvement in defining what is true and false than may be the case elsewhere, and the right to free speech is less absolute.
?Fake News" is described as misinformation, fabricated to mislead its readers without providing objective facts. The increasing use and relevance of social media in nowadays society elevates the power of the intentional spreading of false information. The terminology of ?Fake News" first gained attention of the public in the US election in 2016. Stories about election manipulation were shared among social media and were widely discussed in the web. This phenomenon will not decline with the increase importance of social media platforms nowadays. Maliciously spreading false information does not only find application in the political sphere but can also cause economic damage, which the case of the United Airlines in 2008 shows. The impact of what people believe on social media affected the share price after the spreading false information about United Airlines' bankruptcy. Also, misinformation about pandemic outbreaks can harm society as a whole by inducing panic. These instances show that a better understanding of dynamics of social media needs to be investigated, how users engage with information on social media. In contrast to mass communication models in the past, in which a gatekeeper (e.g. a news outlet) had the sole power of what the reader will see on the newspaper, nowadays users themselves emerged to gatekeepers on their own with the possibility to publish information. This opens up paths for new dynamics which are tied to the social media platforms with their sharing and liking mechanisms. The research is based on Westley and MacLean's Model of Communication with the added possibilities for the entities to interact with each other every time. Based on this model, theories of mass communication such as the ?spiral of silence" or effects like ?the sleeper effect" shall be tested and evaluated how these phenomena emerge now in the space of social media. There are several theories which explain behavior based on behavior of peers or groups, such as Herding Theory or Theory of Reasoned Action. Further elaboration is needed on how the spiral of silence is unique and how it is different from other theories. Early findings stressed the importance of the source. However, it is not yet clear how platform specific mechanisms such as liking or sharing can influence the perceived public opinion of users.
In recent years, there have been concerted efforts in the United States to spread false information against sexual and gender minorities and, consequently, social and political gains have retracted. Here, we introduce major concepts and findings in false information scholarship to consider: (1) important but largely understudied intersections between the scholarship on social movements and false information; (2) examples of how false information is deployed against sexual and gender minorities, who have recently been the targets of widespread false information campaigns; and (3) how such campaigns can potentially be mitigated. Throughout the article, we highlight how sociological insights can offer new tools for analyzing and dispelling false information. We conclude with future directions at the cross-section of scholarship on false information, social movements, and sexual and gender minorities.
No abstract available
Artificial intelligence-generated content (AIGC) has changed the traditional information production mechanism and has a wide range of application scenarios. At the same time, the security risks it exposes, such as data leakage, false content generation, and improper utilization, have attracted widespread attention from various countries. The development, application, and governance of AIGC is no longer a challenge faced by one country but by the entire international community. In order to respond effectively to the challenges AIGC poses to the false-information governance system, this article uses multiple methods, including literature analysis and in-depth research, to elaborate on the potential risks of AIGC and conducts an in-depth analysis of the global challenges of false-information risk governance. Finally, it proposes governance paths and countermeasures from various perspectives, such as supervision and ecology, providing an intelligence reference for the healthy development of the AIGC industry.
The rapid growth of online media and communities has increased and accelerated the spread of unverified and false information (“fake news”), with significant political, economic, and social impacts, leading the European Commission to promulgate a “Code of Practice on Disinformation.” Identifying and countering such false information is time- and labor-intensive, and could benefit from the development of tools that automatically identify and flag such information. This study explores the use of deep learning techniques to detect fake news, using decreases in the incidence of emotional vocabulary and subjectivity to enhance detection accuracy, and examines potential correlations between the emotional sentiment of news content and the movement of stock price indexes. Empirical results show that deep learning techniques can be used to effectively detect fake news, with multiple trainings effectively improving detection accuracy and reducing the loss rate. In addition, increased objectivity and the use of fewer words with high emotional sentiment increases news credibility. Finally, news sentiment was found to be correlated with the movement of three of five stock indexes examined.
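Since the study's detection signals include emotional sentiment and subjectivity, here is a minimal sketch of extracting those two features from a headline; TextBlob is a common stand-in choice, not necessarily the authors' tool:

```python
# A minimal sketch of the sentiment/subjectivity features used as detection
# signals above; TextBlob is an assumed stand-in, not the authors' pipeline.
from textblob import TextBlob

headline = "SHOCKING: officials HIDE the terrifying truth!"
blob = TextBlob(headline)
print(blob.sentiment.polarity)      # -1 (negative) .. +1 (positive)
print(blob.sentiment.subjectivity)  #  0 (objective) .. 1 (subjective)
```

Scores like these can be appended to a document's feature vector, matching the paper's finding that high subjectivity and strongly emotional vocabulary correlate with lower credibility.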
As of 2024, precise figures regarding the proportion of false information on Indian social media remain elusive. However, past data sheds light on the prevalence of misinformation. A January 2022 poll found that 22% of Indian social media users admitted to being deceived by fake news online. Another survey revealed that 45% encountered entirely fabricated stories in the Indian media, often with political or economic motives. Additionally, an Oxford University study noted that 54% of Indians relied on social media for truthful information. These figures, though estimations, underscore the significant presence of false information. Moreover, various definitions of “fake information” exist, encompassing dormant accounts to bot-driven spamming, highlighting the complexity of the issue. The dissemination of inaccurate information on social media platforms poses a substantial threat to online discourse integrity. This paper proposes a framework for detecting fake information utilizing common Python libraries like NumPy, Pandas, Matplotlib, NLTK, Joblib, and LDA alongside machine learning techniques. Preprocessing of textual data employs NLTK for natural language processing, followed by topic modeling with LDA to uncover latent themes. Machine learning algorithms integrated with NumPy and Pandas extract features and train models for post classification. Visualization tools like Matplotlib and Seaborn aid in data exploration and result assessment. This interdisciplinary approach demonstrates promising capabilities in identifying false information on social media platforms, contributing to ongoing efforts to combat online disinformation.
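A minimal sketch of the pipeline this abstract names (NLTK preprocessing, LDA topic features, then a classifier); the toy posts and labels are assumptions:

```python
# A minimal sketch of the NLTK + LDA + classifier pipeline described above;
# the toy posts and labels are assumptions.
from nltk.tokenize import word_tokenize  # requires: nltk.download("punkt")
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

posts = ["election officials confirm audited vote totals",
         "secret cure for all disease hidden by doctors",
         "central bank publishes quarterly inflation report",
         "miracle investment doubles your money overnight"]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = fake (hypothetical)

tokenized = [" ".join(word_tokenize(p.lower())) for p in posts]
X = CountVectorizer(stop_words="english").fit_transform(tokenized)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)
clf = LogisticRegression().fit(topics, labels)
print(clf.predict(topics))  # post classification from latent topic features
```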
No abstract available
Disinformation, the deliberate spread of false or misleading information, presents a pressing challenge in today's global landscape. This study examines its profound impact on the international community and assesses the effectiveness of current international legal frameworks in addressing this threat. Disinformation campaigns, facilitated by digital technology, erode trust in institutions, manipulate public opinion, and exacerbate social divisions. The consequences of disinformation transcend borders, making it a formidable global issue. This research critically evaluates existing international efforts to combat disinformation, highlighting their limitations in adapting to this rapidly evolving problem. While international law acknowledges the gravity of the issue, it struggles to provide comprehensive solutions. To enhance international law's role in countering disinformation, this study emphasizes the necessity of transnational cooperation. Collaborative approaches involving nations, international organizations, and technology companies are crucial to mitigating disinformation's global impact. This study underscores the urgency of addressing disinformation's threats to global security, democracy, and social cohesion. International law must adapt to effectively combat this challenge, fostering cooperation and establishing norms to preserve peace and trust in an interconnected world.
The article is devoted to the study of a number of concepts that are often equated both in public life and in legal science and legislative practice, namely “misinformation”, “unreliable information”, and “false rumours”. The article argues that these terms cannot be used interchangeably. It is determined that unverified information may turn out to be either true or unreliable (false). Examples are given of how unverified information about the actions of the authorities undermines confidence in state bodies. Attention is drawn to the fact that the dissemination of unverified information by mass media is not always intended to cause damage or negative consequences, although these may occur. It is emphasized that “misinformation” and “unreliable information” should be used as distinct terms, that “unreliable information” and “false rumours” can be used interchangeably, and that “unverified information”, whose authenticity can still be proven, is a separate category. Attention is also drawn to the fact that the lack of legislative consolidation of the categories “information war” and “hybrid war” at both the national and international level leads to a number of negative consequences, in particular difficulties in bringing an aggressor state to justice.
No abstract available
This study examines journalistic coverage of false information through a qualitative textual analysis of news about four popular false information cases during the 2016 and 2020 US presidential elections: the false claims that (1) the Pope endorsed Donald Trump; (2) Hillary Clinton and her campaign manager ran a pedophilia ring in a pizza shop; (3) the 2020 election was fraudulent and stolen; and (4) liberal politicians and celebrities were Satan worshippers and pedophiles. The analysis identified three dimensions of correction of false information in news coverage. The first dimension examined emphasis on the correct rather than the false information; this nuanced past research by considering different practices, such as elaborating on correct information and avoiding the inclusion of incorrect information. The second dimension referred to the tone used to correct false information; the adoption of an assertive tone demonstrated journalists’ use of their voice to authoritatively correct false information. The third dimension entailed the inclusion of sources, which were used to frame correct information consistently with a diversity of audiences’ worldviews. These findings offer a framework to assess journalistic reporting on false information and illuminate strategies to stem its spread.
No abstract available
False information is always produced after the outbreak of major emergencies. Taking this into consideration, this paper discusses the behavior of multiple parties in relation to false information dissemination after major emergencies. First, a game model is constructed, using relevant knowledge of evolutionary game theory, between three parties: regulatory institutions, opinion leaders, and ordinary Internet users. Second, the model equations are solved, and the evolutionary stability strategies of each game party under different circumstances are analyzed. Third, a numerical simulation is applied to the evolutionary trends under different strategy combinations with varying parameters. The results show that the probability of each game party making ideal decisions is positively correlated with the degree of punishment imposed by regulatory institutions on opinion leaders who release false information, the reward provided by regulatory institutions on opinion leaders who release positive information, the degree of participation and satisfaction gained by Internet users in adopting positive information, the richness of authentic content released by opinion leaders, and the psychological identification of Internet users with opinion leaders. Meanwhile, the probability of each game party making ideal decisions is negatively correlated with investigation and evidence collection costs borne by opinion leaders who release positive information, the additional income for opinion leaders who have false information adopted by Internet users, the costs of Internet users’ time and energy when they adopt information released by opinion leaders, and the costs of independently judging the accuracy of information by Internet users.
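For intuition, a minimal sketch of three coupled replicator-dynamics updates of the kind such a tripartite evolutionary game is typically solved with; every payoff term below is a toy assumption, not the paper's model:

```python
# A minimal sketch of coupled replicator dynamics for a three-party game;
# all payoff terms are toy assumptions, not the paper's parameters.
import numpy as np

x, y, z = 0.3, 0.4, 0.5   # P(regulator punishes), P(leader posts truth), P(user adopts positive info)
P, R, C = 4.0, 2.0, 1.0   # punishment, reward, verification cost (assumed)
dt = 0.001

for _ in range(20000):
    dx = x * (1 - x) * (P * (1 - y) - C)    # punishing pays off when leaders lie (toy)
    dy = y * (1 - y) * (R * x + z - 1.0)    # truth pays off via rewards and adoption (toy)
    dz = z * (1 - z) * (y - C * (1 - y))    # adopting pays off when content is truthful (toy)
    x, y, z = np.clip([x + dt * dx, y + dt * dy, z + dt * dz], 0.0, 1.0)

print(round(float(x), 3), round(float(y), 3), round(float(z), 3))
```

Varying the punishment P, reward R, and cost C and re-running the loop reproduces the paper's style of analysis: each parameter shifts which strategy combination the three populations converge to.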
To further study the issue of false information classification on social platforms after major emergencies, this study regards opinion leaders and Internet users as a false-information classification system and constructs three differential game models of decentralized, centralized, and subsidized decision-making based on optimal control and differential game theory. Comparison analyses and numerical simulations of optimal equilibrium strategies and the optimal benefit between opinion leaders and Internet users, the optimal trajectory and the steady-state value of the total volume of real information, and the optimal benefit of the false information clarification system are carried out. It is found that under centralized decision-making, equilibrium strategy and total benefit of opinion leaders and Internet users, system total benefit, and total volume of real information can achieve Pareto optimality. Although subsidized decision-making fails to achieve Pareto optimality, with opinion leaders providing cost subsidies for Internet users, it is possible to reach relative Pareto improvement compared with decentralized decision-making.
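As a point of reference, differential game models of this type typically have each player maximize a discounted objective subject to an information-stock dynamic. A generic linear-quadratic sketch (all symbols here are illustrative assumptions, not the paper's notation):

```latex
\max_{E_i(t)} \; J_i = \int_0^{\infty} e^{-\rho t}
  \left[ \pi_i \, V(t) - \frac{c_i}{2} E_i^{2}(t) \right] dt,
\qquad
\dot{V}(t) = \alpha E_L(t) + \beta E_U(t) - \delta V(t)
```

Here \(V(t)\) stands for the total volume of real information, \(E_L\) and \(E_U\) for the clarification efforts of opinion leaders and Internet users, and \(\rho, \pi_i, c_i, \alpha, \beta, \delta\) for discount, benefit, cost, effectiveness, and decay parameters. The decentralized, centralized, and subsidized regimes then differ in whether each player maximizes its own \(J_i\), a joint objective, or its own objective net of a cost subsidy, which is what drives the Pareto comparisons reported above.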
This article considers the government, opinion leaders, and Internet users to be a system for correcting false information, and it considers the problem of correcting false information that arises in the aftermath of major emergencies. We use optimal control theory and differential game theory to construct differential game models of decentralized decision-making, centralized decision-making, and subsidized decision-making. The solutions to these models and their numerical simulations show that the government, opinion leaders, and Internet users exercise cost-subsidized decision-making instead of decentralized decision-making. The equilibrium strategies, local optimal benefits, and overall optimal benefits of the system achieve Pareto improvement. Given the goal of maximizing the benefits to the system under centralized decision-making, the equilibrium results are Pareto-optimal. The research here provides a theoretical basis for dealing with the mechanism of correcting false information arising from major emergencies, and our conclusions provide methodological support for the government to effectively deal with such scenarios.
Disinformation refers to false or misleading information created with the deliberate intention to deceive and cause individual or societal harm. It is typically distinguished from misinformation, which involves falsehoods shared without deceptive intent, and from malinformation, which uses accurate information in misleading or harmful ways. Terms often used interchangeably in public debate—such as fake news, propaganda, and conspiracy theories—describe related but distinct phenomena with differing aims and methods. The term derives from the Soviet concept of dezinformatsiya, originally associated with covert influence operations and strategic deception. Over time, however, its meaning has expanded to encompass a wide range of manipulative practices enacted by both state and non-state actors. Disinformation can take textual, visual, and multimodal forms, including fabricated images and AI-generated content such as deepfakes. Motivations vary and may include political influence, economic gain, ideological mobilisation, or efforts to stigmatise specific groups. Although these practices have long historical precedents, digital and platformised communication environments have amplified their scale, speed, and persuasive potential. This entry provides a narrative overview and conceptual synthesis structured around four dimensions: the history of disinformation, the supply and diffusion mechanisms, the psychological, social, and narrative drivers, and the interventions designed to mitigate its impact.
Coordinated inauthentic behavior (CIB) is a manipulative communication tactic that uses a mix of authentic, fake, and duplicated social media accounts to operate as an adversarial network (AN) across multiple social media platforms. The article aims to clarify how CIB's emerging communication tactic “secretly” exploits technology to massively harass, harm, or mislead the online debate around issues crucial for society, like COVID-19 vaccination. CIB's manipulative operations could be one of the greatest threats to freedom of expression and democracy in our society. CIB campaigns mislead others by acting with pre-arranged exceptional similarity and “secret” operations. Previous theoretical frameworks have failed to evaluate the role of CIB in vaccination attitudes and behavior. In light of recent international and interdisciplinary CIB research, this study critically analyzes the case of a COVID-19 anti-vaccine adversarial network removed from Meta at the end of 2021 for brigading: a violent and harmful attempt to tactically manipulate the COVID-19 vaccine debate in Italy, France, and Germany. The following focal issues are discussed: (1) CIB manipulative operations, (2) their extensions, and (3) challenges in CIB's identification. The article shows that CIB acts in three domains: (i) structuring inauthentic online communities, (ii) exploiting social media technology, and (iii) deceiving algorithms to extend communication outreach to unaware social media users, a matter of concern for the general audience of CIB-illiterates. Upcoming threats, open issues, and future research directions are discussed.
This study aims to explain the diffusion pathways of coordinated inauthentic behavior during the Russia–Ukraine conflict. A dataset of 685,491 tweets containing the hashtag #russia on Twitter was used to construct a coordination network based on textual similarity and time synchronicity. By identifying leader-follower relationships, analyzing hourly time slices, and analyzing evolution metrics, four key insights were revealed. First, leaders constitute a stable core with an average of about 1741 nodes while peripheral followers fluctuate substantially, indicating a resilient core-peripheral structure. Second, diffusion advances across multiple fronts rather than remaining within single communities, with 67.05% of leader-follower ties crossing content clusters and the top 30 leaders posting across an average of 7.1 clusters and up to 9. Third, apparent synchronization is not driven by posting density alone but arises from rhythmic coupling between leaders and followers, as followers respond after an average delay of about 30 min and cluster peaks typically occur within less than 1 hour of each other. Fourth, diffusion capacity is not released once and for all but regenerates along a trajectory that moves from concentration to multiploidization and then to restructuring. Based on the results, we conceptualize coordinated inauthentic behavior as a strategically adaptive system with regenerative properties and provide governance implications.
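A minimal sketch of the core construction (linking accounts whose posts are textually near-identical and fall inside the same hourly slice); the tweets, similarity threshold, and window are toy assumptions:

```python
# A minimal sketch of a coordination network built from textual similarity
# plus time synchronicity; rows, threshold, and window are toy assumptions.
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = [  # (user, unix_time, text)
    ("u1", 0,     "stand with #russia now"),
    ("u2", 600,   "stand with #russia now"),
    ("u3", 90000, "completely unrelated post"),
]
X = TfidfVectorizer().fit_transform([t[2] for t in tweets])
sim = cosine_similarity(X)

G = nx.Graph()
for i, j in itertools.combinations(range(len(tweets)), 2):
    same_window = abs(tweets[i][1] - tweets[j][1]) <= 3600  # one hourly slice
    if same_window and sim[i, j] >= 0.9:                    # near-identical text
        G.add_edge(tweets[i][0], tweets[j][0])

print(G.edges())  # coordinated pair: [('u1', 'u2')]
```

On a real dataset, leader-follower direction can then be assigned within each edge by who posted first, which is the basis for the lag and core-periphery statistics reported above.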
This research explored how users interacted with inauthentic social media accounts, with the goal of gaining insight into the tactics employed by state-backed disinformation efforts. We combine hand coding with natural-language processing to measure the ways in which users talked with and about the accounts employed by the Russian-affiliated Internet Research Agency (IRA) in the month before the 2016 U.S. election. We find that user mentions were overwhelmingly supportive of the IRA accounts, belying the standard characterization of these personas as “trolls.” This pattern is particularly strong for the more ideological troll types, suggesting that a strategy of building homophilic connections with like-minded people was central to the IRA campaign. The strategy seems to work: on days when the personas’ mentions were more supportive, they received more engagement. An initial qualitative phase of research showed that when users engaged with the ideologically oriented IRA personas, their messaging was largely supportive in nature. Building from this qualitative stage, we conducted quantitative analysis on a larger corpus of data and found a positive relationship between the level of supportiveness that IRA accounts received and both the engagement they received, as measured by retweets, and their growth, as measured by followers. These findings illustrate the important role positive interactions with users can play in coordinated information operation tactics on social media.
The rapid rise of generative AI has fueled more sophisticated disinformation campaigns, particularly on encrypted messaging platforms like WhatsApp, Signal, and Telegram. While these platforms protect user privacy through end-to-end encryption, they pose significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate undetectable synthetic propaganda, influencing public opinion and destabilizing democratic processes without leaving a trace. This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy. Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors. Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.
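A minimal sketch of the idea using PyTorch Geometric: a two-layer GCN classifying accounts from non-content features (message rate, timing regularity) over an interaction graph. The graph, features, and labels below are toy assumptions, not the paper's framework:

```python
# A minimal sketch of a GNN over non-content signals; the graph, features,
# and labels are toy assumptions, not the paper's framework.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])         # "exchanged messages with"
x = torch.tensor([[5.0, 0.1],                     # [messages/day, burstiness]
                  [120.0, 0.9],
                  [80.0, 0.8]])
y = torch.tensor([0, 1, 1])                       # 0 = organic, 1 = coordinated (hypothetical)
data = Data(x=x, edge_index=edge_index, y=y)

class AccountGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(2, 16)
        self.conv2 = GCNConv(16, 2)
    def forward(self, d):
        h = F.relu(self.conv1(d.x, d.edge_index))
        return self.conv2(h, d.edge_index)

model = AccountGCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    opt.step()
print(model(data).argmax(dim=1))  # predicted class per account
```

Because only metadata-derived node features and the interaction topology enter the model, message bodies can stay encrypted end to end, which is the privacy-preserving property the framework claims.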
Digital activism, defined as political engagement via internet-connected technologies and social media platforms, has radically changed the ways in which movements attract supporters and spread their political messages. This paper takes an exhaustive look at digital activism at the confluence of social media platforms like Facebook, YouTube, Instagram, and Twitter with political movement-building, assessing both its transformative potential and its major limitations. Social media affordances such as the reduced cost of participation, identity signaling, and algorithmic curation act as important catalysts for movement success. However, the paper notes serious challenges to the integrity of digital activism: increasing censorship, deepfakes, and coordinated inauthentic behavior attacks expose ecosystem vulnerabilities and make it harder than ever to tell real people apart from manipulated personas. The analysis stresses that even though the internet is a giant leap forward for civic participation, closing digital divides in access and usage equity, as well as ensuring government accountability, remain very much part of the equation. Transformative potential does not come from technological innovation alone; it requires paired democratic governance frameworks, digital literacy initiatives, and human rights protections online as well as offline. Keywords: Digital Activism, Social Media, Political Movement
This article presents the first systematic study of information manipulation in social media through images of Ukrainian soldiers generated by artificial intelligence (AI). Focusing on Facebook between 2022 and 2025, the research examines how fabricated soldier images created by generative neural networks have been disseminated for manipulative purposes. Several recurring thematic clusters are identified. Many of the pages spreading such content feature fictitious administrators (often abroad), display automated behavior (bots, automated commenting), and exemplify “coordinated inauthentic behavior” (CIB). For the first time, manipulations using generated images are compared across the Ukrainian and foreign segments of Facebook. The study highlights the risks of eroding trust in social networks amid the spread of the “Dead Internet,” an environment saturated with bots and synthetic content. By analyzing these practices, the article contributes to research on digital manipulation and calls for a deeper conceptualization of AI’s impact on social processes in the context of information warfare.
Context. The research is aimed at the application of artificial intelligence to the development and improvement of means of cyber warfare, in particular for combating disinformation, fakes, and propaganda in the Internet space and for identifying sources of disinformation and the inauthentic behavior (bots) of coordinated groups. The implementation of the project will contribute to solving the important and currently relevant issue of information manipulation in the media: to fight effectively against distortion and disinformation, an effective tool is needed for recognizing these phenomena in textual data, so that a further strategy can be developed to prevent their spread. Objective. The objective of the study is to develop a means of automatically recognizing political propaganda in textual data, built on supervised machine learning and implemented using natural language processing methods. Method. The presence of propaganda is recognized at two levels: at the general level, that is, at the level of the document, and at the level of individual sentences. To implement the project, feature construction methods such as the TF-IDF statistical indicator, the bag-of-words vectorization model, part-of-speech tagging, the word2vec model for obtaining vector representations of words, and the recognition of trigger words (reinforcing words, absolute pronouns, and “shiny” words) were used. Logistic regression was used as the main modeling algorithm. Results. Machine learning models have been developed to recognize propaganda, fakes, and disinformation at the document (article) and sentence level. Both model scores are satisfactory, but the model for document-level propaganda recognition performed almost 1.2 times better (by 20%). Conclusions. The created model shows excellent results in recognizing propaganda, fakes, and disinformation in textual content based on NLP and machine learning methods. Analysis of the raw data showed that the document-level propaganda recognition model correctly classified 6097 non-propaganda articles and 694 propaganda articles, while 123 propaganda articles and 285 non-propaganda articles were misclassified, giving a model score of 0.9433. The sentence-level propaganda recognition model successfully classified 205 propaganda sentences and 1917 non-propaganda sentences, with a model score of 0.7438 (731 sentences were incorrectly classified).
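A minimal sketch of the document-level setup the abstract describes, pairing TF-IDF features with logistic regression; the toy documents and labels are assumptions:

```python
# A minimal sketch of a TF-IDF + logistic regression propaganda classifier
# as described above; the toy documents and labels are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["our glorious leader always triumphs over every enemy",
        "the council met on tuesday to review the annual budget",
        "enemies everywhere plot against our sacred nation",
        "rainfall this month was slightly above the seasonal average"]
labels = [1, 0, 1, 0]  # 1 = propaganda (hypothetical)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["they will stop at nothing to destroy us"]))
```

The word2vec, part-of-speech, and trigger-word features the abstract lists would be concatenated to the TF-IDF vectors in the same pipeline; logistic regression remains the final modeling step either way.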
Successful disinformation campaigns depend on the availability of fake social media profiles used for coordinated inauthentic behavior with networks of false accounts including bots, trolls, and sockpuppets. This study presents a scalable and unsupervised framework to identify visual elements in user profiles strategically exploited in nearly 60 influence operations, including camera angle, photo composition, gender, and race, but also more context-dependent categories like sensuality and emotion. We leverage Google’s Teachable Machine and the DeepFace Library to classify fake user accounts in the Twitter Moderation Research Consortium database, a large repository of social media accounts linked to foreign influence operations. We discuss the performance of these classifiers against manually coded data and their applicability in large-scale data analysis. The proposed framework demonstrates promising results for the identification of fake online profiles used in influence operations and by the cottage industry specialized in crafting desirable online personas.
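The DeepFace library mentioned above exposes the attribute classifiers the study leverages; a minimal usage sketch (the image path is a placeholder, and the structure of the returned result may vary across library versions):

```python
# A minimal sketch of extracting the profile-photo attributes the study
# classifies; the image path is a placeholder.
from deepface import DeepFace

result = DeepFace.analyze(
    img_path="profile_photo.jpg",            # placeholder path
    actions=["gender", "race", "emotion"],   # attribute classifiers to run
    enforce_detection=False,                 # tolerate low-quality avatars
)
# Recent DeepFace versions return a list with one dict per detected face.
face = result[0]
print(face["dominant_gender"], face["dominant_race"], face["dominant_emotion"])
```

Run at scale over a repository of profile images, these per-face attributes become the unsupervised features from which recurring visual patterns of fake personas are identified.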
This article analyzes the impact of digital technologies on electoral processes through the lens of European standards aimed at protecting democratic choice. The use of online platform algorithms, microtargeting of political advertising, bots, and deepfakes have become new challenges for election transparency and fairness. Following the Cambridge Analytica scandal, the European Union developed comprehensive regulatory mechanisms to counter digital threats, notably the EU Code of Practice on Disinformation and the Regulation on Transparency of Political Advertising. The article examines key instruments used to combat digital manipulations, including transparency rules for political campaigning, accountability for coordinated inauthentic behavior, and measures for detecting and countering deepfakes. Special attention is given to the role of social media platforms in spreading electoral disinformation and their accountability to society. Furthermore, the compliance of the EU legislative initiatives with international standards set by the Council of Europe and the United Nations is analyzed, alongside their significance for EU candidate countries, including Ukraine. Key recommendations are provided for adapting Ukrainian legislation to European digital regulation standards for electoral processes. The study's results underscore the necessity of a comprehensive legal approach to ensuring election integrity amid digital transformation and the impact of hybrid threats.
The landscape of information has experienced significant transformations with the rapid expansion of the internet and the emergence of online social networks. Initially, there was optimism that these platforms would encourage a culture of active participation and diverse communication. However, recent events have brought to light the negative effects of social media platforms, leading to the creation of echo chambers, where users are exposed only to content that aligns with their existing beliefs. Furthermore, malicious individuals exploit these platforms to deceive people and undermine democratic processes. To gain a deeper understanding of these phenomena, this chapter introduces a computational method designed to identify coordinated inauthentic behavior within Facebook groups. The method focuses on analyzing posts, URLs, and images, revealing that certain Facebook groups engage in orchestrated campaigns. These groups simultaneously share identical content, which may expose users to repeated encounters with false or misleading narratives, effectively forming “disinformation echo chambers.” This chapter concludes by discussing the theoretical and empirical implications of these findings.
Unlike most other forms of coordinated, inauthentic behavior occurring online, the goals of state-sponsored information operations, or SSIOs, are often complex and multifaceted. These goals range from flooding conversations with a certain narrative, to increasing the public's engagement with news sources of questionable quality, to stoking tensions between ideologically opposed groups to weaken public trust. The prevailing theoretical framework for understanding SSIOs is to treat them as a social botnet: a behaviorally homogeneous cluster of coordinated activity. However, the social bot framework is both at odds with some of the behaviors observed in early SSIOs and more broadly with the wide swathe of goals these operations set out to accomplish. To examine the fit of the social bot framework in the SSIO context, we develop a novel bag-of-words based method for clustering and describing user activity traces. Applying this method to a comprehensive repository of SSIOs conducted on Twitter over the last decade, we find that SSIOs violate both the core assumption of the social bot framework, and how it is operationalized in practical work. Instead, we find that SSIOs exhibit a clear division of labor and propose cooperative work with social roles as a more effective theoretical framework for understanding SSIOs. Through applying this framework, we find that the roles that SSIO agents take on have become more stable and simple over time, which holds substantial implications for developing methods for detection of these operations in the wild.
Coordinated information operations remain a persistent challenge on social media, despite platform efforts to curb them. While previous research has primarily focused on identifying these operations within individual platforms, this study shows that coordination frequently transcends platform boundaries. Leveraging newly collected data of online conversations related to the 2024 U.S. Election across 𝕏 (formerly Twitter), Facebook, and Telegram, we construct similarity networks to detect coordinated communities exhibiting suspicious sharing behaviors within and across platforms. Proposing an advanced coordination detection model, we reveal evidence of potential foreign interference, with Russian-affiliated media being systematically promoted across Telegram and 𝕏. Our analysis also uncovers substantial intra- and cross-platform coordinated inauthentic activity, driving the spread of highly partisan, low-credibility, and conspiratorial content. These findings highlight the urgent need for regulatory measures that extend beyond individual platforms to effectively address the growing challenge of cross-platform coordinated influence campaigns.
As digital authoritarianism evolves, new tools are needed to analyse its increasingly subtle forms. This paper adopts a longitudinal analysis of under-explored secondary sources to examine coordinated inauthentic behaviour (CIB) linked to Serbia's ruling party, Serbian Progressive Party (SNS). Unlike centralised bot farms common in autocratic regimes, SNS networks evade detection longer and more effectively mimic authentic support. Drawing on sources rarely translated into English and often at risk of censorship, we contextualise these findings within the framework of third-wave autocratisation. Our research reveals that SNS coordinates CIB through public-sector personnel co-optation, leveraging state employment to incentivise participation. This form of organisation is under-explored in CIB literature. For this reason, this study offers new insights into how modern autocracies adapt their influence strategies. Moreover, the paper highlights the need for diverse methodological approaches to improve the detection and understanding of evolving digital propaganda.
Online manipulation is a pressing concern for democracies, but the actions and strategies of coordinated inauthentic accounts, which have been used to interfere in elections, are not well understood. We analyze a five million-tweet multilingual dataset related to the 2017 French presidential election, when a major information campaign led by Russia called "#MacronLeaks" took place. We utilize heuristics to identify coordinated inauthentic accounts and detect attitudes, concerns and emotions within their tweets, collectively known as socio-linguistic characteristics. We find that coordinated accounts retweet other coordinated accounts far more than expected by chance, while being exceptionally active just before the second round of voting. Concurrently, socio-linguistic characteristics reveal that coordinated accounts share tweets promoting a candidate at three times the rate of non-coordinated accounts. Coordinated account tactics also varied in time to reflect news events and rounds of voting. Our analysis highlights the utility of socio-linguistic characteristics to inform researchers about tactics of coordinated accounts and how these may feed into online social manipulation.
No abstract available
The 2015–2017 Russian Internet Research Agency (IRA)’s coordinated information operation is one of the earliest and most studied of the social media age. A set of 38 city-specific inauthentic “newsfeeds” made up a large, underanalyzed part of its English-language output. We label 1,000 tweets from the IRA newsfeeds and a matched set of real news sources from those same cities with up to five labels indicating the tweet represents a world in unrest and, if so, of what sort. We train a natural language classifier to extend these labels to 268k IRA tweets and 1.13 million control tweets. Compared to the controls, tweets from the IRA were 34% more likely to represent unrest, especially crime and identity danger, and this difference jumped to about twice as likely in the months immediately before the election. Agenda setting by media is well-known and well-studied, but this weaponization by a coordinated information operation is novel.
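The label-extension step, training a text classifier on the hand-labeled tweets and applying it to the full corpus, can be approximated with a simple multi-label pipeline; the label names below abbreviate the paper's scheme loosely, and the tweets are invented for illustration.

```python
# Hedged sketch: extend a small set of hand labels to a larger corpus with a
# multi-label text classifier (the paper's actual model is not specified here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

labeled_tweets = ["riots downtown tonight", "local bake sale this weekend"]
labels = [{"unrest", "crime"}, set()]            # up to five labels per tweet

mlb = MultiLabelBinarizer(classes=["unrest", "crime", "identity_danger"])
Y = mlb.fit_transform(labels)

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(labeled_tweets, Y)

# Extend labels to the (much larger) unlabeled corpus.
print(mlb.inverse_transform(clf.predict(["gang violence spreading fast"])))
```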
Technology has reshaped political communication, allowing fake engagement to drive real influence in the democratic process. Hyperactive social media users, who are over-proportionally active in relation to the mean, are the new political activists, spreading partisan content at scale on social media platforms. Using The Authenticity Matrix tool, this study revealed Facebook accounts of hyperactive users exhibiting inauthentic behaviour that were used during the electoral campaign (May 10, 2024, to June 8, 2024) for the 2024 election of Romanian members of the European Parliament. The results indicate that, for some posts, up to 45% of shares were made by hyperactive users (four or more shares per post by the same account) and 33.9% by super-active users (10 or more times). This type of online behavior is considered by Meta to be manipulation of “public opinion,” “political discussion,” and “public debate,” and Meta’s Community Standards commit the platform to preventing such behavior in the context of elections. Another key contribution of this research is the identification of dominant characteristics of hyperactive user accounts, using information publicly available on their social media profiles, which provides insights into their specific features and helps users better identify them on social media. The article highlights that online social network platforms condemn these manipulative practices in theory, but do not take sufficient measures to effectively reduce them and limit their impact on our societies.
Astroturfing, trolling, bots, false amplifiers and social media accounts with inauthentic behavior are used in online political communication even though they have a real-world dangerous effect on democratic systems. Some of these activities involve users who are over-proportionally active in relation to the mean. This explanatory research sought to determine whether such hyperactive users are utilized to share political posts to create an impression of popularity for a specific political message and, thus, to influence the recommendation algorithms of social networks in order to increase the exposure of political messages. In this study, I analyzed the most shared posts during the election campaign on the official Facebook pages of the first three ranked candidates in the 2019 Romanian presidential election. The research revealed that an average of 18.3% of shares were made by hyperactive users on their own timeline or in different Facebook groups, with some users sharing the same post 69 times. Furthermore, I identified some of the characteristics of hyperactive users’ accounts based on their public social media profiles, which may help in understanding the specifics of these accounts. The results show that election communication involves activities considered by Facebook to be a practice of “manipulating public opinion” and of “manipulating political discussion” (Weedon et al., 2017, p. 5).
Organized attempts to manipulate public opinion during election run-ups have dominated online debates in the last few years. Such attempts require numerous accounts to act in coordination to exert influence. Yet, the ways in which coordinated behavior surfaces during major online political debates is still largely unclear. This study sheds light on coordinated behaviors that took place on Twitter (now X) during the 2020 US Presidential Election. Utilizing state-of-the-art network science methods, we detect and characterize the coordinated communities that participated in the debate. Our approach goes beyond previous analyses by proposing a multifaceted characterization of the coordinated communities that allows obtaining nuanced results. In particular, we uncover three main categories of coordinated users: (i) moderate groups genuinely interested in the electoral debate, (ii) conspiratorial groups that spread false information and divisive narratives, and (iii) foreign influence networks that either sought to tamper with the debate or that exploited it to publicize their own agendas. We also reveal a large use of automation by far-right foreign influence and conspiratorial communities. Conversely, left-leaning supporters were overall less coordinated and engaged primarily in harmless, factual communication. Our results also showed that Twitter was effective at thwarting the activity of some coordinated groups, while it failed on some other equally suspicious ones. Overall, this study advances the understanding of online human interactions and contributes new knowledge to mitigate cyber social threats.
Online information operations (IOs) refer to organized attempts to tamper with the regular flow of information and to influence public opinion. Coordinated online behavior is a tactic frequently used by IO perpetrators to boost the spread and outreach of their messages. However, the exploitation of coordinated behavior within large-scale IOs is still largely unexplored. Here, we build a novel dataset comprising around 624K users and 4M tweets to study how online coordination was used in two recent IOs carried out on Twitter. We investigate the interplay between coordinated behavior and IOs with state-of-the-art network science and coordination detection methods, providing evidence that the perpetrators of both IOs were indeed strongly coordinated. Furthermore, we propose quantitative indicators and analyses to study the different patterns of coordination, uncovering a malicious group of users that managed to hold a central position in the discussion network, and others who remained at the periphery of the network, with limited interactions with genuine users. The nuanced results enabled by our analysis provide insights into the strategies, development, and effectiveness of the IOs. Overall, our results demonstrate that the analysis of coordinated behavior in IOs can contribute to safeguarding the integrity of online platforms.
Social media platforms can play a pivotal role in shaping public opinion during times of crisis and controversy. The COVID-19 pandemic resulted in a large amount of dubious information being shared online. In Belgium, a crisis emerged during the pandemic when a soldier (Jürgen Conings) went missing with stolen weaponry after threatening politicians and virologists. This case created further division and polarization in online discussions. In this paper, we develop a methodology to study the potential of coordinated spread of incorrect information online. We combine network science and content analysis to infer and study the social network of users discussing the case, the news websites shared by those users, and their narratives. Additionally, we examined indications of bots or coordinated behavior among the users. Our findings reveal the presence of distinct communities within the discourse. Major news outlets, conspiracy theory websites, and anti-vax platforms were identified as the primary sources of (dis)information sharing. We also detected potential coordinated behavior and bot activity, indicating possible attempts to manipulate the discourse. We used the rapid semantic similarity network for the analysis of text, but our approach can be extended to the analysis of images, videos, and other types of content. These results provide insights into the role of social media in shaping public opinion during times of crisis and underscore the need for improved strategies to detect and mitigate disinformation campaigns and online discourse manipulation. Our research can aid intelligence community members in identifying and disrupting networks that spread extremist ideologies and false information, thereby promoting a more informed and resilient society.
Social media has become a crucial conduit for the swift dissemination of information during global crises. However, this also paves the way for the manipulation of narratives by malicious actors. This research delves into the interaction dynamics between coordinated (malicious) entities and organic (regular) users on Twitter amidst the Gaza conflict. Through the analysis of approximately 3.5 million tweets from over 1.3 million users, our study uncovers that coordinated users significantly impact the information landscape, successfully disseminating their content across the network: a substantial fraction of their messages is adopted and shared by organic users. Furthermore, the study documents a progressive increase in organic users' engagement with coordinated content, which is paralleled by a discernible shift towards more emotionally polarized expressions in their subsequent communications. These results highlight the critical need for vigilance and a nuanced understanding of information manipulation on social media platforms.
In the intricate landscape of social media, genuine content dissemination may be altered by a number of threats. Coordinated Behavior (CB), defined as orchestrated efforts by entities to deceive or mislead users about their identity and intentions, emerges as a tactic to exploit or manipulate online discourse. This study delves into the relationship between CB and toxic conversation on X (formerly known as Twitter). Using a dataset of 11 million tweets from 1 million users preceding the 2019 UK general election, we show that users displaying CB typically disseminate less harmful content, irrespective of political affiliation. However, distinct toxicity patterns emerge among different coordinated cohorts. Compared to their non-CB counterparts, CB participants show marginally higher toxicity levels only when considering their original posts. We further show the effects of CB-driven toxic content on non-CB users, gauging its impact based on political leanings. Our findings suggest that CB only has a limited impact on the toxicity of digital discourse.
Coordinated online behaviors are an essential part of information and influence operations, as they enable disinformation to spread more effectively. Most studies on coordinated behaviors have involved manual investigations, and the few existing computational approaches make bold assumptions or oversimplify the problem to make it tractable. Here, we propose a new network-based framework for uncovering and studying coordinated behaviors on social media. Our research extends existing systems and goes beyond limiting binary classifications of coordinated and uncoordinated behaviors. It makes it possible to expose different coordination patterns and to estimate the degree of coordination that characterizes diverse communities. We apply our framework to a dataset collected during the 2019 UK General Election, detecting and characterizing coordinated communities that participated in the electoral debate. Our work conveys both theoretical and practical implications and provides more nuanced and fine-grained results for studying online information manipulation.
Information Operations (IOs) pose a significant threat to the integrity of democratic processes, with the potential to influence election-related online discourse. In anticipation of the 2024 U.S. presidential election, we present a study aimed at uncovering the digital traces of coordinated IOs on X (formerly Twitter). Using our machine learning framework for detecting online coordination, we analyze a dataset comprising election-related conversations on X from May 2024. This reveals a network of coordinated inauthentic actors, displaying notable similarities in their link-sharing behaviors. Our analysis shows concerted efforts by these accounts to disseminate misleading, redundant, and biased information across the Web through a coordinated cross-platform information operation: The links shared by this network frequently direct users to other social media platforms or suspicious websites featuring low-quality political content and, in turn, promoting the same X and YouTube accounts. Members of this network also shared deceptive images generated by AI, accompanied by language attacking political figures and symbolic imagery intended to convey power and dominance. While X has suspended a subset of these accounts, more than 75% of the coordinated network remains active. Our findings underscore the critical role of developing computational models to scale up the detection of threats on large social media platforms, and emphasize the broader implications of these techniques to detect IOs across the wider Web.
Information operations (IOs) pose a significant threat to the integrity of democratic processes, with the potential to influence election-related online discourse. In anticipation of the 2024 U.S. presidential election, we present a study aimed at uncovering the digital traces of coordinated IOs on X (formerly Twitter). Using our machine learning framework for detecting online coordination, we analyze a dataset comprising election-related conversations on X from May to July 2024. This reveals a network of coordinated inauthentic actors, displaying notable similarities in their link-sharing behaviors. Our analysis shows concerted efforts by these accounts to disseminate misleading, redundant, and biased information across the Web through a coordinated cross-platform information operation: The links shared by this network frequently direct users to other social media platforms or mock news sites featuring low-quality political content and, in turn, promoting the same X and YouTube accounts. Members of this network also shared deceptive images generated by AI, accompanied by language attacking political figures and symbolic imagery intended to convey power and dominance. While X has suspended or restricted a subset of these accounts, 75 percent of the coordinated network remains active, garnering substantial traction over time: The suspicious websites promoted by this coordinated network are shared thousands of times per day by the X user base, further amplifying their reach and potential impact. Our findings underscore the critical role of developing computational models to scale up the detection of threats on large social media platforms, and emphasize the broader implications of these techniques to detect IOs across the wider Web.
This rapid review synthesizes current research on the influence of artificial intelligence (AI) technologies on voter behavior and electoral outcomes during Brazil's 2022 presidential election. Through systematic analysis of thirteen studies, we identified three primary mechanisms through which AI shapes electoral processes: sentiment analysis and opinion tracking, algorithmic amplification of polarizing content, and automated disinformation campaigns. AI-powered sentiment analysis achieved up to 90% accuracy in tracking voter preferences and frequently aligned with electoral outcomes. Bot networks and coordinated campaigns significantly influenced information dissemination, with regional variations mirroring actual voting patterns—positive sentiment for Bolsonaro concentrated in the Southeast and support for Lula in the Northeast. These findings suggest that AI technologies fundamentally reshape democratic discourse by enabling sophisticated political monitoring while creating new vulnerabilities for electoral manipulation.
The paper proposes an advanced approach for identifying disinformation on Telegram channels related to the Russo-Ukrainian conflict, utilizing state-of-the-art (SOTA) deep learning techniques and transfer learning. Traditional methods of disinformation detection, often relying on manual verification or rule-based systems, are increasingly inadequate in the face of rapidly evolving propaganda tactics and the massive volume of data generated daily. To address these challenges, the proposed system employs deep learning algorithms, including LLM models, which are fine-tuned on a custom dataset encompassing verified disinformation and legitimate content. The paper's findings indicate that this approach significantly outperforms traditional machine learning techniques, offering enhanced contextual understanding and adaptability to emerging disinformation strategies.
The growing development of capabilities, techniques, and technologies associated with Artificial Intelligence opens a new scenario full of opportunities for Foreign Information Manipulation and Interference (FIMI), which presents an added challenge for detecting and countering disinformation content. Within the preventive strategies to combat these threats, digital literacy emerges as essential. The aim of the research, whose method is detailed here, is to proactively identify emerging technological trends that will shape the disinformation and FIMI ecosystem over the next 10 years, and to propose competency-based training and curricular models to address these challenges in their latent phase.
- The article introduces a Delphi study model designed to identify training needs to effectively combat disinformation and FIMI in the context of generative AI and other emerging technologies.
- A detailed guide to the conducted study is provided, intended to be replicated in future research or to support the development of subsequent studies aimed at understanding the training needs arising from new technologies applied to misinformation.
- Questionnaires and datasets are provided with results that enable comparative, longitudinal, and replication studies.
New generative artificial intelligence (GenAI) technology could have devastating consequences on our democracy because it can be easily used to spread disinformation at scale while simultaneously personalizing propaganda to demographics or individuals. The threat is significant: large-scale GenAI-based disinformation campaigns can sway public opinion, shape political events, or even compromise the integrity of elections. One method for defending against large-scale GenAI disinformation is to build tools for autonomously detecting AI-generated content. In this article, we evaluate state-of-the-art AI detection tools. Additionally, we propose a novel AI content detection method which demonstrates up to a 48% improvement in accuracy (over existing tools) for autonomously detecting AI-generated content.
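The paper's own detector is not reproduced here; the sketch below shows a common baseline that such tools are often compared against: scoring a text's perplexity under a pretrained language model, on the intuition that machine-generated text tends to be unusually predictable. The GPT-2 checkpoint and the threshold of 40 are assumptions for illustration.

```python
# Perplexity-based baseline for AI-content detection (a sketch, not the
# paper's method).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss      # mean token cross-entropy
    return float(torch.exp(loss))

text = "The quick brown fox jumps over the lazy dog."
# Low perplexity -> possibly AI-generated; 40.0 is an arbitrary cutoff.
print(perplexity(text) < 40.0)
```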
The rise of digital platforms has facilitated the rapid spread of disinformation, which poses significant social, political, and economic challenges. Knowledge graphs (KGs) are emerging as effective tools for enhancing the accuracy, interpretability, and scalability of fake news detection systems, addressing limitations in traditional machine learning-based approaches that rely primarily on linguistic analysis. This work contains a literature review that synthesizes findings from recent studies on the application of KGs in disinformation detection. We identify how KGs improve detection by encoding real relationships, analyzing context, and enhancing model interpretability, while also discussing current limitations in scalability, data completeness, and contextual adaptability. The reviewed studies underscore the need for future research focusing on scalable, real-time, and cross-linguistic KG models to bolster disinformation detection capabilities globally. Moreover, we present preliminary results of two use cases, showcasing a methodology for constructing KGs that can serve as useful tools to fight against disinformation spread.
In the era of the digital world, while freedom of speech has been flourishing, it has also paved the way for disinformation, causing detrimental effects on society. Legal and ethical criteria are insufficient to address this concern, thus necessitating technological intervention. This paper presents a novel method leveraging the pre-finetuning concept for the efficient detection and removal of disinformation that may undermine society, as deemed by judicial entities. We argue for the importance of detecting this type of disinformation and validate our approach with real-world data derived from court orders. Following a study that highlighted four areas of interest for rumor analysis, our research proposes the integration of a fine-grained sentiment analysis task in the pre-finetuning phase of language models, using the GoEmotions dataset. Our experiments validate the effectiveness of our approach in enhancing performance significantly. Furthermore, we explore the application of our approach across different languages using multilingual language models, showing promising results. To our knowledge, this is the first study that investigates the role of sentiment analysis pre-finetuning in disinformation detection.
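A conceptual sketch of the two-stage recipe, pre-finetune on GoEmotions and then fine-tune for disinformation, might look as follows with Hugging Face Transformers; the checkpoint name and the elided training loops are placeholders, and the only dataset-specific detail used is GoEmotions' label count (27 emotions plus neutral).

```python
# Two-stage sketch of sentiment pre-finetuning before disinformation
# fine-tuning. Training loops are intentionally elided.
from transformers import AutoModelForSequenceClassification

# Stage 1: pre-finetune on GoEmotions (27 emotions + neutral = 28 labels).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=28)  # checkpoint is assumed
# ... train on GoEmotions here ...
model.save_pretrained("emotion-pre-finetuned")

# Stage 2: reuse the emotion-tuned encoder with a fresh binary head.
detector = AutoModelForSequenceClassification.from_pretrained(
    "emotion-pre-finetuned", num_labels=2, ignore_mismatched_sizes=True)
# ... fine-tune `detector` on labeled disinformation data ...
```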
Since the technology for generating synthetic media content became available to a wider audience in 2022, the social and communication sciences face the urgent question of how these technologies can be used to spread disinformation and how well recipients are equipped to deal with this risk. Research so far has focused primarily on the phenomenon of deepfakes, which mostly refers to visual media generated or modified by artificial intelligence. Most studies aim to test how well recipients can detect such deepfakes, and they generally conclude that recipients are rather poor at detecting them. In contrast, this analysis focuses on the broader concept of synthetic disinformation, which includes all forms of AI-generated content for the purpose of deception. We investigate the process of how actors with professional expertise in the field of disinformation try to detect AI-generated disinformation in text, visual and audio content and which strategies and resources they employ. To gauge an upper bound for societal preparedness, we conducted guided interviews with 41 actors in elite positions from four sectors of German society (politics, corporations, media and administration) and asked them about their strategies for detecting synthetic disinformation in text, visual and audio content. The respondents apply different detection strategies for the three media formats. The data shows substantial differences between the four groups when it comes to detection strategies. Only the media professionals consistently describe analytical, rather than simply intuitive, methods for verification.
In today’s rapidly evolving digital age, disinformation poses a significant threat to public sentiment and socio-political dynamics. To address this, we introduce a new dataset “DeFaktS”, designed to understand and counter disinformation within German media. Distinctively curated across various news topics, DeFaktS offers an unparalleled insight into the diverse facets of disinformation. Our dataset, containing 105,855 posts with 20,008 meticulously labeled tweets, serves as a rich platform for in-depth exploration of disinformation’s diverse characteristics. A key attribute that sets DeFaktS apart is its fine-grained annotations based on polarized categories. Our annotation framework, grounded in the textual characteristics of news content, eliminates the need for external knowledge sources. Unlike most existing corpora that typically assign a singular global veracity value to news, our methodology seeks to annotate every structural component and semantic element of a news piece, ensuring a comprehensive and detailed understanding. In our experiments, we employed a mix of classical machine learning and advanced transformer-based models. The results underscored the potential of DeFaktS, with transformer models, especially the German variant of BERT, exhibiting pronounced effectiveness in both binary and fine-grained classifications.
Along with the times, false information spreads easily, including in Indonesia. In Press Release No. 485/HM/KOMINFO/12/2021, the Ministry of Communication and Information reported that it had cut off access to 565,449 pieces of negative content and published 1,773 clarifications on hoax and disinformation content. Research has been carried out regarding this matter, but it is necessary to classify fake news into disinformation and hoaxes. This study presents a comparison between our proposed model, an ensemble of shallow learning predictive models (Random Forest, Passive Aggressive Classifier, and Cosine Similarity), and a deep learning model that uses BERT-Indo for classification. Both models are trained on equivalent datasets containing 8,757 news items: 3,000 valid, 3,000 hoax, and 2,757 disinformation. These items were obtained from websites such as CNN, Kompas, Detik, Kominfo, Temanggung Mediacenter, Hoaxdb Aceh, Turnback Hoax, and Antara, and were then cleaned of all unnecessary elements, such as punctuation marks, numbers, Unicode, stopwords, and suffixes, using the Sastrawi library. At the benchmarking stage, the shallow learning model is evaluated to increase accuracy by applying ensemble learning combined using hard voting. This yields higher values, with an accuracy of 98.125%, precision of 98.2%, F1 score of 98.1%, and recall of 98.1%, compared to the BERT-Indo model, which only achieved 96.918% accuracy, 96.069% precision, 96.937% F1 score, and 96.882% recall. Based on the accuracy values, the shallow learning model is superior to the deep learning model. This machine learning model is expected to be used to combat the spread of hoaxes and disinformation in Indonesian news. Additionally, with this research, false news can be classified in more detail, as either hoaxes or disinformation.
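The hard-voting ensemble can be sketched as below. Since the abstract does not spell out how cosine similarity is used as a classifier, a 1-nearest-neighbor classifier with the cosine metric stands in for it here, and the toy Indonesian snippets are invented.

```python
# Sketch of a hard-voting ensemble of Random Forest, Passive Aggressive, and
# a cosine-similarity stand-in (1-NN with cosine distance).
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

texts = ["vaksin menyebabkan cip", "pemerintah resmi umumkan bantuan",
         "bumi datar menurut ahli", "harga BBM naik mulai besok"]
y = ["hoax", "valid", "disinformation", "valid"]

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("pac", PassiveAggressiveClassifier(max_iter=1000, random_state=0)),
        ("cos", KNeighborsClassifier(n_neighbors=1, metric="cosine")),
    ],
    voting="hard",  # majority vote over predicted labels
)
model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(texts, y)
print(model.predict(["vaksin mengandung cip berbahaya"]))
```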
Disinformation has become increasingly relevant in recent years, both as a political issue and as an object of research. Datasets for training machine learning models, especially for languages other than English, are sparse, and their creation is costly. Annotated datasets often have only binary or multiclass labels, which provide little information about the grounds and system of such classifications. We propose GerDISDETECT, a novel textual dataset for German disinformation. To provide comprehensive analytical insights, a fine-grained, taxonomy-guided annotation scheme is required. The goal of this dataset, instead of providing a direct assessment regarding true or false, is to provide wide-ranging semantic descriptors that allow for complex interpretation as well as inferred decision-making regarding the information and trustworthiness of potentially critical articles. This also allows the dataset to be used for other tasks. The dataset was collected in the first three months of 2022 and contains 39 multilabel classes with 5 top-level categories for a total of 1,890 articles: General View (3 labels), Offensive Language (11 labels), Reporting Style (15 labels), Writing Style (6 labels), and Extremism (4 labels). As a baseline, we further pre-trained a multilingual XLM-R model on around 200,000 unlabeled news articles and fine-tuned it for each category.
No abstract available
In light of the growing impact of disinformation on social, economic, and political landscapes, accurate and efficient identification methods are increasingly critical. This paper introduces HyperGraphDis, a novel approach for detecting disinformation on Twitter that employs a hypergraph-based representation to capture (i) the intricate social structures arising from retweet cascades, (ii) relational features among users, and (iii) semantic and topical nuances. Evaluated on four Twitter datasets -- focusing on the 2016 U.S. presidential election and the COVID-19 pandemic -- HyperGraphDis outperforms existing methods in both accuracy and computational efficiency, underscoring its effectiveness and scalability for tackling the challenges posed by disinformation dissemination. HyperGraphDis displays exceptional performance on a COVID-19-related dataset, achieving an impressive F1 score (weighted) of approximately 89.5%. This result represents a notable improvement of around 4% compared to the other state-of-the-art methods. Additionally, significant enhancements in computation time are observed for both model training and inference. In terms of model training, completion times are accelerated by a factor ranging from 2.3 to 7.6 compared to the second-best method across the four datasets. Similarly, during inference, computation times are 1.3 to 6.8 times faster than the state-of-the-art.
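HyperGraphDis itself is not reproduced here, but the core representational idea, treating each retweet cascade as a hyperedge over the users it touches, reduces to an incidence matrix, as in this toy sketch with invented users and cascades.

```python
# Toy illustration of a hypergraph incidence matrix: rows are users, columns
# are retweet cascades (hyperedges); H[i, j] = 1 if user i took part in
# cascade j.
import numpy as np

users = ["u1", "u2", "u3", "u4"]
cascades = [{"u1", "u2", "u3"}, {"u2", "u4"}]    # each cascade = hyperedge

H = np.zeros((len(users), len(cascades)), dtype=int)
for j, edge in enumerate(cascades):
    for u in edge:
        H[users.index(u), j] = 1
print(H)
```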
Disinformation refers to false information deliberately spread to influence the general public, and the negative impact of disinformation on society can be observed in numerous issues, such as political agendas and manipulating financial markets. In this paper, we identify prevalent challenges and advances related to automated disinformation detection from multiple aspects and propose a comprehensive and explainable disinformation detection framework called DISCO. It leverages the heterogeneity of disinformation and addresses the opaqueness of prediction. We then provide a demonstration of DISCO on a real-world fake news detection task with satisfactory detection accuracy and explanation. The demo video and source code of DISCO are publicly available at https://github.com/DongqiFu/DISCO. We expect that our demo could pave the way for addressing the limitations of identification, comprehension, and explainability as a whole.
Social media is not only used for social communication, but also for the comprehensive and effective dissemination of news and information. Twitter is one of the largest social media platforms and is also used to spread news and information. Information published on Twitter may not always be verifiable, which can lead to disinformation being spread on the platform. The spread of disinformation on social media has become a growing problem, especially around the time of presidential elections. The purpose of this study is to use the IndoBERT model to identify and minimize the spread of disinformation on Twitter related to the 2024 Indonesian presidential election. The study was conducted in several phases, including dataset collection, preprocessing, data labeling, word embedding with Word2Vec, classification with IndoBERT, and validation and evaluation with K-Fold Cross Validation. The results show that using IndoBERT in combination with the NLTK Tokenizer and BERT AutoTokenizer yields promising results in minimizing the spread of disinformation on social media. Accuracy reached 85% when using IndoBERT with the BERT AutoTokenizer and 87% when using IndoBERT with the NLTK Tokenizer and BERT AutoTokenizer. Overall, this study demonstrates the effectiveness of advanced NLP models like IndoBERT in detecting and minimizing the spread of disinformation on social media.
The spread of disinformation and propagandistic content poses a threat to societal harmony, undermining informed decision-making and trust in reliable sources. Online platforms often serve as breeding grounds for such content, and malicious actors exploit the vulnerabilities of audiences to shape public opinion. Although there have been research efforts aimed at the automatic identification of disinformation and propaganda in social media content, there remain challenges in terms of performance. The ArAIEval shared task aims to further research on these particular issues within the context of the Arabic language. In this paper, we discuss our participation in these shared tasks. We competed in subtasks 1A and 2A, where our submitted system secured positions 9th and 10th, respectively. Our experiments consist of fine-tuning transformer models and using zero- and few-shot learning with GPT-4.
The Russian Doppelganger campaign was a flop. It tried to target European governments and institutions with fake news and cloned websites, but its measurable impact on real users—views, likes, or shares—was minimal [1]. However, as part of ongoing efforts to influence Western media, this campaign contributes to altering online discourse and normalizing hate speech. The potential harm from such attacks has been proven to be even more extreme. Such threats require international collaboration to identify and effectively counter such campaigns. The popularization of artificial intelligence (AI) has accelerated the spread of fake news. On the other hand, AI can help us fight back even better. Leveraging AI-driven techniques—such as Natural Language Processing (NLP), multimedia analysis, and network analysis—is crucial in this fight, as well as a common language to describe hybrid attacks. Therefore, our discussion relies on the DISARM Framework, a disinformation-focused counterpart to the MITRE ATT&CK framework, designed to standardise disinformation-related terminology and analytical methods [2]. This paper is focused on a key tactic of disinformation: overwhelming the target, a strategy evident in many social engineering plots. Be it news or messages, the 21st century is overfilled with content, forcing people into constant stress, weakening their decision-making, and increasing their susceptibility to manipulation. We discuss the practical overview of disinformation detection. In this discussion, we include uncertainty quantification (UQ) as a groundbreaking tool to counteract this challenge (a solution introduced by Puczynska et al. [3]). UQ enhances reliability, explainability, and adaptability in disinformation detection systems, as it enables estimation of model confidence. Our framework demonstrates the potential of AI-driven systems to counteract disinformation through multimodal analysis and cross-platform collaboration while maintaining transparency and ethical integrity. We underscore the urgency of integrating UQ into fake news detection methodologies to address the rapid evolution of disinformation campaigns. The paper concludes by outlining future directions for developing scalable, transparent, and resilient systems to safeguard information integrity and societal trust in an increasingly digital age.
The emergence of social media platforms has amplified the dissemination of false information in various forms. Social media gives rise to virtual societies by providing freedom of expression to users in a democracy. Because of the presence of echo chambers on social media, social science studies play a vital role in understanding the spread of false news. To this aim, we provide a comprehensive framework adapted from several scholarly studies. The framework is capable of classifying information into various types, namely real, disinformation, and satire, based on authenticity as well as intention. The process highlights the use of interdisciplinary approaches derived from fundamental theories of social science, integrating them with modern computational tools and techniques. Some of these theories claim that malicious users write fabricated content in a different style to attract the audience. Style-based methods evaluate the intention, i.e., whether the content is written with an intent to mislead the audience or not. However, the writing style can be deceptive. Thus, it is important to involve user-oriented social information to improve model strength. Therefore, the paper uses an integrated approach combining style-based and propagation-based features, with a total of thirty-one features. The extracted features are divided into ten categories: relative frequency, quantity, complexity, uncertainty, sentiment, subjectivity, diversity, informality, additional, and popularity. The features have been iteratively utilized by supervised classifiers, and the best-correlated ones were then selected using the ANOVA test. Our experimental results show that the selected features are able to distinguish real news from disinformation and satirical news. It has been observed that the Ensemble machine learning model outperformed other models over the developed multi-labelled corpus.
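The feature-selection step, scoring hand-crafted features against the class label with an ANOVA test and keeping the best-correlated ones, corresponds to scikit-learn's f_classif scorer; the feature names below echo some of the paper's ten categories, while the data are synthetic.

```python
# ANOVA-based feature selection over hand-crafted style/propagation features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
feature_names = ["quantity", "complexity", "uncertainty", "sentiment",
                 "informality", "popularity"]
X = rng.normal(size=(200, len(feature_names)))   # stand-in feature matrix
y = rng.integers(0, 3, size=200)                 # real / disinfo / satire

selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)  # ANOVA F-test
kept = [n for n, keep in zip(feature_names, selector.get_support()) if keep]
print(kept)
```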
As recent events have demonstrated, disinformation spread through social networks can have dire political, economic and social consequences. Detecting disinformation must inevitably rely on the structure of the network, on users particularities and on event occurrence patterns. We present a graph data structure, which we denote as a meta-graph, that combines underlying users' relational event information, as well as semantic and topical modeling. We detail the construction of an example meta-graph using Twitter data covering the 2016 US election campaign and then compare the detection of disinformation at cascade level, using well-known graph neural network algorithms, to the same algorithms applied on the meta-graph nodes. The comparison shows a consistent 3-4% improvement in accuracy when using the meta-graph, over all considered algorithms, compared to basic cascade classification, and a further 1% increase when topic modeling and sentiment analysis are considered. We carry out the same experiment on two other datasets, HealthRelease and HealthStory, part of the FakeHealth dataset repository, with consistent results. Finally, we discuss further advantages of our approach, such as the ability to augment the graph structure using external data sources, the ease with which multiple meta-graphs can be combined as well as a comparison of our method to other graph-based disinformation detection frameworks.
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms. Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications. By leveraging an already-available analyst as a human-in-the-loop, however, the canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system. This paper aims to determine which of these techniques is best suited for this purpose and how each technique might best be used towards this end. Training datasets of the same size and nearly identical neural architectures (a BERT transformer as a word embedder with a single feed-forward layer thereafter) are used for each approach, which are then tested on sentiment- and stance-specific datasets to establish a baseline of how well each method can be used to do the other tasks. Four different datasets relating to COVID-19 disinformation are used to test the ability of each technique to detect disinformation on a topic that did not appear in the training data set. Quantitative and qualitative results from these tests are then used to provide insight into how best to employ these techniques in practice.
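The shared architecture the paper describes, a BERT word embedder with a single feed-forward layer thereafter, can be sketched as follows; CLS-token pooling and the checkpoint are assumptions, since the abstract does not fix them.

```python
# BERT encoder plus a single feed-forward classification head.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertWithHead(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS] representation
        return self.head(cls)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["the vaccine contains microchips"], return_tensors="pt")
model = BertWithHead(num_classes=3)              # e.g. support/deny/neutral
print(model(batch["input_ids"], batch["attention_mask"]).shape)
```

The same backbone can serve sentiment, aspect-based sentiment, or stance detection by changing only the label set, which is what makes the paper's cross-task comparison possible.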
Disinformation through fake news is an ongoing problem in our society and has become easily spread through social media. The most cost- and time-effective way to filter these large amounts of data is to use a combination of human and technical interventions to identify it. From a technical perspective, Natural Language Processing (NLP) is widely used in detecting fake news. Social media companies use NLP techniques to identify fake news and warn their users, but fake news may still slip through undetected. It is especially a problem in more localised contexts (outside the United States of America). How do we adjust fake news detection systems to work better for local contexts such as South Africa? In this work we investigate fake news detection on South African websites. We curate a dataset of South African fake news and then train detection models. We contrast this with using widely available fake news datasets (mostly from USA websites). We also explore making the datasets more diverse by combining them, and we observe the differences in writing behaviour between nations’ fake news using interpretable machine learning.
No abstract available
No abstract available
The proliferation of social media bots and fake accounts has significantly disrupted information ecosystems, posing substantial challenges in detecting and mitigating disinformation. While machine learning and deep learning models have shown varying levels of success on platforms like Twitter and Facebook, they often fail to account for region-specific nuances critical for effective bot detection. Facebook and Twitter have been widely used in disinformation research due to their large user bases and historically open API access, facilitating large-scale data collection. This study addresses these gaps by proposing a hybrid detection framework tailored to Romania's disinformation landscape. Each society is shaped by its unique historical, cultural, linguistic, and geopolitical factors, influencing how disinformation spreads and resonates with different audiences. The proposed approach emphasizes well-established narratives that significantly influence vulnerable populations, including young adults with limited capacity for fact-checking and older adults with low levels of digital literacy. This research will begin by reviewing existing literature on bot detection methodologies and narrative analysis, identifying their strengths, limitations, and applicability to regional contexts. By integrating conventional detection methodologies with a refined analysis of niche and high-risk narratives, this research investigates how disinformation campaigns gain momentum and escalate, providing a deeper understanding of their dynamics and impact. The results will reveal patterns and strategies employed in the propagation of disinformation, contributing to the development of more targeted and effective detection systems. This method also facilitates the early identification of emerging disinformation clusters, offering timely and proactive intervention opportunities. This paper is part of a broader PhD research program centered on analysing narratives and narrative strategies in disinformation, with all findings supporting this overarching goal. The findings emphasize the importance of regional customization in bot detection frameworks, particularly for countries like Romania, where disinformation leverages historical, cultural, and socio-political triggers. These insights will strengthen the resilience of information ecosystems and hold significant value for cybersecurity professionals, social media platforms, and policymakers dedicated to combating manipulation and fostering a more secure digital environment.
In the digital age, hoaxes or false information are a significant challenge, as they can harm public comprehension, form inaccurate opinions, and endanger the health and safety of individuals. Artificial intelligence technology, particularly large language models (LLMs) like Llama 3, provides an innovative solution to these challenges. A sophisticated generative model with superior natural language processing capabilities, Llama 3 enables the effective detection and clarification of hoaxes. The model is trained on a dataset seven times larger than that of its predecessor, Llama 2, uses a vocabulary of up to 128K tokens, and supports a context length of up to 8K. By utilizing these capabilities, Llama 3 is capable of comprehending context, offering responses grounded in scientific data, and reducing response errors. Educational chatbots, interactive web platforms, and mobile applications based on Llama 3 can be implemented. As demonstrated by case studies, this model effectively identifies and clarifies false information regarding cosmic rays that are purportedly hazardous, through the presentation of pertinent scientific facts. Llama 3’s capabilities encompass its capacity to modify parameters to generate valid and pertinent responses. This renders it a critical instrument for bolstering community resilience to the dissemination of falsehoods, as well as digital literacy and awareness. Llama 3, which is open source, facilitates global collaboration in the development of a more secure and trustworthy information ecosystem.
The proliferation of disinformation and the emergence of echo chambers on social media pose serious challenges for democratic discourse. In this paper, we introduce a hybrid computational framework that fuses network topology with user-generated hashtag semantics to map and measure candidate echo chamber structures at scale. Using the full Italian Twitter Firehose from February 23 to August 31, 2022, we built a heterogeneous graph of user and hashtag nodes linked by Twitter interactions. We then ran the Leiden algorithm over ten resolution settings and evaluated community quality via normalized mutual information (NMI), variation of information (VI), adjusted Rand index (ARI), coverage, and the external–internal index (EI). Our analysis reveals that the community structure is remarkably stable between resolution values of 0.5 and 0.6, where normalized mutual information approaches 0.999, the adjusted Rand index climbs to 0.89, and the variation of information stays below 0.05, marking this interval as optimal for coherent, semantically aligned groups. Collapsing all clusters smaller than 30 nodes distills the network into 1000 robust communities that still cover more than 54% of the total edge weight and maintain a median size of 42–45 users while further enhancing ARI and VI. Finally, a second-level Leiden pass on the top 500 communities drives the EI index down to 0.0411, indicating maximal intragroup connectivity and the strongest echo chamber effects. Together, these results show that our dual structural–semantic framework not only uncovers stable, multilevel community hierarchies but also indicates the precise scale at which echo chambers intensify, paving the way for real-time disinformation monitoring and targeted mitigation.
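The resolution sweep and stability analysis can be sketched with python-igraph and leidenalg; the random toy graph stands in for the real interaction network, and the resolution grid is abbreviated.

```python
# Run Leiden at several resolutions and compare consecutive partitions with
# NMI, ARI, and variation of information (VI).
import igraph as ig
import leidenalg
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

g = ig.Graph.Erdos_Renyi(n=300, p=0.05)          # stand-in interaction graph

memberships = []
for res in (0.4, 0.5, 0.6, 0.7):
    part = leidenalg.find_partition(
        g, leidenalg.RBConfigurationVertexPartition,
        resolution_parameter=res, seed=0)
    memberships.append(part.membership)

# Stability between consecutive resolutions: high NMI/ARI and low VI mark
# a plateau like the one the paper reports around resolutions 0.5-0.6.
for a, b in zip(memberships, memberships[1:]):
    print(normalized_mutual_info_score(a, b),
          adjusted_rand_score(a, b),
          ig.compare_communities(a, b, method="vi"))
```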
Threat actors continue to exploit geopolitical and global public events to launch aggressive campaigns propagating disinformation over the Internet. In this paper we extend our prior research in detecting disinformation using psycholinguistic and computational linguistic processes linked to deception and cybercrime, to gain an understanding of the features that impact the predictive outcome of machine learning models. We attempt to determine patterns of deception in disinformation using hybrid models trained on disinformation combined with scams, fake positive and negative online reviews, or fraud, using the eXtreme Gradient Boosting machine learning algorithm. Four hybrid models are generated: models trained on disinformation and fraud (DIS+EN), disinformation and scams (DIS+FB), disinformation and favorable fake reviews (DIS+POS), and disinformation and unfavorable fake reviews (DIS+NEG). The four hybrid models detected deception and disinformation with predictive accuracies ranging from 75% to 85%. The outcome of the models was evaluated with SHAP to determine the impact of the features.
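A minimal version of this pipeline, XGBoost on linguistic features followed by SHAP attribution, looks as follows; the synthetic features stand in for the psycholinguistic ones used in the paper.

```python
# XGBoost classifier on (synthetic) linguistic features, explained with SHAP.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # e.g. LIWC-style features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)            # per-feature SHAP values
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))          # global feature impact
```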
Disinformation refers to false rumors deliberately fabricated for certain political or economic conspiracies. So far, how to prevent online disinformation propagation is still a severe challenge. Refutation, media censorship, and social bot detection are three popular approaches to stopping disinformation, which aim to clarify facts, intercept the spread of existing disinformation, and quarantine the source of disinformation, respectively. In this paper, we study the collaboration of the above three countermeasures in defending disinformation. Specifically, considering an online social network, we study the most cost-effective dynamic budget allocation (DBA) strategy for the three methods to minimize the proportion of disinformation-supportive accounts on the network with the lowest expenditure. For convenience, we refer to the search for the optimal DBA strategy as the DBA problem. Our contributions are as follows. First, we propose a disinformation propagation model to characterize the effects of different DBA strategies on curbing disinformation. On this basis, we establish a trade-off model for DBA strategies and reduce the DBA problem to an optimal control model. Second, we derive an optimality system for the optimal control model and develop a heuristic numerical algorithm called the DBA algorithm to solve the optimality system. With the DBA algorithm, we can find possible optimal DBA strategies. Third, through numerical experiments, we estimate key model parameters, examine the obtained DBA strategy, and verify the effectiveness of the DBA algorithm. Results show that the DBA algorithm is effective.
The Internet and social media have altered how individuals access news in the age of instantaneous information distribution. While this development has increased access to information, it has also created a significant problem: the spread of fake news and information. Fake news is rapidly spreading on digital platforms, which has a negative impact on the media ecosystem, public opinion, decision-making, and social cohesion. Natural Language Processing (NLP), which offers a variety of approaches to identify content as authentic, has emerged as a potent weapon in the growing war against disinformation. This paper takes an in-depth look at how NLP technology can be used to detect fake news and reveals the challenges and opportunities it presents.
With the current increase in social media usage, everyone is very concerned about the spread of misleading information. Misinformation has been employed to sway public opinion, impact the 2016 US Presidential Election, and disseminate animosity and turmoil, such as the genocide against the Rohingya people. A 2018 MIT study found that on Twitter, bogus news spreads six times faster than real news. In addition, there is now a problem with the news media's reliability and credibility, and it is getting harder and harder to tell the difference between morphed and true news. For the evaluation of this study, a combination of various machine learning techniques and methods, along with natural language processing (NLP), LSTM, and the passive aggressive classifier (PAC), is used to distinguish between bogus and authentic news; the latter two have proven to be the most successful machine learning models, despite the availability of many others.
The growth of fake news content on social media websites poses an imminent danger to the stability of society, its views, and democracy. Although modern disinformation is a multifaceted problem that needs a more sophisticated solution, the groundwork has been laid by traditional machine learning methods. The present paper summarises a decade of research toward an all-encompassing framework of fake news detection. We provide a literature review of the developments, from content-based and feature-engineering techniques to more advanced deep learning systems that incorporate textual, visual, and social contexts. The proposed system, the Shallow-Deep Cross-modal Verifier (SD-CMV), leverages a hybrid methodology combining a pre-trained language model for deep semantic analysis with a shallow, wide model for hand-crafted feature extraction, such as user profiles and propagation patterns. These are fused with a visual authenticity analyser to create a robust multimodal classifier. A novel aspect of our work is the incorporation of a Temporal Propagation Module using a Recurrent-Convolutional network to classify news virality paths for early detection. Results from a conceptual implementation on a synthesised multimodal dataset demonstrate the superiority of this integrated approach over unimodal baselines. This paper argues that the future of effective deception detection lies in synergistic models that are content-aware, context-sensitive, and temporally adaptive.
Disinformation campaigns seek to influence and polarize political topics through massive coordinated efforts. In the process, these efforts leave behind artifacts, which researchers have leveraged to analyze the tactics employed by disinformation campaigns after they are taken down. Coordination network analysis has proven helpful for learning about how disinformation campaigns operate; however, the usefulness of these forensic tools as a detection mechanism is still an open question. In this paper, we explore the use of coordination network analysis to generate features for distinguishing the activity of a disinformation campaign from legitimate Twitter activity. Doing so would provide more evidence to human analysts as they consider takedowns. We create a time series of daily coordination networks for both Twitter disinformation campaigns and legitimate Twitter communities, and train a binary classifier based on statistical features extracted from these networks. Our results show that the classifier can predict future coordinated activity of known disinformation campaigns with high accuracy (F1 = 0.98). On the more challenging task of out-of-distribution activity classification, the performance drops yet is still promising (F1 = 0.71), mainly due to an increase in the false positive rate. By doing this analysis, we show that while coordination patterns could be useful for providing evidence of disinformation activity, further investigation is needed to improve upon this method before deployment at scale.
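The feature-extraction idea, summarizing each daily coordination network with simple graph statistics and training a classifier on them, can be sketched as below; random graphs of different densities stand in for campaign and legitimate-community networks, and the statistics chosen are illustrative.

```python
# Graph-statistics features from (synthetic) daily coordination networks,
# fed to a binary classifier.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def network_features(G: nx.Graph) -> list[float]:
    degs = [d for _, d in G.degree()]
    return [G.number_of_nodes(), G.number_of_edges(),
            float(np.mean(degs)) if degs else 0.0,
            nx.density(G),
            nx.number_connected_components(G)]

rng = np.random.default_rng(0)
X, y = [], []
for label, p in ((1, 0.15), (0, 0.03)):          # 1 = campaign-like density
    for _ in range(50):
        G = nx.erdos_renyi_graph(40, p, seed=int(rng.integers(1_000_000)))
        X.append(network_features(G))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))                           # training fit, illustrative
```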
This article discusses five challenges in the detection and mitigation of disinformation on social media platforms. We discuss the limitations of fact-checking, the main mitigation strategy currently in place, against influence operations that leverage the low persistence and high ephemerality of social media posts to move from one contentious and unverified frame to the next before fact-checking mechanisms can correct false information. We argue that fact-checking, a tool originally devised to evaluate political claims and hold politicians to account, can rarely meet the scale, speed, velocity, and magnitude of mis- and disinformation on social media. We also argue that the conflicting priorities of privacy and safety championed by policymakers rendered social media platforms increasingly more opaque and paradoxically less accountable. We close with an assessment that mitigation strategies available to the academic community are severely limited, and that independent source attribution is near impossible in the wake of data access lockdowns.
Significance: Hostile influence operations (IOs) that weaponize digital communications and social media pose a rising threat to open democracies. This paper presents a system framework to automate detection of disinformation narratives, networks, and influential actors. The framework integrates natural language processing, machine learning, graph analytics, and network causal inference to quantify the impact of individual actors in spreading the IO narrative. We present a classifier that detects reported IO accounts with 96% precision, 79% recall, and 96% AUPRC, demonstrated on real social media data collected for the 2017 French presidential election and known IO accounts disclosed by Twitter. Our system also discovers salient network communities and high-impact accounts that are independently corroborated by US Congressional reports and investigative journalism.

Abstract: The weaponization of digital communications and social media to conduct disinformation campaigns at immense scale, speed, and reach presents new challenges to identify and counter hostile influence operations (IOs). This paper presents an end-to-end framework to automate detection of disinformation narratives, networks, and influential actors. The framework integrates natural language processing, machine learning, graph analytics, and a network causal inference approach to quantify the impact of individual actors in spreading IO narratives. We demonstrate its capability on real-world hostile IO campaigns with Twitter datasets collected during the 2017 French presidential elections and known IO accounts disclosed by Twitter over a broad range of IO campaigns (May 2007 to February 2020), covering over 50,000 accounts, 17 countries, and different account types including both trolls and bots. Our system detects IO accounts with 96% precision, 79% recall, and 96% area under the precision-recall (P-R) curve; maps out salient network communities; and discovers high-impact accounts that escape the lens of traditional impact statistics based on activity counts and network centrality. Results are corroborated with independent sources of known IO accounts from US Congressional reports, investigative journalism, and IO datasets provided by Twitter.
A lot of people are concerned about DeepFakes in modern society, yet despite their wide range of uses, DeepFakes have received little public recognition. The main goal of this research is to analyze DeepFakes and their originators, as well as their potential and risks. We analyzed 203 news articles from 16 media outlets in Bangladesh, India, and Pakistan to achieve our goal. The extracted news items were categorized as threat-, prevention-, or entertainment-centric. Analysis of DeepFake-related news from the leading English dailies of these countries revealed that more than 50% of Pakistani newspaper coverage of DeepFakes concerned the threat posed by this technology, compared with roughly one third of Indian and Bangladeshi coverage. The widespread broadcast of misleading information through media outlets might boost their legitimacy and reception for a short time but slowly and steadily smear their good name. This study also highlights the significant role media professionals have in spreading disinformation about the people and topics they cover.
No abstract available
Deepfake videos threaten to spread false information by depicting events that never happened, such as politicians making statements they never actually made. This research tested the effectiveness of accuracy labels to help people remember the difference between actual and deepfake videos of former US President Joe Biden. People accurately recalled 93.8% of deepfake videos and 84.2% of actual videos, suggesting that labeling videos can help users accurately recall information. Individuals who identify as Republican and had lower favorability ratings of Biden performed better in distinguishing between actual and deepfake videos. We use the elaboration likelihood model (ELM) as a theoretical framework to explain how distrusting Biden as a message source caused audiences to more critically evaluate messages. These findings support the practice of media companies labeling deepfake content to help users identify and recall information accurately.
In this era led by generative AI, content seen on the internet cannot be trusted for authenticity: a disinformation video spreading like wildfire amongst the masses can do far greater damage than we imagine, manipulating public opinion, inciting violence, ruining livelihoods, and stoking communal hate. As a defense mechanism, remote photoplethysmography (rPPG), which captures cardiovascular data, is used to extract heartbeat signals into temporal PPG maps; training a convolutional neural network on these maps lets us derive the features and patterns left behind by the different methodologies used to create deepfakes. This helps not only to detect a particular fake but also to get to its roots, making it simpler to identify the source and take the necessary actions against the perpetrator. The new age also demands strict legal regulation to prevent disinformation and derogatory deepfakes. The proposed model can help clearly distinguish real content from fake.
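The core signal-to-classifier path could be sketched as follows; the facial regions, the per-region color averaging, and the tiny CNN are all simplifying assumptions for illustration, not the paper's method:

```python
# Assumption-laden sketch of the rPPG idea: average skin-pixel color per frame
# forms a temporal PPG map, which a small CNN classifies as real vs. fake.
import numpy as np
import torch
import torch.nn as nn

def temporal_ppg_map(frames: np.ndarray, rois: list[tuple]) -> np.ndarray:
    """frames: (T, H, W, 3) video; rois: list of (y0, y1, x0, x1) facial regions."""
    rows = []
    for (y0, y1, x0, x1) in rois:
        # mean RGB signal in each region over time -> one row per (region, channel)
        sig = frames[:, y0:y1, x0:x1, :].mean(axis=(1, 2))   # (T, 3)
        rows.append(sig.T)                                   # (3, T)
    return np.concatenate(rows, axis=0)                      # (3 * n_rois, T)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(), nn.Linear(8 * 16, 2),
)
frames = np.random.rand(150, 64, 64, 3)                      # ~5 s of toy video
ppg = temporal_ppg_map(frames, rois=[(10, 30, 10, 30), (10, 30, 34, 54)])
logits = cnn(torch.tensor(ppg, dtype=torch.float32)[None, None])  # (1, 1, rows, T)
```

The intuition is that genuine faces carry a subtle, physiologically consistent pulse signal across skin regions, while generative pipelines disrupt it in method-specific ways the CNN can learn.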
No abstract available
Deepfakes today represent a novel threat that can induce widespread distrust more effectively than traditional disinformation, given the greater susceptibility they can exploit. In this study, we specifically test how individuals' exposure to deepfakes related to public infrastructure failures is linked to distrust in government, with their cognitive reflection and education possibly acting as a buffer. Using experimental data from the United States and Singapore, our findings indicate that exposure to deepfakes depicting a localized infrastructure failure, i.e., the collapse of a public bridge, heightens distrust in government among American participants but not Singaporeans. Additionally, education was found to be a significant moderator, such that higher education levels are associated with lower political distrust when exposed to deepfakes. The role of deepfakes in influencing distrust in the government and the broader implications of these findings are discussed.
Artificial Intelligence-Generated Content (AIGC) is rapidly transforming the landscape of information dissemination while exacerbating the spread of fake news. This paper examines the mechanisms of AI-generated fake news, the development and societal impact of deepfake technology, and the role of AI in political manipulation and its threats to democratic institutions. The study highlights that AI-generated fake news spreads at an unprecedented speed and scale, exhibits high authenticity, and contributes to social trust crises, political polarization, and economic and legal risks. Furthermore, the paper reviews current countermeasures against AI-generated misinformation, including deepfake detection technologies, automated fake news identification systems, and platform accountability. Based on existing legal and policy frameworks, this study explores how international collaboration among technology, policy, and society can effectively address AI-generated disinformation. Finally, future research directions are proposed, including the application of quantum computing and trusted computing in fake news governance, the ongoing arms race between AI forgery and counter-forgery technologies, and strategies to enhance public digital resilience.
The integrity of global elections is increasingly under threat from artificial intelligence (AI) technologies. As AI continues to permeate various aspects of society, its influence on political processes and elections has become a critical area of concern. This is because AI language models are far from neutral or objective; they inherit biases from their training data and the individuals who design and utilize them, which can sway voter decisions and affect global elections and democracy. In this research paper, we explore how AI can directly impact election outcomes through various techniques. These include the use of generative AI for disseminating false political information, favoring certain parties over others, and creating fake narratives, content, images, videos, and voice clones to undermine opposition. We highlight how AI threats can influence voter behavior and election outcomes, focusing on critical areas, including political polarization, deepfakes, disinformation, propaganda, and biased campaigns. In response to these challenges, we propose a Blockchain-based Deepfake Authenticity Verification Framework (B-DAVF) designed to detect and authenticate deepfake content in real time. It leverages the transparency of blockchain technology to reinforce electoral integrity. Finally, we also propose comprehensive countermeasures, including enhanced legislation, technological solutions, and public education initiatives, to mitigate the risks associated with AI in electoral contexts, proactively safeguard democracy, and promote fair elections.
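The B-DAVF is proposed at the framework level only; as a hedged illustration of the underlying idea, the toy hash ledger below shows how registering content at publication time enables later authenticity checks. A real deployment would use an actual blockchain with signed publisher identities rather than this in-memory stand-in.

```python
# Toy stand-in for a ledger-backed authenticity check in the spirit of B-DAVF.
# Publishers register a content hash; viewers verify bytes against the ledger.
import hashlib, json, time

class ToyLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "record": "genesis"}]

    def _hash(self, block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, media_bytes: bytes, publisher: str) -> str:
        digest = hashlib.sha256(media_bytes).hexdigest()
        block = {"index": len(self.chain), "prev": self._hash(self.chain[-1]),
                 "record": {"sha256": digest, "publisher": publisher, "ts": time.time()}}
        self.chain.append(block)
        return digest

    def verify(self, media_bytes: bytes) -> bool:
        digest = hashlib.sha256(media_bytes).hexdigest()
        return any(isinstance(b["record"], dict) and b["record"]["sha256"] == digest
                   for b in self.chain)

ledger = ToyLedger()
original = b"campaign-video-bytes"
ledger.register(original, publisher="official-channel")
print(ledger.verify(original))                 # True: matches a registered original
print(ledger.verify(b"tampered-video-bytes"))  # False: no provenance record
```

The design choice worth noting is that the ledger stores only hashes, so provenance can be checked publicly without republishing or exposing the media itself.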
Deepfake technology is advancing rapidly and poses a range of cybersecurity concerns. Deepfakes have been used to perpetrate elaborate financial frauds. There is also the concern of deepfakes being used to influence elections. Deepfakes can fabricate statements or actions by public figures, influencing elections, public opinion, or policy decisions or simply to amplify disinformation. Adversaries can use deepfakes to spread propaganda or misinformation, destabilizing political or military scenarios. As deepfakes become more prevalent, individuals may begin to doubt authentic content, creating a "reality apathy" where distinguishing truth from fiction becomes difficult.
Like other emerging technologies, deepfakes present both risks and benefits to society. Due to harmful applications such as disinformation and non-consensual pornography, calls for their regulation have increased recently. However, little is known about public support for deepfake regulation and the factors related to it. This study addresses this gap through a pre-registered online survey (n = 1,361) conducted in Switzerland, where citizens can influence political regulation through direct democratic instruments, such as referendums. Our findings reveal a strong third-person perception, as people believe that deepfakes affect others more than themselves (Cohen’s d = 0.77). This presumed effect on others is a weak but significant predictor of support for regulation (β = 0.07). We do not, however, find evidence for the second-person effect, the idea that individuals who perceive deepfakes as highly influential on both themselves and others are more likely to support regulation. Nevertheless, an exploratory analysis indicates a potential second-person effect among females, who are specifically affected by deepfakes; a result which must be further explored and replicated. Additionally, we find that higher perceived risk and greater trust in institutions are positively associated with support for deepfake regulation.
As the advent of deepfake technology blurs the line between fact and fiction, a sophisticated new threat has been gaining ground in the digital age. How is AI-generated media being weaponized to spread disinformation and influence public perception? And can we still trust what we see online?
The article is devoted to the analysis of the deepfake phenomenon in the context of its application in the political sphere, from the point of view of the manipulation of society. First, the authors reveal the essence of the phenomenon of disinformation through a brief historical retrospective. They then define the concept of “deepfake” and explain how this technology works from both technical and cognitive points of view. Analyzing some illustrative cases, they propose the most accessible ways for the average user to detect deepfakes today. They conclude that, due to rapid technical progress and the resulting increase in generated content, the outlook for the information space is quite bleak: every day, artificially created material becomes more and more indistinguishable from the genuine, and in the foreseeable future a person will inevitably face the impossibility of detecting a fake without special technical means, which could fundamentally undermine the trust of citizens even in official news sources.
Deepfake technology has developed rapidly and has had a significant impact in various fields, including politics and national security. This study analyzes the legal implications of the use of deepfakes in politics in Indonesia, highlighting regulatory challenges, the effectiveness of detection technology, and the impact on public opinion. This study uses a mixed-methods approach, combining literature analysis, case studies, interviews with legal and technology experts, and social media data analysis. The results show that the spread of deepfakes increased sharply ahead of the 2024 General Election, especially on Twitter, Facebook, and TikTok, contributing to disinformation and public polarization. Existing regulations, such as the Electronic Information and Transactions Law (UU ITE) and the Criminal Code (KUHP), do not specifically regulate deepfakes, creating difficulties in law enforcement. In addition, existing detection technologies still face challenges in identifying increasingly sophisticated deepfake content. As mitigation measures, this study recommends the formation of special regulations on deepfakes, improving detection technology through collaboration with the technology sector, increasing the capacity of law enforcement in digital forensics, and public education to improve digital literacy. It is hoped that these steps can reduce the negative impact of deepfakes on democracy and national security stability in Indonesia.
This study examines the legal aspects of the use of deepfake technology in political campaigns and its implications for the integrity of democracy. Through a qualitative approach with normative legal analysis methods, this study maps regulations related to deepfake in various countries, analyzes case studies of the use of deepfake in political campaigns, and evaluates the effectiveness of regulations in Indonesia. The results show that regulations on deepfake in political campaigns vary significantly between countries, with some countries such as the United States (at the state level) and the European Union having implemented specific regulations, while Indonesia still relies on the Electronic Information and Transactions Law (UU ITE) which does not specifically regulate deepfake. Case study analysis reveals the use of deepfake for positive purposes such as translating political speeches (India) and negative purposes such as spreading disinformation (United States, Russia, Philippines). The main challenges to regulation in Indonesia include the lack of public awareness, limited detection technology, the speed of information dissemination on social media, and the absence of specific sanctions for the misuse of deepfake in a political context. Based on these findings, this study recommends the establishment of specific regulations related to deepfake, strengthening multi-party collaboration in developing detection systems, increasing public education through digital literacy campaigns, and implementing strict law enforcement with specific sanctions. A synergy between comprehensive regulation, advanced detection technology, and high public awareness is needed to mitigate the negative impact of deepfakes on democratic integrity and ensure that this technology is used ethically in political communication.
Deepfakes, a form of synthetic media created using deep learning and AI, enable manipulating audio, video, or images to produce highly realistic yet fake content. These are typically generated using neural networks like generative adversarial networks (GANs) or autoencoders, which analyze existing data patterns, such as photos or videos of individuals, to replicate facial expressions, speech, and other characteristics. While deepfake technology has genuine uses in entertainment, its misuse poses serious threats, including spreading disinformation, fabricating news, and producing explicit or defamatory content without consent. In countries like India and the UK, the misuse of deepfakes has emphasized the need for legal frameworks that address privacy, data protection, and cybercrime risks. Although existing laws, such as India’s Information Technology Act, Indian Penal Code, and Bhartiya Nyaya Sanhita, cover certain aspects, they lack specific provisions for deepfakes. The issue has gained significant attention with notable cases of deepfakes targeting public figures and celebrities. This technology’s rapid development challenges data security, privacy, and intellectual property rights, raising concerns about political manipulation, identity theft, and defamation. Because current detection technologies, while progressing, are still limited in effectively identifying deepfakes, this study emphasizes the importance of legal reform, proposing amendments to existing legislation and the creation of new laws explicitly targeting deepfakes. Additionally, it advocates for advanced detection tools to help mitigate these risks. By combining legal and technical approaches, the study suggests that countries collaborate internationally to minimize the harmful impacts of deepfakes, establishing a robust regulatory environment that protects individuals and institutions from this growing cyber threat.
This article analyzes the use of different types of deepfakes in electoral processes in Ecuador, from the end of 2023 to April 2025. It focuses on the prominent role of these audiovisual resources in spreading disinformation in digital environments, specifically on the X network. The objective of this work is to analyze the polarization and the emotional and political impact caused by these publications, through the analysis of 20 deepfakes that went viral on X and were reported as false, during the last two presidential campaigns. The methodology applied is mixed. With the support of the Grok tool, sentiment analysis, thematic coding, and metrics of reach and virality of these publications were carried out. The results revealed a pattern of media manipulation that generated outrage, fear, and rejection, which deepened polarization and affected public trust in the democratic process. This study demonstrates the power of deepfakes to distort and influence the electorate’s perceptions and the threat they pose to the integrity of public debate and democracy in countries with very marked political tendencies such as Ecuador, where greater digital literacy, content verification, and ethical regulation of the use of artificial intelligence are needed.
Was the 2023 Slovakia election the first swung by deepfakes? Did the victory of a pro-Russian candidate, following the release of a deepfake allegedly depicting election fraud, herald a new era of disinformation? Our analysis of the so-called “Slovak case” complicates this narrative, highlighting critical factors that made the electorate particularly susceptible to pro-Russian disinformation. Moving beyond the deepfake’s impact on the election outcome, this case raises important yet under-researched questions regarding the growing use of encrypted messaging applications in influence operations, misinformation effects in low-trust environments, and politicians’ role in amplifying misinformation, including deepfakes.
This paper scrutinises the anti-feminist leanings and misogynist outlook of the Hindutva ideology prevalent in the current socio-political and religious scenario of India. Gendered and sexualised disinformation and virtual violence are used as a prominent tool to attack the autonomy of women’s bodies by labelling them as sexually and socially immoral, thus further undermining the credibility of their political opinions. Through specific case analysis, the paper explores the ongoing effort to mute women in the digital space by Hindutva ideologues.
The subject of this text is the use of deepfake technology during the 2023 parliamentary campaign in Poland. The aim of the article is to present the ways deepfake technology was used during the campaign, and to try to answer the question of the purpose and effectiveness of these activities. Initial key findings allow us to conclude that several deepfakes appeared in the virtual space, created for satirical, disinformation, and depreciatory purposes. However, thanks to the quick actions of journalists and fake-news specialists, among others, the disinformation potential of such messages was minimized.
This research evaluates the political and socioeconomic factors that allow deepfake technology to grow and spread within Pakistan, analyzing their effects on public vulnerability to digital disinformation. The increasing availability of deepfakes transforms how information is received, as well as the methods politicians use to frame narratives, since deepfakes now appear realistic. The analysis focuses on three main elements: insufficient digital literacy, political and ethnic rivalry, and online media habits that support the spread of biased content through social media echo chambers. Deepfake content finds success in Pakistan's politically segregated cultural climate, which leaves viewers unable to distinguish genuine from fake content and information from deliberate distortion. The research explains that deepfakes have a strong impact because of the combination of technological advances, existing societal weaknesses, and political distrust. This paper asserts that enhanced media literacy, along with institutional safeguards and better public awareness, will serve as key elements in fighting the deepfake deception that affects Pakistan's democratic communication.
Deepfakes dominate discussions about manipulated videos, but other forms of visual disinformation are more prevalent and less understood. Moreover, deception is often assessed by measuring credibility, overlooking cognitive effects like misperceptions and attitude changes. To address these gaps, an online experiment (N = 802) examined visual disinformation’s effects on credibility, misperceptions, and perceptions of a politician. The study compared a deepfake (machine learning manipulation), a cheapfake (rudimentary manipulation), and a decontextualized video (false context), all portraying the same politician and false message. Despite being low in credibility, the deepfake and cheapfake caused a misperception, with the deepfake harming perceptions of the politician.
In an era in which digital technologies profoundly shape access to and dissemination of information, disinformation is one of the most insidious threats to democratic societies. This contribution aims to analyse the contemporary mechanisms of creation and dissemination of disinformation online, with a particular focus on the risks arising from social media, the current functioning of their algorithms and deepfake technology. The article examines the strategies through which state and non-state actors use manipulated content to shape public opinion, erode trust in institutions, polarize public discourse and influence electoral processes. Through an interdisciplinary approach, the text offers an overview of the new forms of digital propaganda and their historical evolution, underlining the continuity between ancient practices of manipulation and modern disinformation technologies. Particular attention is paid to the concept of parasocial opinion leaders, a central figure in new digital ecosystems, and to the role of algorithms in reinforcing cognitive biases and radicalization phenomena. Finally, some reflections are offered on tools and strategies to combat disinformation, from the regulation of platforms to the strengthening of media education, with the aim of promoting critical and resilient citizenship. The article thus intends to contribute to the academic debate on disinformation as a geopolitical, social and epistemological challenge, proposing a theoretical framework useful for further research and effective policies.
In this paper, the author analyzes the role of digital propaganda and disinformation generated via artificial intelligence (AI) in the ongoing Middle Eastern conflicts, with a focus on the escalation of the conflict between Israel and Palestine, as well as the overthrow of the President of the Syrian Republic, Bashar al-Assad. In this geographical framework, the author examines how AI-based technologies, primarily the generation of deepfake materials, have use value in conflict zones, tracing the first uses of such tactics for war purposes in the ongoing Russia-Ukraine conflict. By confirming the presence and influence of this type of cyber operation through the analysis of two case studies, the author confirms the hypothesis that contemporary digital propaganda, aided and generated via AI, significantly complicates the course of ongoing events, leading to inevitable conflict escalation and making it difficult to distinguish truth from lies in both military and political contexts. Using the method of content analysis, focused on identifying instances of the placement of AI-generated digital propaganda, the paper presents a comprehensive analysis of the individual cases of Israel/Palestine and the Syrian Republic to create a framework of the possibilities and influence of such content on classical armed conflicts, as well as on political crises in the Middle East.
Political deepfakes are considered detrimental to democracy by eroding public trust and distorting communication. Scholars have advocated for inoculation strategies to counter deepfakes, yet they have found that individuals’ partisan attitudes can undermine the effects of inoculation. Guided by inoculation theory and motivated reasoning theory, we conducted a 3 (Inoculation Mode: Passive vs. Active vs. No Inoculation) × 2 (Deepfake Attack: Pro-Attitudinal vs. Counter-Attitudinal) between-subjects experiment. Results show that inoculation increases deepfake awareness, intention to debunk deepfakes, and information-seeking behaviors, while reducing the perceived credibility of deepfake messages. However, exposure to counter-attitudinal deepfakes led to greater agreement with embedded disinformation.
State-sponsored disinformation campaigns threaten democratic resilience, public trust, and geopolitical stability. Foreign actors, particularly Russia, China, and Iran, exploit digital platforms to manipulate narratives and influence public perception. This research employs a dual-approach framework, integrating qualitative thematic analysis with AI-driven computational methods to detect, classify, and track disinformation narratives. While traditional qualitative analysis provides valuable insights, it lacks scalability. To address this, we enhance human analysis with AI-based methodologies, including natural language processing (NLP), machine learning (ML), and network analysis techniques. Using BERT, Random Forest, and Latent Dirichlet Allocation (LDA), we classify disinformation posts, quantify their prevalence, and track narrative evolution. Our findings reveal that foreign actors adapt dynamically, leveraging deepfake technologies and bot-driven amplification. By combining AI-driven analytics with expert-driven qualitative research, this research presents a scalable detection framework, providing actionable intelligence for policymakers, regulators, and fact-checkers to mitigate the risks posed by foreign influence operations.
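A compressed sketch of the computational half of such a framework is shown below, with a toy corpus, logistic regression standing in for the paper's BERT and Random Forest classifiers, and LDA for narrative tracking; all data and labels are illustrative stand-ins.

```python
# Sketch of AI-assisted narrative classification and tracking; the corpus,
# labels, and the logistic-regression stand-in are assumptions for illustration.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

posts = ["western sanctions are a hoax", "vaccine saves lives",
         "election was stolen abroad", "weather is mild today"] * 25
labels = [1, 0, 1, 0] * 25                       # 1 = disinformation narrative

# LDA surfaces latent themes whose per-post mixtures can be tracked over time.
counts = CountVectorizer().fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
topics = lda.transform(counts)                   # (n_posts, 3) topic mixtures

# A supervised classifier quantifies prevalence of the disinformation class.
tfidf = TfidfVectorizer().fit_transform(posts)
clf = LogisticRegression().fit(tfidf, labels)
print("training accuracy:", clf.score(tfidf, labels))
```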
Disinformation is a major threat in the digital age, affecting politics, society, and the economy. Unlike misinformation, which is unintentional, disinformation is deliberately created to manipulate public opinion, destabilize governments, and influence elections. The rise of social media has amplified its impact, making false information spread rapidly without verification. Historically, disinformation has been used for political gain, from the forged Donation of Constantine to wartime propaganda and modern deepfake technology. Today, digital platforms, bots, and microtargeting strategies have transformed disinformation into a powerful tool for influencing public perception. A striking example is the case of Călin Georgescu in Romania. A nationalist and Eurosceptic politician, C. Georgescu gained popularity through social media, particularly TikTok, with anti-establishment rhetoric. However, investigations revealed coordinated disinformation campaigns, likely supported by Russia, to manipulate public opinion and destabilize Romania’s political landscape. These efforts included deepfake videos discrediting opponents, fake news articles, manipulated social media trends, and cyberattacks targeting independent media. As a result, Romania’s Constitutional Court annulled the first round of presidential elections, highlighting the dangers of digital interference in democracy. The European Commission launched an inquiry into TikTok for potentially violating the Digital Services Act by allowing manipulated content to spread. This case underscores the urgent need for stricter regulations on digital platforms, better fact-checking tools, and media literacy programs to counteract disinformation. A coordinated effort between governments, tech companies, and civil society is essential to protect democracy and ensure the integrity of public discourse.
The advent of Artificial Intelligence (AI) poses new hurdles to democracy, especially in the form of political deepfakes and disinformation during Indonesia's 2024 general elections. This study evaluates the effectiveness of national legal regulations on AI-generated manipulative content and aims to identify legal gaps that undermine the state's democratic integrity. Using a case-study qualitative approach and a normative legal method, data were collected from regulatory studies, examination of deepfake cases, and interviews with experts. Findings show that there are no specific laws or regulations governing political deepfakes: of the four cases analyzed, only one has been acted on by regulatory authorities. Such findings imply a weak legal defence against digital manipulation, which further erodes public trust in the electoral process. The study's major contribution is the development of a new adaptive regulatory framework that advocates integrating law and technology to ensure the integrity of democracy. The research builds on the literature on digital law and provides concrete policy recommendations to enhance regulatory mechanisms and institutional capacity to address AI-induced disinformation threats. Beyond its theoretical contribution, this study offers practical policy guidance for lawmakers, electoral regulators, and civil society in designing adaptive legal frameworks that safeguard democratic integrity amid the growing risks of AI-driven manipulation.
The phenomenon of DeepFake has rapidly expanded, primarily driven by advancements in artificial intelligence technologies. Motivated by the growing presence of deceptive audio-visual content in political communication, this study investigates the various forms, distribution mechanisms, and societal impacts of DeepFakes, with a focus on the Romanian context. The paper contributes by classifying DeepFake content into distinct categories, analyzing over 400 misleading media materials, and identifying the technological, communicational, and intentional premises that enable such fabrications. Key findings reveal the strategic use of DeepFakes in political manipulation, disinformation campaigns, and financial fraud, with serious consequences for public trust and information integrity. The study underscores the urgent need for targeted regulations, media literacy, and platform accountability to mitigate the adverse effects of synthetic media in democratic societies.
AI-engendered synthetic media that persuasively manipulate audio, video, or images are known as deepfakes. Although primarily intended for entertainment, their misapplication has raised grave legal, political, and cybersecurity concerns. This paper concentrates on the legal and technological challenges of deepfakes, highlighting Indian laws and worldwide comparisons. We consider the crimes mainly associated with deepfakes, evaluate the current legal frameworks, and assess detection and exposure mechanisms and platform responses. Additionally, this study examines the malicious use of underground deepfake tools and AI-driven cybercrime tactics that amplify security risks. To address prevailing breaches, we discuss diverse frameworks for strengthening organizational defenses against deepfake threats. This paper emphasizes the necessity for efficient protocols and robust detection tools, providing a clear perspective for policymakers, legal experts, cybersecurity professionals, and those who regulate digital platforms, and helping them grasp and alleviate the impact of deepfakes.
Our research examines the tremendous effects of deepfake technology on politics, culture, and ethics in the modern day, an impact comparable to the spread of false information. Our research aims to thoroughly evaluate the impact of deepfake technology. We explore its moral ramifications, the possibility that it may sway political debate, and its larger societal repercussions. We undertake data analysis using open-source tools like Python and Power BI, producing a range of visual representations including charts and word clouds. We create and carry out a unique survey to track the penetration of viral deepfake material in various countries. We carefully consider variables like the kinds of deepfake disinformation, the reasons for their fabrication, and the channels through which they are disseminated. Our study offers useful insights that can guide the development of mitigation solutions for disinformation issues across a variety of areas.
The article addresses the use of images generated with artificial intelligence as part of disinformation in the first year of the Russian invasion of Ukraine (24.02.2022–24.02.2023). Based on a review of the literature, reports, and media coverage in Polish, English, Ukrainian, and Russian, examples of the use of deepfakes in Russian disinformation are highlighted, showing how the technology was used and what its significance was in a theoretical context.
The article describes the basic foundations and significance of the manipulative technique called Deepfake, which in the environment of technological and informational expansion is also becoming a widely used tool for spreading propaganda. This advanced manipulation complements a wider spectrum of forms of disinformation and is increasingly being used as a means of conducting information operations, often as part of wider hybrid warfare. Effectively combating this kind of manipulation places high demands on consumers of information, both on the part of the detection tools used and on the part of the cognitive human approach based on critical thinking. The expansion and sophistication of similar manipulative techniques will continue, in connection with the development of modern technologies and the interconnectedness of the information environment. Although the Deepfake technique is not only associated with security-military aspects, its influence on information operations and hybrid warfare cannot be neglected.
The malicious use of deepfake technology can lead to violations of human rights and freedoms, or even facilitate criminal activities such as financial fraud. However, creating manipulated images can also pose other threats, including those to democratic states and the principles that govern them. The upcoming presidential elections in the United States and the recent parliamentary elections in European and non-European countries have delivered an impulse for a discussion on the impact that deepfake can have on elections, on the ethics of holding elections and on the principles of democracy, on how countries fight these threats, and on how sufficient and effective the implemented methods really are.
In the summer of 2023, the Writers Guild of America embarked on what would become one of its longest strikes in history. Concurrently, the early stirrings of the presidential campaign saw several ads circulating with convincingly altered video and audio clips of political rivals. Though at first glance unrelated, these events share a common thread: the issue of deepfakes and their potential for spreading disinformation and erasing creative jobs. While deepfakes stirred a sizable debate in both cases, the scale and accessibility of their threat were unclear. What were the limiting factors for using this technology? Was it exclusive to Hollywood studios with large training sets, or was it accessible to an average programmer? We conducted a set of experiments to answer these questions. In particular, we set out to create a photorealistic deepfake of a real news anchor using only open-source tools and models, limited data from the internet, and a consumer laptop. Over a few weeks—as a team comprising one first-year computer science student and his advisor—we accomplished this to the extent that our deepfake opened a primetime CNN show. Contextualizing our findings in the landscape of disinformation, this talk details the development of our deepfake pipeline from start to end. It offers a discussion highlighting this technology’s current ability to deceive and shake industries and suggests potential solutions moving forward.
The proliferation of fake news poses a severe threat to information integrity and societal stability, particularly evident in Nigeria's political landscape. Addressing this multifaceted challenge requires a comprehensive approach. Leveraging advanced technological solutions, fostering media literacy through educational initiatives, and promoting collaboration between digital platforms, fact-checkers, and governments are crucial. Transparency in algorithms, accountability for content producers, and international cooperation can enhance countermeasures. Targeted regulations for deepfake content and continuous research efforts are essential. By combining these strategies, societies can mitigate the impact of fake news and cultivate a more informed and resilient public discourse.
Trust has become a first-order concept in AI, urging experts to call for measures ensuring AI is ‘trustworthy’. The danger of untrustworthy AI often culminates with the deepfake, perceived as an unprecedented threat to democracies and online trust through its potential to back sophisticated disinformation campaigns. Little work has, however, been dedicated to the examination of the concept of trust itself, which undermines the arguments supporting such initiatives. By investigating the concept of trust and its evolutions, this paper ultimately defends a non-intuitive position: deepfakes are not only incapable of contributing to such an end, but also offer a unique opportunity to transition towards a framework of social trust better suited to the challenges entailed by the digital age. Discussing the dilemmas traditional societies had to overcome to establish social trust and the evolution of their solutions across modernity, I come to reject rational choice theories as models of trust and to distinguish an ‘instrumental rationality’ from a ‘social rationality’. This allows me to refute the argument which holds the deepfake to be a threat to online trust. In contrast, I argue that deepfakes may even support a transition from instrumental to social rationality, better suited for making decisions in the digital age.
Social media platforms offer unprecedented opportunities for connectivity and exchange of ideas; however, they also serve as fertile grounds for the dissemination of disinformation. Over the years, there has been a rise in state-sponsored campaigns aiming to spread disinformation and sway public opinion on sensitive topics through designated accounts, known as troll accounts. Past works on detecting accounts belonging to state-backed operations focus on a single campaign. While campaign-specific detection techniques are easier to build, there is no work done on developing systems that are campaign-agnostic and offer generalized detection of troll accounts unaffected by the biases of the specific campaign they belong to. In this paper, we identify several strategies adopted across different state actors and present a system that leverages them to detect accounts from previously unseen campaigns. We study 19 state-sponsored disinformation campaigns that took place on Twitter, originating from various countries. The strategies include sending automated messages through popular scheduling services, retweeting and sharing selective content and using fake versions of verified applications for pushing content. By translating these traits into a feature set, we build a machine-learning-based classifier that can correctly identify up to 94% of accounts from unseen campaigns. Additionally, we run our system in the wild and find more accounts that could potentially belong to state-backed operations. We also present case studies to highlight the similarity between the accounts found by our system and those identified by Twitter.
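One way to translate such cross-campaign traits into classifier features is sketched below; the field names, the scheduler list, and the synthetic accounts are assumptions for illustration, not the paper's exact schema or feature set.

```python
# Hedged sketch: campaign-agnostic behavioral traits as features for a
# troll-account classifier; all data structures here are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

SCHEDULERS = {"dlvr.it", "twitterfeed", "IFTTT"}   # toy list of scheduling clients

def account_features(acct: dict) -> list[float]:
    tweets = acct["tweets"]
    n = max(len(tweets), 1)
    return [
        sum(t["client"] in SCHEDULERS for t in tweets) / n,   # automated-scheduling share
        sum(t["is_retweet"] for t in tweets) / n,             # selective-amplification share
        sum(t["client"] == "fake_verified_app" for t in tweets) / n,
        len({t["client"] for t in tweets}),                   # client diversity
    ]

rng = np.random.default_rng(0)
def fake_account(troll: bool) -> dict:
    client = "dlvr.it" if troll else "Twitter Web App"
    return {"tweets": [{"client": client,
                        "is_retweet": bool(rng.random() < (0.8 if troll else 0.3))}
                       for _ in range(50)]}

accounts = [fake_account(i % 2 == 0) for i in range(200)]
X = np.array([account_features(a) for a in accounts])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)
clf = GradientBoostingClassifier().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

Because the features describe behavior rather than campaign-specific content, a model trained this way can, in principle, generalize to campaigns it has never seen, which is the paper's central claim.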
This article examines the circulation of a military-led disinformation campaign against civilians leading the pro-democracy movement in Sudan. We examine the political communication of military leaders in Sudan after the June 3 massacre, when the state opened fire on protestors in Khartoum and later declared an Internet shutdown. Our primary thesis is that a state-sponsored Internet shutdown generates a communicative environment conducive to disseminating disinformation created by the state (here, the military) to justify its violence and junta rule in the country. Insights from this case study also demonstrate how autocratic states impose Internet shutdowns to disable regional media and circulate disinformation against dissenting voices. Unlike most literature contextualised in the fully functioning democracies of the global North, our article offers a glimpse into the evolving forms of disinformation in transitioning democracies under autocratic regimes. Our findings provide theoretical provocations to explore the workings of conventional forms of control in a digitally mediated and autocratic society.
State-sponsored “bad actors” increasingly weaponize social media platforms to launch cyberattacks and disinformation campaigns during elections. Social media companies, due to their rapid growth and scale, struggle to prevent the weaponization of their platforms. This study conducts an automated spear phishing and disinformation campaign on Twitter ahead of the 2018 United States midterm elections. A fake news bot account — the @DCNewsReport — was created and programmed to automatically send customized tweets with a “breaking news” link to 138 Twitter users, before being restricted by Twitter. Overall, one in five users clicked the link, which could have potentially led to the downloading of ransomware or the theft of private information. However, the link in this experiment was non-malicious and redirected users to a Google Forms survey. In predicting users’ likelihood to click the link on Twitter, no statistically significant differences were observed between right-wing and left-wing partisans, or between Web users and mobile users. The findings signal that politically expressive Americans on Twitter, regardless of their party preferences or the devices they use to access the platform, are at risk of being spear phished on social media.
The 2022 ‘Women, Life, Freedom’ protests in Iran have led to the escalation of state-sponsored online disinformation campaigns. This paper aims to examine how, amidst a growing legitimacy crisis, the Iranian regime has employed a ‘Big Lie’ to shatter hopes for change by discrediting influential dissidents and hindering the formation of an effective opposition movement. Three target groups have borne the brunt of this strategy: celebrities, political dissidents inside the country, and prominent opponents in the diaspora. By reviewing state-owned media content and tweets, this paper reveals a consistent pattern of character assassination against dissidents. The ruling regime’s ultimate goal is to foster a sense of public hopelessness for an alternative to the Islamic Republic. By conceptualizing the Big Lie online, the study engages with the mechanism of control in modern despotism in the age of the internet and social media.
In this paper, we empirically examine the explanations for the frequency of Russian disinformation attacks on countries in Europe or countries that were formerly part of the Soviet Union from 2015 to 2021. Using negative binomial regression analysis, we find that disinformation attacks are most frequent when (1) a country was holding a national election in that year, and (2) that country was experiencing significant political unrest. These findings demonstrate that Russian disinformation is primarily motivated by an interest in influencing election results and promoting domestic unrest in target countries.
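For readers unfamiliar with this model class, a minimal negative binomial regression over synthetic country-year counts, with incidence rate ratios recovered as exponentiated coefficients, could look like this; the variable names and data are assumptions, not the authors' dataset.

```python
# Minimal negative binomial regression sketch on synthetic country-year data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "election": rng.integers(0, 2, n),   # national election held that year
    "unrest": rng.integers(0, 2, n),     # significant political unrest
})
# Simulate overdispersed attack counts with known effects baked in.
mu = np.exp(0.5 + 0.9 * df["election"] + 0.7 * df["unrest"])
df["attacks"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

model = smf.negativebinomial("attacks ~ election + unrest", data=df).fit(disp=0)
print(np.exp(model.params))   # exponentiated coefficients ~ incidence rate ratios
```

Negative binomial models are the standard choice here because attack counts are overdispersed (variance exceeds the mean), which a plain Poisson regression would understate.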
Social media has emerged as a key arena for state-sponsored disinformation campaigns, where coordinated troll accounts disseminate false narratives and manipulate public discourse. While existing research has primarily focused on detecting such troll accounts, this paper introduces the novel concept of Troll Attribution, drawing on principles from cyber threat attribution. We propose TrollSleuth, a comprehensive framework for attributing troll activity to state sponsors by analyzing linguistic and behavioral fingerprints. Our method integrates four analytical modules (Social Engagement, Word Analysis, Emotion and Sentiment Analysis, and Temporal Activity and Client Utilization Analysis) to extract distinctive features from real-world Twitter data spanning four state-sponsored campaigns. The resulting model achieves a high F1-score of 95.48% in state-sponsor identification and incorporates feature-based explanations to enhance interpretability. These findings offer actionable insights for strategic intelligence, supporting the detection and deterrence of disinformation operations, informing legal and diplomatic responses, and reinforcing defenses against state-sponsored influence campaigns. The code used in this study is publicly available at https://github.com/CyberScienceLab/Our-Papers/tree/main/TrollSleuth/.
Over the past couple of years, anecdotal evidence has emerged linking coordinated campaigns by state-sponsored actors with efforts to manipulate public opinion on the Web, often around major political events, through dedicated accounts, or “trolls.” Although they are often involved in spreading disinformation on social media, there is little understanding of how these trolls operate, what type of content they disseminate, and most importantly their influence on the information ecosystem. In this paper, we shed light on these questions by analyzing 27K tweets posted by 1K Twitter users identified as having ties with Russia’s Internet Research Agency and thus likely state-sponsored trolls. We compare their behavior to a random set of Twitter users, finding interesting differences in terms of the content they disseminate, the evolution of their account, as well as their general behavior and use of Twitter. Then, using Hawkes Processes, we quantify the influence that trolls had on the dissemination of news on social platforms like Twitter, Reddit, and 4chan. Overall, our findings indicate that Russian trolls managed to stay active for long periods of time and to reach a substantial number of Twitter users with their tweets. When looking at their ability of spreading news content and making it viral, however, we find that their effect on social platforms was minor, with the significant exception of news published by the Russian state-sponsored news outlet RT (Russia Today).
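To illustrate the Hawkes-process machinery behind such influence estimates, here is a self-contained univariate fit with an exponential kernel; this is a textbook formulation on synthetic timestamps, not the authors' multi-platform model.

```python
# Toy univariate Hawkes fit: events (e.g., troll posts) raise the intensity of
# later events; the branching ratio alpha/beta estimates how many follow-on
# events each event induces on average. Synthetic data, illustrative only.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, times, T):
    mu, alpha, beta = np.exp(params)     # log-parametrization enforces positivity
    R, ll, prev = 0.0, 0.0, None
    for t in times:
        # Recursive sum of exponential kernels from all earlier events.
        R = 0.0 if prev is None else np.exp(-beta * (t - prev)) * (R + 1.0)
        ll += np.log(mu + alpha * R)
        prev = t
    # Compensator: integral of the intensity over the observation window [0, T].
    comp = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return comp - ll

times = np.sort(np.random.default_rng(0).uniform(0, 100, 200))
res = minimize(neg_log_lik, x0=np.log([1.0, 0.5, 1.0]), args=(times, 100.0))
mu, alpha, beta = np.exp(res.x)
print("branching ratio (events spawned per event):", alpha / beta)
```

In an influence analysis, a multivariate version of this model lets one ask how strongly events in one source (e.g., troll tweets) excite events in another (e.g., 4chan or Reddit posts), which is the sense in which the paper quantifies trolls' effect on news dissemination.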
Internet misinformation and government-sponsored disinformation campaigns have been criticized for their presumed role in worsening the coronavirus disease 2019 (COVID-19) pandemic. We hypothesize that these government-sponsored disinformation campaigns have been positively associated with infectious disease epidemics, including COVID-19, over the last two decades. By integrating global surveys from the Digital Society Project, Global Burden of Disease, and other data sources across 149 countries for the period 2001–2019, we examined the association between government-sponsored disinformation and the spread of respiratory infections before the COVID-19 outbreak. Then, building on those results, we applied a negative binomial regression model to estimate the associations between government-sponsored disinformation and the confirmed cases and deaths related to COVID-19 during the first 300 days of the outbreak in each country, before vaccination began. After controlling for climatic, public health, socioeconomic, and political factors, we found that government-sponsored disinformation was significantly associated with the incidence and prevalence percentages of respiratory infections in susceptible populations during the period 2001–2019. The results also show that disinformation is significantly associated with the incidence rate ratio (IRR) of cases of COVID-19. The findings imply that governments may contain the damage associated with pandemics by ending their sponsorship of disinformation campaigns.
Disinformation and propaganda are recognized as tools of Russia’s foreign policy and hybrid wars, on a par with its other methods of coercion and intimidation. Ukraine has been one of the main objects of this method of the Kremlin’s foreign policy. At the same time, other Central and Eastern European states are also feeling informational pressure due to the spread of disinformation and narratives meant to justify the actions of the Russian state and its aggression against individual states. Romania, which borders Ukraine and became a neighbor of the Russian Federation after the illegal annexation of the Crimean Peninsula, became one of the important objects of the Kremlin’s disinformation and propaganda. The article studies the peculiarities of the Russian disinformation campaign in Romania, a country that openly demonstrates its anti-Russian sentiments. The existing discourse on the Kremlin's narratives and misinformation in the countries of Central and Eastern Europe emphasizes that in recent years Russia has begun to change its approach in each country, using the peculiarities of the regional context and internal situation to achieve its goals. The purpose of the article is to identify the main messages distributed by Russian resources in the Romanian information space in order to determine the exact goals pursued by the Kremlin in this country. Applying a combination of qualitative methods, including the case study method, content analysis of documents and scientific works, and discourse analysis of information resources, allows us to answer the following research tasks: to reveal the peculiarities of the perception of Russia in Romania; to identify the main sources of dissemination of pro-Russian theses and information in the Romanian media; and to analyze the narratives cultivated in the Romanian information space, with an interpretation of their real tasks and an assessment of the extent to which they can influence public opinion. The conclusions summarize the results of the study, indicating the direct and indirect goals the Kremlin pursues by spreading certain messages and narratives.
Internet shutdowns authorized by the state are becoming recurrent in countries under military or authoritarian rule, such as Sudan. This article examines how the military in Sudan shut down the Internet to cover up the June 3 massacre. The shutdown made it difficult for protestors and civilians to share and document the human rights violations committed by the state from June 3 to July 9, 2019. We also demonstrate how the Internet shutdowns were instrumental in circulating state-sponsored disinformation campaigns delegitimizing the protests. The article expands on existing literature to explain how information vacuums are conducive to the spread of disinformation and the weakening of on-ground protest movements. Despite the crippling effects of the Internet shutdown in Khartoum, our analysis illustrates how protestors designed technical and physical workarounds to circumvent the shutdown.
State-sponsored influence operations (SIOs) have become a pervasive and complex challenge in the digital age, particularly on social media platforms where information spreads rapidly and with minimal oversight. These operations are strategically employed by nation-state actors to manipulate public opinion, exacerbate social divisions, and project geopolitical narratives, often through the dissemination of misleading or inflammatory content. Despite increasing awareness of their existence, the specific linguistic and emotional strategies employed by these campaigns remain underexplored. This study addresses this gap by conducting a comprehensive analysis of sentiment, emotional valence, and abusive language across 2 million tweets attributed to influence operations linked to China, Iran, and Russia, using Twitter’s publicly released dataset of state-affiliated accounts. We identify distinct affective and rhetorical patterns that characterize each nation’s digital propaganda. Russian campaigns predominantly deploy negative sentiment and toxic language to intensify polarization and destabilize discourse. In contrast, Iranian operations blend antagonistic and supportive tones to simultaneously incite conflict and foster ideological alignment. Chinese activities emphasize positive sentiment and emotionally neutral rhetoric to promote favorable narratives and subtly influence global perceptions. These findings reveal how state actors tailor their information warfare tactics to achieve specific geopolitical objectives through differentiated content strategies.
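A stripped-down version of such per-campaign affect profiling might look like the following, with VADER standing in for the study's sentiment tooling and a three-word list standing in for abusive-language detection; both are stand-in assumptions, not the paper's instruments.

```python
# Sketch of per-campaign affect profiling; VADER and the toy lexicon are
# illustrative stand-ins for the study's sentiment and abuse detectors.
from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

ABUSIVE = {"idiot", "traitor", "scum"}           # toy lexicon, purely illustrative

analyzer = SentimentIntensityAnalyzer()
tweets = [                                       # (campaign origin, text) stand-ins
    ("RU", "These traitor politicians are ruining everything"),
    ("CN", "Wonderful progress brings prosperity and harmony"),
    ("IR", "Stand with us against the corrupt enemy"),
]

profiles = defaultdict(lambda: {"compound": [], "abusive": 0, "n": 0})
for country, text in tweets:
    p = profiles[country]
    p["compound"].append(analyzer.polarity_scores(text)["compound"])
    p["abusive"] += any(w in ABUSIVE for w in text.lower().split())
    p["n"] += 1

for country, p in profiles.items():
    print(country,
          "mean sentiment:", sum(p["compound"]) / p["n"],
          "abusive share:", p["abusive"] / p["n"])
```

Aggregating scores by campaign origin, as above, is what lets the study contrast, say, the negativity-heavy Russian profile against the positivity-leaning Chinese one.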
State-sponsored information operations, or SSIOs, are a growing problem across many of the information spaces we inhabit online. These instances of coordinated misinformation and propaganda have been perpetrated by over 80 state actors in the last decade, and have been used to exert influence on digital media consumption habits, discussions of contentious issues, and even national elections. Concern over the power that SSIOs wield is only growing, as the proliferation of automated tools and services makes it easier than ever to launch large-scale manipulation campaigns. But what role do such automated agents play within the broader operations in which they are deployed? Are they even successful at making an impact in information spaces online? In this work, we address both of these questions through the use of a sequence-based clustering method and advanced linear modeling. Using these methods, we investigate the relationship between agent automation, role, and network characteristics and how much success those agents achieve over the course of their lifetimes. We find that automated agents perform worse across every success metric compared to human agents, and that they play a smaller, supporting role to the primarily human SSIO workforce. What's more, we find that the extent to which agents engage in amplifying- or producing-centric roles is by far the biggest determinant of how successful they will be, highlighting the importance of social roles in the analysis of automated agents.
State-sponsored information operations (IOs) increasingly influence global discourse on social media platforms, yet their emotional and rhetorical strategies remain inadequately characterized in scientific literature. This study presents the first comprehensive analysis of toxic language deployment within such campaigns, examining 56 million posts from over 42 thousand accounts linked to 18 distinct geopolitical entities on X/Twitter. Using Google’s Perspective API, we systematically detect and quantify six categories of toxic content and analyze their distribution across national origins, linguistic structures, and engagement metrics, providing essential information regarding the underlying patterns of such operations. Our findings reveal that while toxic content constitutes only 1.53% of all posts, they are associated with disproportionately high engagement and appear to be strategically deployed in specific geopolitical contexts. Notably, toxic content originating from Russian influence operations receives significantly higher user engagement compared to influence operations from any other country in our dataset. Our code is available at https://github.com/shafin191/Toxic_IO.
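Scoring a post with the Perspective API, as the study does at scale, can be sketched as below. Perspective's standard production attributes are TOXICITY, SEVERE_TOXICITY, IDENTITY_ATTACK, INSULT, PROFANITY, and THREAT; whether these are exactly the paper's six categories is an assumption, and you would need your own API key and quota.

```python
# Minimal Perspective API call; API_KEY is a placeholder and response fields
# may vary with API versions, so treat this as a hedged sketch.
import requests

API_KEY = "YOUR_API_KEY"   # placeholder: obtain from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        # Requesting a subset of the six production attributes for brevity.
        "requestedAttributes": {"TOXICITY": {}, "INSULT": {}, "THREAT": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("You are a wonderful person"))   # expected: a low toxicity score
```

At the study's scale (56 million posts), the practical work is batching such calls under rate limits and joining the scores back to account metadata for the per-country comparisons.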
The detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge for the research community, which has significant implications beyond the online realm. To address this challenge, we propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity, encompassing both their actions and the feedback they receive from others. Our approach does not incorporate any textual content shared and consists of two steps: First, we leverage an LSTM-based classifier to determine whether account sequences belong to a state-sponsored troll or an organic, legitimate user. Second, we employ the classified sequences to calculate a metric named the “Troll Score”, quantifying the degree to which an account exhibits troll-like behavior. To assess the effectiveness of our method, we examine its performance in the context of the 2016 Russian interference campaign during the U.S. Presidential election. Our experiments yield compelling results, demonstrating that our approach can identify account sequences with an AUC close to 99% and accurately differentiate between Russian trolls and organic users with an AUC of 91%. Notably, our behavioral-based approach holds a significant advantage in the ever-evolving landscape, where textual and linguistic properties can be easily mimicked by Large Language Models (LLMs): In contrast to existing language-based techniques, it relies on more challenging-to-replicate behavioral cues, ensuring greater resilience in identifying influence campaigns, especially given the potential increase in the usage of LLMs for generating inauthentic content. Finally, we assessed the generalizability of our solution to various entities driving different information operations and found promising results that will guide future research.
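A minimal PyTorch sketch of the two-step design this abstract describes: an LSTM classifies windows of an account's encoded sharing-activity sequence, and a "Troll Score" aggregates the per-window predictions. The action encoding, window length, and the exact score definition are simplifying assumptions, not the authors' specification.

```python
# Step 1: LSTM over encoded action sequences; step 2: Troll Score as the
# fraction of an account's sequence windows classified as troll-like.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_actions: int = 10, embed: int = 16, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(n_actions, embed)   # discrete action types
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # troll logit

    def forward(self, seqs: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(self.embed(seqs))
        return self.head(h[-1]).squeeze(-1)

def troll_score(model: nn.Module, windows: torch.Tensor) -> float:
    """Fraction of an account's sequence windows classified as troll-like."""
    with torch.no_grad():
        probs = torch.sigmoid(model(windows))
    return (probs > 0.5).float().mean().item()

model = SequenceClassifier()
windows = torch.randint(0, 10, (8, 20))  # 8 windows of 20 encoded actions
print(f"Troll Score: {troll_score(model, windows):.2f}")
```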
In 2016, Russia attempted to use social media to influence the outcome of the U.S. presidential election, highlighting the potential real-world impacts of state-led online misinformation campaigns. Misinformation on social media is a growing concern, especially in the areas of politics and medicine, given their impact not only at the individual level but also for society as a whole. In this article, we investigate the potential to automatically label and detect the polarity (positive, neutral, or negative) of Iranian state-sponsored propaganda tweets on the Iranian nuclear deal. The SentiWordNet lexicon is used to automatically assign a polarity label and an objectivity score to each tweet. Using the labels, five machine learning algorithms are used to create polarity detection models. The experimental results show that the best performing models correctly identify polarity in approximately 77% of the tweets.
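A sketch of the SentiWordNet labeling step described above: sum positive and negative synset scores over a tweet's tokens and assign a polarity label. The tokenization, first-sense disambiguation, and aggregation rule are assumptions for illustration; the paper's exact procedure may differ.

```python
# SentiWordNet polarity labeling via NLTK (assumed toolchain).
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)

def tweet_polarity(tokens):
    pos = neg = 0.0
    for tok in tokens:
        synsets = list(swn.senti_synsets(tok))
        if not synsets:
            continue
        s = synsets[0]          # first sense as a crude disambiguation
        pos += s.pos_score()
        neg += s.neg_score()
    if pos == neg:
        return "neutral"
    return "positive" if pos > neg else "negative"

print(tweet_polarity("the deal is a historic success".split()))
```

Labels produced this way would then serve as training targets for the five polarity-detection classifiers the study compares.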
As technology and interconnectivity increase globally, the opportunity to wage irregular warfare (competition) has become low-cost-low-risk, simple to wage, challenging to detect, and more difficult to deter and defend. Propaganda is the most common method of covert or overt influence operations used by state actors today and often employs the same technologies, channels, and market communication techniques that firms use in more benign pursuits. We propose a framework based on Consumer Vulnerability Theory to explain the effects of state-sponsored propaganda on citizens’ propensity to become vulnerable consumers by manipulating beliefs about the availability and control of government-provided resources. Vulnerability leads citizens to employ coping strategies that help achieve the propagandist’s goals. This perspective may inform policy and public education campaigns to deter and attenuate the harmful effects of state-sponsored propaganda on citizens. Research on moderators and mediators that reduce influence now becomes more salient.
This paper presents a new computational framework for mapping state-sponsored information operations into distinct strategic units. Utilizing a novel method called multi-view modularity clustering (MVMC), we identify groups of accounts engaged in distinct narrative and network information maneuvers. We then present an analytical pipeline to holistically determine their coordinated and complementary roles within the broader digital campaign. Applying our proposed methodology to disclosed Chinese state-sponsored accounts on Twitter, we discover an overarching operation to protect and manage Chinese international reputation by attacking individual adversaries (Guo Wengui) and collective threats (Hong Kong protestors), while also projecting national strength during a global crisis (the COVID-19 pandemic). Psycholinguistic tools quantify variation in narrative maneuvers employing hateful and negative language against critics in contrast to communitarian and positive language to bolster national solidarity. Network analytics further distinguish how groups of accounts used network maneuvers to act as balanced operators, organized masqueraders, and egalitarian echo-chambers. Collectively, this work breaks methodological ground on the interdisciplinary application of unsupervised and multi-view methods for characterizing not just digital campaigns in particular, but also coordinated activity more generally. Moreover, our findings contribute substantive empirical insights around how state-sponsored information operations combine narrative and network maneuvers to achieve interlocking strategic objectives. This bears both theoretical and policy implications for platform regulation and understanding the evolving geopolitical significance of cyberspace.
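MVMC itself clusters accounts jointly across several network "views" (for example retweet, mention, and hashtag co-use graphs). As a simplified stand-in, the sketch below averages edge weights across views into one graph and applies Louvain modularity clustering with networkx; it illustrates the multi-view idea, not the authors' exact algorithm, and the view data are hypothetical.

```python
# Simplified multi-view clustering: average the views' edge weights into
# one graph, then run Louvain modularity clustering (networkx >= 2.8).
import networkx as nx

views = {  # hypothetical adjacency data for three views over 5 accounts
    "retweet": [("a", "b", 3), ("b", "c", 1), ("d", "e", 4)],
    "mention": [("a", "b", 1), ("c", "d", 2), ("d", "e", 2)],
    "hashtag": [("a", "c", 1), ("d", "e", 1)],
}

combined = nx.Graph()
for edges in views.values():
    for u, v, w in edges:
        if combined.has_edge(u, v):
            combined[u][v]["weight"] += w / len(views)
        else:
            combined.add_edge(u, v, weight=w / len(views))

communities = nx.community.louvain_communities(
    combined, weight="weight", seed=0)
print(communities)  # e.g. [{'a', 'b', 'c'}, {'d', 'e'}]
```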
In today's digital age, conspiracies and information campaigns can emerge rapidly and erode social and democratic cohesion. While recent deep learning approaches have made progress in modeling engagement through language and propagation models, they struggle with irregularly sampled data and early trajectory assessment. We present IC-Mamba, a novel state space model that forecasts social media engagement by modeling interval-censored data with integrated temporal embeddings. Our model excels at predicting engagement patterns within the crucial first 15-30 minutes of posting (RMSE 0.118-0.143), enabling rapid assessment of content reach. By incorporating interval-censored modeling into the state space framework, IC-Mamba captures fine-grained temporal dynamics of engagement growth, achieving a 4.72% improvement over state-of-the-art across multiple engagement metrics (likes, shares, comments, and emojis). Our experiments demonstrate IC-Mamba's effectiveness in forecasting both post-level dynamics and broader narrative patterns (F1 0.508-0.751 for narrative-level predictions). The model maintains strong predictive performance across extended time horizons, successfully forecasting opinion-level engagement up to 28 days ahead using observation windows of 3-10 days. These capabilities enable earlier identification of potentially problematic content, providing crucial lead time for designing and implementing countermeasures. Code is available at: https://github.com/ltian678/ic-mamba. An interactive dashboard demonstrating our results is available at: https://ic-mamba.behavioral-ds.science/.
The popular encrypted messaging and chat app WhatsApp played a key role in the election of Brazilian President Jair Bolsonaro in 2018. The present study builds on this knowledge and showcases how the app continued to be used in a governmental operation spreading false and misleading information popularly known in Brazil as the Office of Hatred (OOH). By harnessing in-depth expert interviews with documentarians of the office’s daily operations—researchers, journalists, and fact-checkers (N = 10)—this study draws up a chronology of the OOH. Via this methodological approach, we trace and chronologize events, actions, and actors associated with the OOH. Specifically, findings (a) document the rise of antipetismo and disinformation campaigns associated with attacks on the Brazilian Worker’s party from 2012 until the election of Bolsonaro in 2018, (b) describe the emergence of the OOH at the heels of the election and subsequent radicalization in WhatsApp groups, (c) provide an overview of the types of disinformation that are spread on the app by the OOH, and (d) illustrate how the OOH operates by mapping key actors and places, communicative strategies, and audiences. These findings are discussed in light of ramifications that government-sponsored forms of disinformation might have in other antidemocratic polities marked by strongman populist leadership.
Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to manipulate public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused "trolls." While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes Processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms with the exception of /pol/ where Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
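The Hawkes processes used above model how an event (for example, a URL posted on one platform) raises the short-term probability of follow-on events elsewhere. A minimal sketch of the exponential-kernel conditional intensity, with illustrative parameter values, is below; in the multivariate version the excitation weights between platforms provide the cross-platform influence estimates.

```python
# Univariate exponential-kernel Hawkes intensity:
#   lambda(t) = mu + sum over t_i < t of alpha * beta * exp(-beta*(t - t_i))
# where mu is the background rate and alpha is the expected number of
# events each event excites. Parameter values are illustrative.
import math

def intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of an exponential Hawkes process at time t."""
    return mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

events = [1.0, 1.2, 1.3, 4.0]           # hypothetical event times (hours)
print(f"{intensity(1.5, events):.3f}")  # elevated: three recent events
print(f"{intensity(9.0, events):.3f}")  # decayed back toward mu
```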
Foreign actors, particularly Russia and China, are using disinformation as a tool to sow doubts and counterfactuals within the U.S. population. This tactic is not new. From Nazi influence campaigns in the United States to the Soviets spreading lies about the origins of HIV, disinformation has been a powerful tool throughout history. The modern “information age” and the reach of the internet has only exacerbated the impact of these sophisticated campaigns. What then can be done to limit the future effectiveness of the dissemination of foreign states’ disinformation? Who has the responsibility and where does the First Amendment draw the boundaries of jurisdiction?
Disinformation campaigns originating from Russia have been a frequently debated subject in recent years. Disinformation also plays a major role in the Russian–Ukrainian war that started in February 2022. The issue has been on the agenda in the European Union in recent years, so it is not surprising that action against disinformation was added to the many sanctions the EU introduced against Russia. This paper sets out to describe the previously unprecedented ban on Russian media service providers, including the problems the provision creates for freedom of expression. In particular, it examines the content of the Decision and the Regulation, which prohibited the distribution of the Russian media outlets concerned, and the consequences of the EU legislation, before going on to critically analyse the provisions from the perspective of freedom of expression and, finally, the relevant judgments of the Court of Justice of the European Union.
Significant attention has been devoted to determining the credibility of online misinformation about the COVID-19 pandemic on social media. Here, we compare the credibility of tweets about COVID-19 to datasets pertaining to other health issues. We find that the quantity of information about COVID-19 is indeed overwhelming, but that the majority of links shared cannot be rated for credibility. Reasons for this failure to rate include widespread use of social media and news aggregators. The majority of links that could be rated came from credible sources; however, we found a large increase in the proportion of state-sponsored propaganda among non-credible and less credible URLs, suggesting that COVID-19 may be used as a vector to spread misinformation and disinformation for political purposes. Overall, results indicate that COVID-19 is unfolding in a highly uncertain information environment that may not be amenable to fact-checking as scientific understanding of the disease, and appropriate public health measures, evolve. As a consequence, public service announcements must adequately communicate the uncertainty underlying these recommendations, while still encouraging healthy behaviors.
In recent years, there has been an increased prevalence of governments and political organizations adopting state-sponsored trolls to influence public opinion through disinformation campaigns on social media platforms. This phenomenon negatively affects the political process, causes distrust in political systems, sows discord within societies, and hastens political polarization. Thus, there is a need to develop automated approaches to identify sponsored-troll accounts on social media in order to mitigate their impact on the political process and to protect people against opinion manipulation. In this paper, we argue that the behaviors of sponsored-troll accounts on social media differ from those of ordinary users because of their extrinsic motivation, and that they cannot completely hide their suspicious behaviors; these accounts can therefore be identified using machine learning approaches based solely on their behavior on the platform. We propose a set of behavioral features of users’ activities on Twitter and, based on these features, develop four classification models to identify political troll accounts, using the decision tree, random forest, AdaBoost, and gradient boost algorithms. The models were trained and evaluated on a set of Saudi trolls disclosed by Twitter in 2019, with overall classification accuracy reaching up to 94.4%. The models are also capable of identifying Russian trolls with accuracy of up to 72.6% without being trained on that set. This indicates that although the strategies of coordinated trolls may vary from one organization to another, the trolls are all just employees and share common behaviors that can be identified.
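A sketch of the classification setup described above, using scikit-learn and the paper's four tree-based model families. The feature names and data are hypothetical placeholders; the paper's actual behavioral feature set and tuning are not given in the abstract.

```python
# Four tree-based classifiers over per-account behavioral features.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical features: tweets/day, retweet ratio, reply ratio, hashtag
# rate, mean inter-tweet gap (the paper's exact feature set differs).
X = rng.random((200, 5))
y = rng.integers(0, 2, 200)  # 1 = troll, 0 = ordinary user

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "gradient boost": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} accuracy {acc:.3f}")  # ~0.5 on this random data
```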
State media plays a central role in empowering authoritarian regimes to subjugate their people. In Iran, this undemocratic practice largely takes place in the form of information manipulation, where the state-run national media generate deceptive messages to sway the masses and influence public opinion. Yet, the rapid advancements of communication technologies in the past two decades have gradually shaken this power structure in a variety of ways, among them enabling a form of participatory media literacy (PML) among dissident youth. Characterized by virtual social networking, PML has progressively grown from the youth’s online activism and political engagement, especially during periods of social unrest. Intersecting ‘participatory culture’, this form of media literacy emanates from a rather organic and spontaneous progression in knowledge and experience acquisition, which distinguishes it from conventional learning of the subject in educational institutions, both in terms of scope and the learners’ deliberation. Typically, the process involves the members’ active participation (content sharing, discussions, and critical evaluation of presumed disinformation cases) in social media and other oppositional online communities. Taking Telegram as a popular social networking platform for the young activists’ PML, this study uses netnography to examine the contents of some of the subversive Telegram channels (STCs), providing examples of the disinformation cases discussed and evaluated by the members. Ultimately, it is argued that in authoritarian nations, PML offers opportunities for nullifying the state’s disinformation campaigns and their preventive effects on the progress of social movements and political change.
No abstract available
In February 2024, the Superior Electoral Court of Brazil met in session to determine rules and regulations to guide the country's municipal elections in October 2024. Among the topics debated, the leading one was the deliberation on illicit acts in electoral advertisements in digital media concerning the use of artificial intelligence, resulting in regulations on the use of the technology in developing, producing, and distributing political advertisements in digital forms such as audio, video, or audiovisual media. The article discusses the deliberations of the Superior Electoral Court, which based its decisions on current legislation in the European Union and on the 2023 general elections in Argentina regarding production aided by artificial intelligence in electoral campaigns. Argentina's 2023 general election was paramount and now serves as the basis of a regional precedent on the use of artificial intelligence, debated here in light of reflections on media and technology from authors such as Santaella (2023), Simon (2022), Chagas (2017), and Bucci (2024).
The article deals with the problems of institutional support of information security in the context of the full-scale war between Russia and Ukraine. The strategies and tools used by the Ukrainian state and civil society in the field of information policy to counter disinformation have been studied. Global awareness of the need to counter the hostile narratives used by Russia against Ukraine and the West is a real way out of the threatening situation caused by massive Russian disinformation campaigns. Disinformation cannot be defeated solely by closing channels or social media pages; success requires the joint efforts of the media, public organizations, and the state. The authors conclude that the most effective strategy for countering disinformation should combine technological techniques for removing unwanted content with ensuring the dominance of the state's mainstream narratives in its own information space and in the information space of foreign countries. The authors also investigate the resistance of the young generation of Ukrainians to disinformation and define Ukraine's goals in the information war.
The Internet has provided a global mass communication system, and social media technologies in particular began a social revolution for the public sphere. However, these platforms have been exploited for the purposes of influence operations and disinformation campaigns to hinder or subvert national decision-making processes by affecting policy makers or voters, or by swaying general public opinion. Often this is achieved through manipulative means falling within a grey area of international and constitutional systems. Existing proposed normative frameworks for responsible state behaviour in Cyberspace have tended to focus on cyber operations. While online influence operations are recognised as a concern, they were not explicitly discussed in the frameworks, resulting in knowledge gaps related to countering influence operations and disinformation. There is a growing narrative that influence operations and disinformation campaigns are a cyber security issue, and nations sometimes include disinformation in cyber security legislation. This indicates that existing cyber norms can be used to guide the development of norms for addressing disinformation and influence operations. This paper aims to propose a normative framework for state responsibility relating to influence operations, emerging from thematic analysis of existing cyber norms and research on mitigating influence operations.
Drawing on thirteen years of personal experience researching Middle Eastern politics, I examine how digitality has eroded traditional boundaries between safety and danger, public and private, and democratic and authoritarian spaces. While digital tools initially promised to make research more accessible and secure, they have instead created new vulnerabilities through sophisticated spyware, state-sponsored harassment, and transnational repression. These challenges are compounded by the neoliberal university, which pushes researchers toward public engagement while offering little protection from its consequences. Moreover, the integrity of digital data itself has become increasingly questionable, as state actors and private companies deploy bots, fake personas, and coordinated disinformation campaigns that create “authenticity vacuums” in online spaces. This essay argues that these developments necessitate a fundamental reconsideration of digital research methodologies and ethics, offering practical recommendations for institutions and researchers to navigate this complex landscape while maintaining research integrity and protecting both researchers and their subjects.
The purpose of this research paper is to critically explore the impact of state and state-sponsored actors on the cyber environment and the future of critical infrastructure. The majority of these attacks have focused on the vulnerability of critical infrastructures, as evidenced in the cyber-attack by Russia on December 23, 2015, which caused power outages at Ukrainian power companies and affected many customers in Ukraine [8]. Considering the enormous resources available to state and state-sponsored actors, it has become difficult to detect cyber-attacks; even when an attack is discovered, proving that it was carried out by a particular state is not easy, and this deniability is now being commonly exploited by malicious states. The paper examines the effects of state and state-sponsored attacks on the cyber environment and critical infrastructures. These adverse effects include a greatly diminished defense capacity in the attacked state, destabilization of the micro-economy, and disinformation that can effectively sway public opinion in a state [1]. Consequently, by being aware of the resources at the actors' disposal and the enormous negative impacts on the cyber environment and critical infrastructures, governments, agencies, and other professionals will be prepared to protect and prioritise network and security systems as a national issue, thus encouraging public-private collaboration.
No abstract available
This exploratory study seeks to understand the diffusion of disinformation by examining how social media users respond to fake news and why. Using a mixed-methods approach in an explanatory-sequential design, this study combines results from a national survey involving 2501 respondents with a series of in-depth interviews with 20 participants from the small but economically and technologically advanced nation of Singapore. This study finds that most social media users in Singapore just ignore the fake news posts they come across on social media. They would only offer corrections when the issue is strongly relevant to them and to people with whom they share a strong and close interpersonal relationship.
The world has entered the age of social media, which has had a major impact on human society. We believe it is crucial to consider not only the benefits that social media has brought but also the problems. This paper focuses on social media’s contribution to the diffusion of disinformation online. This work discusses how users, social media companies, and the fundamental design of the platforms fuel this diffusion and lead to growing racism targeting Chinese people. Acknowledging this issue is the first step to relieving it in the long term.
Relevance. In the modern world, social media are becoming the main channels of communication, which significantly affects the formation of public opinion and the implementation of strategic narratives. In the context of global information challenges, in particular war, understanding and predicting changes in public sentiment is extremely important for the effective implementation of state information policy, as well as for combating disinformation. The subject of the study is modeling the processes of predicting changes in public opinion during the implementation of a strategic narrative through social media. The aim of the study is to analyze the effectiveness of using diffusion of innovations models and neural networks to predict changes in socio-political sentiment, as well as to optimize content strategy on social media platforms. Main results: The study showed that social media has a significant impact on public consciousness, and the use of information dissemination models, such as Bass's diffusion of innovations model, allows predicting the spread of narratives among different groups of users. The use of neural networks to analyze socio-political sentiment provided highly accurate forecasts with good quality indicators. The results of the study emphasize the importance of adapting content strategy in social media to increase the effectiveness of influencing the audience. Conclusion. The results obtained confirm that for the successful implementation of the state's strategic narrative, it is necessary to apply combined methods of forecasting and adapting content on social platforms. Successful adaptation of content strategy, taking into account changes in user behavior and trends in socio-political sentiment, is a key factor for effective influence on public opinion and support of national interests in the digital environment.
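The study above cites the Bass diffusion of innovations model for predicting narrative spread. The sketch below implements the model's discrete-time form, in which adoption is driven by external influence p ("innovators") and imitation q; the population size and coefficient values are illustrative, not taken from the study.

```python
# Bass diffusion model: dN/dt = (p + q*N/M) * (M - N), discretized.
# M = population size, p = innovation coefficient, q = imitation
# coefficient. Parameter values here are common textbook averages.
def bass_curve(M=10_000, p=0.03, q=0.38, steps=30):
    adopters, N = [], 0.0
    for _ in range(steps):
        N += (p + q * N / M) * (M - N)  # new adopters this step
        adopters.append(round(N))
    return adopters

curve = bass_curve()
print(curve[:5], "...", curve[-1])  # S-shaped growth toward saturation M
```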
The digital age provides new challenges as information travels more quickly in a system of increasing complexity. But it also offers new opportunities, as we can track and study the system more efficiently. Several studies individually addressed different digital tracks, focusing on specific aspects like disinformation production or content-sharing dynamics. In this work, we propose to study the news ecosystem as an information market by analysing three main metrics: Supply, Demand, and Diffusion of information. Working on a dataset covering Italy from December 2019 to August 2020, we validate the choice of the metrics, proving their static and dynamic relations, and their potential in describing the whole system. We demonstrate that these metrics have specific equilibrium relative levels. We reveal the strategic role of Demand in leading a non-trivial network of causal relations. We show how disinformation news Supply and Diffusion seem to cluster among different social media platforms. Disinformation also appears to be closer to information Demand than the general news Supply and Diffusion, implying a potential danger to the health of the public debate. Finally, we prove that the share of disinformation in the Supply and Diffusion of news has a significant linear relation with the gap between Demand and Supply/Diffusion of news from all sources. This finding allows for a real-time assessment of the disinformation share in the system. It also gives a glimpse of potential future developments in the modelling of the news ecosystem as an information market studied through its main drivers.
As online resources such as social media are increasingly used in disaster situations, confusion caused by the spread of false information, misinformation, and hoaxes has become an issue. Although a large amount of research has been conducted on how to suppress disinformation, i.e., the widespread dissemination of such false information, most payoff-oriented research has been based on prisoner’s dilemma experiments, and there has been no analysis of measures to deal with the actual occurrence of disinformation on disaster SNSs. In this paper, we focus on the fact that one characteristic of disaster SNS information is that it allows citizens to confirm the reality of a disaster. We refer to this as collective debunking, propose a profit-agent model for it, and analyze the model using an evolutionary game. As a result, we find experimentally that deception in the confirmation of disaster information uploaded to SNSs is likely to lead to the occurrence of disinformation. We also find that if this deception can be detected and punished, for example by patrols, the occurrence of disinformation tends to be suppressed.
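The paper analyzes its profit-agent model with an evolutionary game. As a simplified stand-in, the sketch below runs two-strategy replicator dynamics (honest vs. deceptive confirmation of disaster reports); the payoff matrix and the punishment term, a detection probability times a fine modeling "patrols", are illustrative assumptions, not the paper's values. It reproduces the qualitative finding: without sanctions deception takes over, with sanctions honesty dominates.

```python
# Two-strategy replicator dynamics: dx/dt = x * (f_honest - f_mean),
# where x is the population share of honest agents.
def replicator(payoff, x=0.5, steps=200, dt=0.05):
    for _ in range(steps):
        f_honest = payoff[0][0] * x + payoff[0][1] * (1 - x)
        f_deceit = payoff[1][0] * x + payoff[1][1] * (1 - x)
        mean = x * f_honest + (1 - x) * f_deceit
        x = min(max(x + dt * x * (f_honest - mean), 0.0), 1.0)
    return x

base = [[2.0, 0.5],   # honest payoff vs (honest, deceitful) opponents
        [3.0, 1.0]]   # deception pays more when unsanctioned
print(f"no punishment:   honest share -> {replicator(base):.2f}")

p_detect, fine = 0.6, 3.0  # patrols detect and fine deception (assumed)
punished = [row[:] for row in base]
punished[1][0] -= p_detect * fine
punished[1][1] -= p_detect * fine
print(f"with punishment: honest share -> {replicator(punished):.2f}")
```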
Genetically modified organisms (GMOs) have caused considerable controversy in China in recent years. Uncertainty about the technology, ineffective channels for releasing official information and a lack of sufficient public trust in the government and scientists have led to rampant rumours about genetic modification technology, making it hard for the public to acquire scientific knowledge about it and a rational attitude towards it. In this paper, by using as an example the rumour that genetically modified (GM) soybeans cause cancer, we discuss the content and diffusion of rumours related to genetic modification technology in the new media environment. Based on an analysis of content on the social media platform Weibo one week after the rumour began, we discovered that the ensuing cyber discussions reflected reality, that netizens expressed anxiety and panic while stressing social injustice and reflecting conflict between social classes, and that they exhibited little trust in scientists and the government. On the mechanism of diffusion of rumours on Weibo, we observed that ‘evidence’ that directly or indirectly purported to show that GM soybeans cause cancer was added to the rumours and that the rumours were ‘assimilated’ into people's perception through the stigmatization of GMOs and through conspiracy theories.
In face of public discourses about the negative effects that social media might have on democracy in Latin America, this article provides a qualitative assessment of existing scholarship about the uses, actors, and effects of platforms for democratic life. Our findings suggest that, first, campaigning, collective action, and electronic government are the main political uses of platforms. Second, politicians and office holders, social movements, news producers, and citizens are the main actors who utilize them for political purposes. Third, there are two main positive effects of these platforms for the democratic process—enabling social engagement and information diffusion—and two main negative ones—the presence of disinformation, and the spread of extremism and hate speech. A common denominator across positive and negative effects is that platforms appear to have minimal effects that amplify pre-existing patterns rather than create them de novo.
As social media become major channels for the diffusion of news and information, it becomes critical to understand how the complex interplay between cognitive, social, and algorithmic biases triggered by our reliance on online social networks makes us vulnerable to manipulation and disinformation. This talk overviews ongoing network analytics, modeling, and machine learning efforts to study the viral spread of misinformation and to develop tools for countering the online manipulation of opinions.
No abstract available
Given the role of technology and social media during the COVID-19 pandemic, the aim of this article is to conduct a social network analysis of four COVID-19 conspiracy theories that were spread during the pandemic between March and June 2020. Specifically, in this article, we examine the 5G, Film Your Hospital, Expose Bill Gates, and the Plandemic Conspiracy theories. Identifying disinformation campaigns on social media and studying their tactics and composition is an essential step toward counteracting such campaigns. The current study draws upon data from the Twitter search application programming interface and uses social network analysis to examine patterns of disinformation that may be shared across social networks with sabotaging ramifications. The findings are used to generate the framework of disinformation seeding and information diffusion for understanding disinformation and the ideological nature of conspiracy networks that can support and inform future pandemic preparedness and counteracting disinformation. Furthermore, a Digital Mindfulness Toolbox is developed to support individuals and organizations with their information management and decision-making both in times of crisis and as strategic tools for potential crisis preparation.
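A minimal sketch of the social network analysis step this study describes: build a retweet network from collected tweets and extract its communities. The records and hashtags below are hypothetical stand-ins for data from the Twitter search API, and the community algorithm is one common choice, not necessarily the authors'.

```python
# Retweet-network construction plus modularity-based community detection.
import networkx as nx

tweets = [  # (retweeting user, original author, conspiracy hashtag)
    ("u1", "u2", "#5G"), ("u3", "u2", "#5G"),
    ("u4", "u5", "#FilmYourHospital"), ("u6", "u5", "#FilmYourHospital"),
    ("u3", "u5", "#Plandemic"),
]

G = nx.DiGraph()
for src, dst, tag in tweets:
    G.add_edge(src, dst, hashtag=tag)  # edge: src retweeted dst

communities = nx.community.greedy_modularity_communities(G.to_undirected())
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```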
The rise of ubiquitous deepfakes, misinformation, disinformation, and post-truth, often referred to as fake news, raises concerns over the role of the Internet and social media in modern democratic societies. Due to its rapid and widespread diffusion, digital deception has not only an individual or societal cost, but it can lead to significant economic losses or to risks to national security. Blockchain and other distributed ledger technologies (DLTs) guarantee the provenance and traceability of data by providing a transparent, immutable, and verifiable record of transactions while creating a peer-to-peer secure platform for storing and exchanging information. This overview aims to explore the potential of DLTs to combat digital deception, describing the most relevant applications and identifying their main open challenges. Moreover, some recommendations are enumerated to guide future researchers on issues that will have to be tackled to strengthen the resilience against cyber-threats on today's online media.
Modern societies are characterized by unprecedently broad and fast diffusion of various forms of false and harmful information. Military personnel’s motivation to defend their country may be harmed by their exposure to disinformation. Therefore, specific education and training programs should be devised for the military to systematically improve (social) media literacy and build resilience against information influence activities. In this article, we put forward a useful methodological approach to designing such programs based on a case study: the process of developing a media literacy learning platform tailored to the needs of the Estonian defense forces in 2021. The approach is grounded in data on (a) the current needs and skills of the learners, (b) the kinds of influence activities that the learners may encounter, and (c) the learning design principles that would enhance their learning experience, such as learning through play and dialogue through feedback.
Disinformation is considered to be one of the most serious global risks anticipated in the next two years, and with the increasing popularity of social media, the spread of disinformation has become an issue. While Japan has the second-highest number of active X users in the world, only brief studies, such as counts of social bots, have been conducted, and one survey shows that only 5% of respondents said they were aware of terms such as echo chambers. This study analyzes the actual situation in Japan related to the spread of disinformation, focusing on social bots and echo chambers. We collected relevant data from social media on a disinformation example in Japan that became a hot topic in the previous year, and analyzed the activities of social bots and how echo chambers formed in the timeline of information diffusion. The analysis examines how social bots relate to the formation of echo chambers over time, as well as the influence of social bots on information diffusion and the role of echo chambers. The results suggest that social bots may indeed form echo chambers in the diffusion of information in Japan. We also confirmed that humans reposting within echo chambers leads to greater spread of information, and that echo chambers form before the peak of information diffusion.
The proliferation of disinformation has become an issue in recent years due to the widespread use of social media. This study analyzes the recent activities of social bots in Japan, which may be used for the purpose of spreading disinformation, and clarifies the characteristics of influential social bots. Specifically, we collected data from X (formerly Twitter) on several news items that had a great impact in Japan, and analyzed the spread of information by social bots using a combination of existing tools and methods. Our analysis compared the number and percentage of social bots on X in Japanese cases to existing research analyzing those during the 2016 US presidential election, and clarified what kind of social bots are influencing the information diffusion. In all cases we examined, our analysis showed that social bot activity in Japan was more active than during the 2016 US presidential election. We also found that humans are spreading posts created by social bots, as was the case during the 2016 US presidential election. Furthermore, we confirmed that the characteristics of social bots reposted by humans on X in Japan are similar to human accounts, and it is difficult to detect them using only the profile information on the X account page.
Social media has been on the vanguard of political information diffusion in the 21st century. Most studies that look into disinformation, political influence and fake-news focus on mainstream social media platforms. This has inevitably made English an important factor in our current understanding of political activity on social media. As a result, there has only been a limited number of studies into a large portion of the world, including the largest, multilingual and multicultural democracy: India. In this paper we present our characterisation of a multilingual social network in India called ShareChat. We collect an exhaustive dataset across 72 weeks before and during the Indian general elections of 2019, across 14 languages. We investigate the cross lingual dynamics by clustering visually similar images together, and exploring how they move across language barriers. We find that Telugu, Malayalam, Tamil and Kannada languages tend to be dominant in soliciting political images (often referred to as memes), and posts from Hindi have the largest cross-lingual diffusion across ShareChat (as well as images containing text in English). In the case of images containing text that cross language barriers, we see that language translation is used to widen the accessibility. That said, we find cases where the same image is associated with very different text (and therefore meanings). This initial characterisation paves the way for more advanced pipelines to understand the dynamics of fake and political content in a multi-lingual and non-textual setting.
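The study above clusters visually similar images to trace memes across language barriers. A common approach, assumed here for illustration rather than taken from the paper, is perceptual hashing: near-duplicate images produce hashes within a small Hamming distance. The sketch uses the Pillow and ImageHash packages; the greedy grouping rule and distance threshold are assumptions.

```python
# Greedy clustering of near-duplicate images by perceptual hash.
from pathlib import Path

import imagehash
from PIL import Image

def cluster_images(paths, max_distance=5):
    """Group images whose perceptual hashes are within max_distance bits."""
    clusters = []  # list of (representative hash, [paths])
    for path in paths:
        h = imagehash.phash(Image.open(path))
        for rep, members in clusters:
            if h - rep <= max_distance:  # Hamming distance between hashes
                members.append(path)
                break
        else:
            clusters.append((h, [path]))
    return [members for _, members in clusters]

# e.g. cluster_images(sorted(Path("memes").glob("*.jpg")))
```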
How the interconnectedness of the global system influences the network of ethnic conflict communication is the focus of this work. The study aims at understanding how the analyses of ethnic conflicts gather momentum and spread throughout the world through the review of literature, case studies, and media coverage, social media issues, and deliberate disinformation campaigns. The study employs a qualitative approach, drawing on secondary sources and theoretical frameworks to explore hypotheses that talk about how media coverage is rising in proportion to the ethnic conflict’s transnational diffusion, how social media usage is connected with diaspora mobilisation and conflict extension, and how the purposeful spread of fake news increases the intensity and geographical scope of the conflict. The Rohingya crisis is also one of the best examples of the shifts in communication networks and their impact on the ethnic conflicts by influencing the international community’s perception and response. Communications networks may have the potential to escalate conflict and ‘spread’ wrong information, but they also have the potential to create awareness and tackle conflict. In the light of these observations, the study offers recommendations in the areas of codes of ethical media practices, regulation of use of social media, especially in multi-ethnic societies, to prevent incitement, and techniques of combating disinformation.
The daily exposure of social media users to propaganda and disinformation campaigns has reinvigorated the need to investigate the local and global patterns of diffusion of different (mis)information content on social media. Echo chambers and influencers are often deemed responsible for both the polarization of users in online social networks and the success of propaganda and disinformation campaigns. This article adopts a data-driven approach to investigate the structure of communities and propaganda networks on Twitter in order to assess the correctness of these imputations. In particular, the work aims at characterizing networks of propaganda extracted from a Twitter dataset by combining the information gained by three different classification approaches, focused respectively on (i) using Tweets content to infer the “polarization” of users around a specific topic, (ii) identifying users having an active role in the diffusion of different propaganda and disinformation items, and (iii) analyzing social ties to identify topological clusters and users playing a “central” role in the network. The work identifies highly partisan community structures along political alignments; furthermore, centrality metrics proved to be very informative to detect the most active users in the network and to distinguish users playing different roles; finally, polarization and clustering structure of the retweet graphs provided useful insights about relevant properties of users exposure, interactions, and participation to different propaganda items.
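A sketch of the centrality step named in approach (iii) above: rank users in a retweet graph to find those playing a "central" role. The edges are hypothetical and networkx is an assumed tool; the paper combines this with the content-based polarization and diffusion-role classifiers it describes.

```python
# PageRank and betweenness centrality over a small retweet graph.
import networkx as nx

G = nx.DiGraph([("u1", "hub"), ("u2", "hub"), ("u3", "hub"),
                ("hub", "u4"), ("u4", "u5"), ("u2", "u5")])

pagerank = nx.pagerank(G)
betweenness = nx.betweenness_centrality(G)
for user in sorted(G, key=pagerank.get, reverse=True):
    print(f"{user:4s} pagerank={pagerank[user]:.3f} "
          f"betweenness={betweenness[user]:.3f}")
```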
No abstract available
Social media, seen by some as the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of adversarial manipulation tactics on the quality of content. We find that the presence of hub accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation. Among the explored tactics that bad actors can employ, infiltrating a community is the most likely to make low-quality content go viral. Such harm can be further compounded by inauthentic agents flooding the network with low-quality, yet appealing content, but is mitigated when bad actors focus on specific targets, such as influential or vulnerable individuals. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
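A highly simplified illustration of the kind of simulation used above: agents reshare content with probability tied to its quality, while infiltrated inauthentic accounts flood the stream with low-quality but engineered-to-be-appealing items. This is not the authors' model; every number below is an assumption made to show the mechanism.

```python
# Toy simulation: how infiltration by bad actors depresses the average
# quality of shared content.
import random

random.seed(0)

def average_shared_quality(n_agents=200, n_bad=20, steps=2000):
    qualities = []
    for _ in range(steps):
        if random.random() < n_bad / (n_agents + n_bad):
            q, share_p = 0.1, 0.9   # low quality, engineered appeal
        else:
            q = random.random()     # organic content quality
            share_p = q             # shared in proportion to quality
        if random.random() < share_p:
            qualities.append(q)
    return sum(qualities) / len(qualities)

print(f"with infiltration:    {average_shared_quality(n_bad=20):.3f}")
print(f"without infiltration: {average_shared_quality(n_bad=0):.3f}")
```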
No abstract available
The study combines domain expertise and computational community detection to uncover what role citizen journalists and social media platforms play in mediating the dynamics of conflict in Mali. Under conditions of the growing conflict in Mali, citizen journalists are opening Twitter (rebranded as X) accounts to stay updated and tweet about the ongoing socio-political tensions, chronicling life in a conflict-ravaged context. This article conceptualizes the rapid reliance on Twitter among citizen journalists consisting of bloggers, activists, government officials and NGOs as a form of networked conflict and networked journalism. Networked journalism emerges as professional journalists adopt tools and techniques used by nonprofessionals (and vice versa) to gather and disseminate information, while networked conflict involves the consequential and intricate relationship between social media and conflict in the Sahel region of Africa. Our findings show that Twitter is a source of action that promotes and mediates conflict, exposing users to conflict-related content. The findings also show that what counts as citizen journalism in a conflict setting is vague, as those with access to Twitter, and thus the presumed ability to influence the narrative, unequivocally consider themselves citizen journalists.
Digital technologies are foundational to the authoritarian Far Right's plan to “eradicate transgenderism” from public life. In this commentary, we sketch out how the Far Right has weaponized digital anti-trans disinformation to bolster authoritarianism in the United States. Far Right actors have framed social media platforms as causing transness at the same time that they wield the same tools themselves to create and distribute disinformation about trans people and wage a culture war against “gender ideology.” We note how anxieties over both reproduction in service to the nation state and the potential fertility of gender nonconforming children and adolescents have been central to the production of this trans moral panic. Second, we ask, how might trans people create digital information ecologies beyond mere opposition to anti-trans disinformation? Rather than perpetually taking a reactive, or reactionary, stance to the constant influx of trans disinformation, we consider the political potential of alternative trans information ecologies.
Public trust in the news media has eroded in the United States. This study examines how perceptions of misinformation (PMI) and disinformation (PDI) affect the consumption of traditional media, social media, and artificial intelligence (AI) news, and whether this relationship is moderated by political ideology and media trust. Findings from a pre-registered experiment (N = 637) revealed that PMI and PDI regarding traditional and social media news lowered intentions to consume news from traditional, social, and AI sources. We found no significant moderating effect of political ideology or media trust. The implications of the findings are discussed.
This study examines the linguistic features of political disinformation on Indonesian social media during the 2024 presidential election using Systemic Functional Linguistics (SFL). The study employed a qualitative descriptive method, collecting data from Twitter (now X) and Instagram posts related to the 2024 Indonesian presidential election, focusing on posts containing clear evidence of disinformation. The analysis mapped the lexicogrammatical features of disinformation at three levels: ideational, interpersonal, and textual metafunctions to explain how these features shape the overall discourse. The study reveals frequent use of Material and Relational processes to misrepresent political figures, while Verbal processes distort statements through selective quoting. Attitude, engagement, and graduation features are also prominent, with disinformation posts expressing strong negative judgments about political opponents. Engagement techniques, such as selective citation and heteroglossia, create an illusion of balanced argument, while graduation features amplify emotional intensity through exaggerated language and forceful assertions. Disinformation posts rely on declarative clauses, rhetorical questions, and high modality to present falsehoods as factual, while causal conjunctions and marked themes enhance the coherence of biased narratives. The study underscores the need for a metalinguistic approach to social media literacy, equipping users with tools to critically analyze disinformation.
This study explores the role of User-Generated Content (UGC) in shaping public discourse, digital activism, and disinformation within the context of Nigeria’s #EndSARS protest movement. Drawing on a sentiment analysis of social media data collected via Brand24 and a thematic literature review, the research examines how UGC influences perceptions of governance, institutional accountability, and police misconduct. Findings reveal a predominance of negative sentiment, driven by public dissatisfaction with systemic injustices, alongside neutral and positive expressions reflecting political commentary and civic resilience. Key themes discussed include youth-led activism, digital dissent, and the legacy of the EndSARS protests. While UGC empowered civic engagement and amplified marginalized voices, it also exposed users to misleading content, illustrating the complex dynamics between participatory media and information credibility. The study contributes to the literature by demonstrating the dual potential of UGC in both advancing and undermining democratic discourse in African digital ecosystems. It also proposes a methodological framework for future research combining digital metrics with qualitative thematic analysis.
In today's digital era, disinformation presents a significant challenge, affecting politics, society, and the economy. The rapid dissemination of false information, facilitated by social media platforms and online anonymity, renders traditional fact-checking and educational campaigns increasingly ineffective. This paper explores the potential of blockchain technology to combat disinformation through its core attributes of security, transparency, and immutability. By leveraging a decentralized approach, blockchain can trace the origin and alterations of information, aiding in its verification and helping to maintain public trust. Our research reviews multiple studies that address the intersection of disinformation and blockchain technology, providing a comprehensive overview of existing solutions and identifying gaps that our proposed system seeks to address. We propose a blockchain-based system for verifying news on social media using a social trust model that integrates AI and human insights to enhance content verification accuracy. This system aims to curb the spread of misleading content by linking the dissemination of false information to user reputation, thereby promoting a more trustworthy information ecosystem.
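The paper above proposes tracing the origin and alterations of information on a blockchain. As a minimal sketch of the underlying provenance idea, the code below builds a local hash chain in which each verification record commits to the previous record's hash, so any later tampering is detectable; a production system would replace the Python list with a distributed ledger, and all field names are illustrative.

```python
# Hash-chained verification records: a toy stand-in for a ledger.
import hashlib
import json
import time

def add_record(chain, content, verdict, reviewer):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "verdict": verdict,        # e.g. AI score combined with human review
        "reviewer": reviewer,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_record(chain, "Claim: vaccine X causes Y", "false", "factcheck.org")
print(verify(chain))  # True; altering any record afterwards flips this
```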
The spread of misinformation has emerged as a global concern. Academic attention has recently shifted to emphasize the role of political elites as drivers of misinformation. Yet, little is known of the relationship between party politics and the spread of misinformation, in part due to a dearth of cross-national empirical data needed for comparative study. This article examines which parties are more likely to spread misinformation, by drawing on a comprehensive database of 32M tweets from parliamentarians in 26 countries, spanning 6 years and several election periods. The dataset is combined with external databases such as Parlgov and V-Dem, linking the spread of misinformation to detailed information about political parties and cabinets, thus enabling a comparative politics approach to misinformation. Using multilevel analysis with random country intercepts, we find that radical-right populism is the strongest determinant for the propensity to spread misinformation. Populism, left-wing populism, and right-wing politics are not linked to the spread of misinformation. These results suggest that political misinformation should be understood as part and parcel of the current wave of radical right populism, and its opposition to liberal democratic institutions.
The purpose of this study was to investigate how artificial intelligence (AI) influences and improves computational propaganda and misinformation efforts. The researcher was motivated by the growing complexity of AI-driven technologies, like deepfakes, bots, and algorithmic manipulation, which have turned conventional propaganda strategies into more widespread and damaging media manipulation techniques. The study used a mixed-methods approach, combining quantitative data analysis from academic studies and digital forensic investigations with qualitative case studies of misinformation efforts. The results brought to light important tactics, including the platform-specific use of X (formerly Twitter) to propagate false information, emotional exploitation through fear-based messaging, and purposeful amplification through bot networks. According to this research, AI technologies enhanced controversial content by taking advantage of algorithmic biases, generating echo chambers and eroding confidence in democratic processes. The study also emphasized how deepfake technologies, and their ability to manipulate susceptible populations' emotions, present ethical and sociopolitical issues. In order to counteract AI-generated misinformation, the study suggested promoting digital literacy and creating more potent detection methods, such as digital watermarking. Future studies should concentrate on the long-term psychological effects of AI-driven misinformation on democratic participation, public trust, and regulatory reactions in various countries. Furthermore, investigating how new AI technologies are influencing other media, like video games and virtual reality, may help us better comprehend how they affect society as a whole.
The pervasive abuse of misinformation to influence public opinion on social media has become increasingly evident in various domains, encompassing politics, as seen in presidential elections, and healthcare, most notably during the recent COVID-19 pandemic. This threat has grown in severity as the development of Large Language Models (LLMs) empowers manipulators to generate highly convincing deceptive content with greater efficiency. Furthermore, the recent strides in chatbots integrated with LLMs, such as ChatGPT, have enabled the creation of human-like interactive social bots, posing a significant challenge to both human users and the social-bot-detection systems of social media platforms. These challenges motivate researchers to develop algorithms to mitigate misinformation and social media manipulation. This tutorial introduces advanced machine learning research that serves this goal, including (1) detection of social manipulators, (2) learning causal models of misinformation and social manipulation, and (3) LLM-generated misinformation detection. In addition, we also present possible future directions.
In this study, we examine the role of Twitter as a first line of defense against misinformation by tracking the public engagement with, and the platform’s response to, 500 tweets concerning the Russo-Ukrainian conflict that were identified as misinformation. Using a real-time sample of 543,475 of their retweets, we find that users who geolocate themselves in the U.S. both produce and consume the largest portion of misinformation; however, accounts claiming to be in Ukraine are the second-largest source. At the time of writing, 84% of these tweets were still available on the platform, especially those with an anti-Russia narrative. For those that did receive some sanctions, the retweeting rate had already stabilized, pointing to the ineffectiveness of the measures in stemming their spread. These findings point to the need for a change in the existing anti-misinformation ecosystem. We propose several design and research guidelines for its possible improvement.
This study employed a hybrid methodological approach that integrated machine learning, natural language processing, and deep learning to evaluate AI algorithms for real-time misinformation detection. Using a dataset of 10,000 entries balanced across true, false, and uncertain claims, models were trained and tested on accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. Real-time capabilities were assessed on 5,000 live social media posts collected during the Trump versus Harris debate, allowing a critical evaluation of the models in real-world settings. Data were sourced from reputable news outlets, misinformation sites, and social media platforms, employing relevant hashtags and keywords related to misinformation narratives. The results show that transformer-based models, particularly bidirectional encoder representations from transformers (BERT) and the generative pretrained transformer, outperformed traditional machine learning models like support vector machines, Naive Bayes, and Random Forest, demonstrating superior accuracy, precision, and contextual understanding. BERT achieved the highest performance with an accuracy of 94.8% and a precision of 93.5%. However, the computational demands of these models posed significant challenges for real-time deployment, highlighting the need for optimization strategies such as hyperparameter tuning and model compression. The study also addressed ethical concerns, using adversarial testing and interpretability tools like local interpretable model-agnostic explanations to ensure fairness and transparency. Models trained on fact-checked datasets outperformed those trained on unverified social media data, underscoring the impact of training data quality on model performance.
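A sketch of the BERT setup this study evaluates: fine-tune bert-base-uncased for three-way claim classification (true / false / uncertain), using the Hugging Face transformers library as an assumed toolchain. The label set matches the abstract; the training data, hyperparameters, and maximum sequence length are placeholders, since the paper's exact configuration is not given.

```python
# Three-way misinformation classifier on top of bert-base-uncased.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["true", "false", "uncertain"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

def classify(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]

# Before fine-tuning, the classification head is randomly initialized;
# after training on a labeled, fact-checked corpus, classify() yields
# the three-way verdicts scored in the study.
print(classify(["The earth is flat.",
                "Water boils at 100 C at sea level."]))
```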
No abstract available
Competition among news sources over public opinion can incentivize them to resort to misinformation. Sharing misinformation may lead to a short-term gain in audience engagement but ultimately damages the credibility of the source, resulting in a loss of audience. To understand the rationale behind news sources sharing misinformation, we model the competition between sources as a zero-sum sequential game, where news sources decide whether to share factual information or misinformation. Each source influences individuals based on their credibility, the veracity of the article, and the individual’s characteristics. We analyze this game through the concept of quantal response equilibrium, which accounts for the bounded rationality of human decision-making. The analysis shows that the resulting equilibria reproduce the credibility-opinion distribution of real-world news sources, with hyperpartisan sources spreading the majority of misinformation. Our findings provide insights for policymakers to mitigate the spread of misinformation and promote a more factual information landscape.
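The abstract does not give the functional form of the equilibrium concept; the standard logit specification of quantal response, on which such analyses are typically built, assigns each action a choice probability that increases smoothly with its expected payoff:

```latex
% Logit quantal response for player i: actions with higher expected
% utility u_i(a) are chosen more often, but not deterministically.
% \lambda >= 0 tunes rationality: \lambda -> \infty recovers best
% response, \lambda = 0 gives uniform randomization.
P_i(a) = \frac{\exp\big(\lambda\, u_i(a)\big)}{\sum_{a' \in A_i} \exp\big(\lambda\, u_i(a')\big)}
```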
This study examines climate and energy misinformation in Taiwan using data from fact-checkers. Our findings highlight four primary themes: renewable delayism, distrust in power infrastructure, nuclear distraction, and misleading climate action. Renewable delayism exaggerates the limitations and negative impacts of renewable energy, particularly solar power, to delay its adoption. Distrust in power infrastructure spreads fear about the reliability and safety of Taiwan’s electric grid, undermining public confidence in government energy management. Nuclear distraction shifts focus from renewable energy to nuclear power and spreads misinformation about Japan’s nuclear wastewater. Misleading climate action is a broad category that either caricatures climate advocacy or creates undue anxiety about the consequences of addressing climate change. Much of this misinformation originates from Chinese-speaking cyberspace, with some evidence of state-sponsored operations. These activities erode trust in climate and energy policies, create confusion, and potentially paralyze necessary actions. This study contributes to the broader literature by offering insights from a non-Western context and emphasizing the importance of considering local media environments in tackling climate misinformation.
In the post-truth age, political conspiracies circulate rapidly on social media, cultivating false narratives, while challenging the public’s ability to distinguish truth from fiction. ‘Deepfakes’ represent the most recent type of misinformation. They display deceitful representations of events to lead audiences to believe in fabricated realities. There has been limited research on deepfakes in political communications. As this technology progresses, deepfakes look deceptively authentic; thus, it is necessary to explore their effects on public perceptions. This study examines viewers’ comments on an Instagram-published deepfake video of Hillary Clinton to understand the impact of this technology. The results demonstrate that individuals struggle to identify deepfake videos and that their opinions are affected by this persuasive type of misinformation. This study also explores different ethical concerns posed by political deepfakes. By offering insights into public reactions to manipulated content, this study contributes to our understanding of the political effects of AI-fabricated content.
Climate change is becoming a new front in the culture wars, with YouTube as one of its key arenas. Centered on an “Alternative Influence Network” orbiting Spain’s right-wing populist party Vox, this article examines the underexplored role of YouTube political influencers in propagating climate misinformation. Using thematic analysis, it uncovers instances of “post-denial” narratives that accept the reality of climate change while targeting climate policy and the climate movement, often through conspiracy theories and misogynistic rhetoric. Disagreements extend beyond policy specifics, intertwining with ongoing culture wars against a “woke wave” encompassing feminism, anti-racism, and now environmentalism. Amidst escalating opposition to Net Zero policies, the study sheds light on how these climate narratives reinforce “us” vs “them” binaries and appeal to feelings of resentment among young white males disoriented by rapid cultural change, who increasingly turn to YouTube for news and community. Despite these divisions, the study identifies potential common ground in environmental values and benefits like clean air.
Older adults habitually encounter misinformation, yet little is known about their experiences with it. In this study, we employed a mixed-methods approach, combining a survey (n=119) with semi-structured interviews (n=21), to investigate how older adults in America conceptualize, discern, and contextualize social media misinformation. Given the historical context of misinformation being used to influence voting outcomes, our study specifically examined this phenomenon from a voting intention perspective. Our findings reveal that 62% of participants intending to vote Democrat perceived a manipulative political purpose behind the spread of misinformation, whereas only 5% of those intending to vote Republican believed that misinformation serves a political dissent purpose. Regardless of voting intentions, most participants relied on source heuristics and fact-checking to discern truth from misinformation on social media. A major concern among participants was the biased reasoning influenced by personal values and emotions affected by misinformation. Notably, 74% of participants intending to vote Democrat were concerned that misinformation would escalate extremism in the future. In contrast, those intending to vote Republican, those undecided, or those planning to abstain expressed concerns that misinformation would further erode trust in democratic institutions, particularly in public health and free and fair elections. During our interviews, we discovered that 63% of participants intending to vote Republican mentioned that Republican or conservative voices often disseminate misinformation, even though these participants were closely aligned with this political ideology.
Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants’ agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3’s output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.
No abstract available
The use of propaganda has spiked on mainstream and social media, aiming to manipulate or mislead users. While efforts to automatically detect propaganda techniques in textual, visual, or multimodal content have increased, most of them primarily focus on English content. The majority of the recent initiatives targeting medium to low-resource languages produced relatively small annotated datasets, with a skewed distribution, posing challenges for the development of sophisticated propaganda detection models. To address this challenge, we carefully develop the largest propaganda dataset to date, ArPro, comprised of 8K paragraphs from newspaper articles, labeled at the text span level following a taxonomy of 23 propagandistic techniques. Furthermore, our work offers the first attempt to understand the performance of large language models (LLMs), using GPT-4, for fine-grained propaganda detection from text. Results showed that GPT-4’s performance degrades as the task moves from simply classifying a paragraph as propagandistic or not, to the fine-grained task of detecting propaganda techniques and their manifestation in text. Compared to models fine-tuned on the dataset for propaganda detection at different classification granularities, GPT-4 is still far behind. Finally, we evaluate GPT-4 on a dataset consisting of six other languages for span detection, and results suggest that the model struggles with the task across languages. We made the dataset publicly available for the community.
In today's digital age, characterized by rapid news consumption and increasing vulnerability to propaganda, fostering citizens' critical thinking is crucial for stable democracies. This paper introduces the design of ClarifAI, a novel automated propaganda detection tool designed to nudge readers towards more critical news consumption by activating the analytical mode of thinking, following Kahneman's dual-system theory of cognition. Using Large Language Models, ClarifAI detects propaganda in news articles and provides context-rich explanations, enhancing users' understanding and critical thinking. Our contribution is threefold: first, we propose the design of ClarifAI; second, in an online experiment, we demonstrate that this design effectively encourages news readers to engage in more critical reading; and third, we emphasize the value of explanations for fostering critical thinking. The study thus offers both a practical tool and useful design knowledge for mitigating propaganda in digital news.
At least since Francis Bacon, the slogan 'knowledge is power' has been used to capture the relationship between decision-making at a group level and information. We know that being able to shape the informational environment for a group is a way to shape their decisions; it is essentially a way to make decisions for them. This paper focuses on strategies that are intentionally, by design, impactful on the decision-making capacities of groups, effectively shaping their ability to take advantage of information in their environment. Among these, the best known are political rhetoric, propaganda, and misinformation. The phenomenon this paper brings out from these is a relatively new strategy, which we call slopaganda. According to The Guardian, News Corp Australia is currently churning out 3000 'local' generative AI (GAI) stories each week. In the coming years, such 'generative AI slop' will present multiple knowledge-related (epistemic) challenges. We draw on contemporary research in cognitive science and artificial intelligence to diagnose the problem of slopaganda, describe some recent troubling cases, then suggest several interventions that may help to counter slopaganda.
Propagandists use rhetorical devices that rely on logical fallacies and emotional appeals to advance their agendas. Recognizing these techniques is key to making informed decisions. Recent advances in Natural Language Processing (NLP) have enabled the development of systems capable of detecting manipulative content. In this study, we look at several Large Language Models and their performance in detecting propaganda techniques in news articles. We compare the performance of these LLMs with transformer-based models. We find that, while GPT-4 demonstrates superior F1 scores (F1=0.16) compared to GPT-3.5 and Claude 3 Opus, it does not outperform a RoBERTa-CRF baseline (F1=0.67). Additionally, we find that all three LLMs outperform a Multi-Granularity Network (MGN) baseline in detecting instances of one out of six propaganda techniques (name-calling), with GPT-3.5 and GPT-4 also outperforming the MGN baseline in detecting instances of appeal to fear and flag-waving.
The rise of social media in the digital era poses unprecedented challenges to authoritarian regimes that aim to influence public attitudes and behaviors. To address these challenges, we argue that authoritarian regimes have adopted a decentralized approach to produce and disseminate propaganda on social media. In this model, tens of thousands of government workers and insiders are mobilized to produce and disseminate propaganda, and content flows in a multidirectional, rather than a top‐down manner. We empirically demonstrate the existence of this new model in China by creating a novel data set of over five million videos from over 18,000 regime‐affiliated accounts on Douyin, a popular social media platform in China. This paper supplements prevailing understandings of propaganda by showing theoretically and empirically how digital technologies are transforming not only the content of propaganda, but also how propaganda materials are produced and disseminated.
This study explores bias and propaganda detection in Arabic social media narratives surrounding the Israel-Gaza War (2023). Given the influence of biased content on public opinion and political discourse, we employ Natural Language Processing (NLP) techniques, Arabic BERT transformers, and traditional machine learning models, including SVM, Logistic Regression, and SGD. Using the Sina dataset from FigNews 2024 (October 7, 2023 - January 31, 2024), AraBERT achieves the highest bias-detection accuracy (74.00%), with a precision of 68.00%, recall of 64.00%, and F1-score of 65.00%. For propaganda detection, SVM attains the highest accuracy (67.42%), with a precision of 56.40%, recall of 51.51%, and an F1-score of 46.37%. The results highlight the impact of class imbalance on recall and F1-score, affecting overall performance. While BERT models excel in capturing linguistic nuances, traditional classifiers remain competitive for specific tasks. Future work will focus on mitigating class imbalance and exploring hybrid approaches to improve Arabic text classification in conflict-related discourse.
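For context, the "traditional" side of this comparison is typically a sparse-features pipeline. A minimal sketch of a TF-IDF plus linear SVM baseline of the kind evaluated here; the texts and labels are toy stand-ins, not the Sina dataset:

```python
# TF-IDF + linear SVM baseline for bias classification, of the kind
# compared against AraBERT in the study. Texts/labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

texts = ["نص محايد عن الأحداث", "نص منحاز بشكل واضح",
         "تقرير إخباري عادي", "خطاب دعائي مباشر"]
labels = ["unbiased", "biased", "unbiased", "biased"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(classification_report(labels, clf.predict(texts)))
```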
Cyber propaganda has become an increasingly sophisticated tool for manipulating public perception and discourse within online social networks (OSNs). The effectiveness of cyber propaganda is strongly influenced by the interplay between individual awareness and the underlying topology of OSNs that facilitates the spread of propaganda. However, existing interventions primarily focus on continuous control strategies, which may not be feasible in certain real-world scenarios. Therefore, effectively suppressing the spread of cyber propaganda while taking into account the above impact factors remains a challenging problem. In this study, we propose a methodology that combines the optimal impulse control (OIC) theory with a novel propagation model to address this problem. Our propagation model is the first to take into account the effects of the cognitive differences and interconnectivity of OSNs on the dynamics of cyber propaganda. By employing the OIC framework and our newly developed propagation model, we formulate an OIC problem. The goal is to find impulse strategies that optimally balance the cost of intervention against its effectiveness. Using the impulse maximum principle, we establish the necessary conditions for optimal impulse strategies and construct an algorithm to solve the OIC problem. Our numerical experiments, conducted on three distinct social networks, demonstrated that: 1) awareness levels play a crucial role in effectively suppressing the spread of cyber propaganda on OSNs; and 2) our impulse strategies are significantly superior to random strategies in terms of suppression effect, thereby evidencing their cost-effectiveness.
Social networks are a battlefield for political propaganda. Protected by the anonymity of the internet, political actors use computational propaganda to influence the masses. Their methods include the use of synchronized or individual bots, multiple accounts operated by one social media management tool, or different manipulations of search engines and social network algorithms, all aiming to promote their ideology. While computational propaganda influences modern society, it is hard to measure or detect. Furthermore, with the recent exponential growth of large language models (LLMs) and growing concerns about information overload, which make the alternative truth spheres noisier than ever before, the complexity and magnitude of computational propaganda are expected to increase, making its detection even harder. Propaganda in social networks is disguised as legitimate news sent from authentic users, smartly blending real users with fake accounts. We seek here to detect efforts to manipulate the spread of information in social networks through one of the fundamental macro-scale properties of rhetoric: repetitiveness. We use 16 datasets totaling 13 GB, 10 related to political topics and 6 related to non-political ones (large-scale disasters), each ranging from tens of thousands to a few million tweets. We compare them and identify statistical and network properties that distinguish between these two types of information cascades. These features are based on the repetition distribution of hashtags and user mentions, as well as the network structure. Together, they enable us to distinguish (p-value = 0.0001) between the two classes of information cascades. In addition to constructing a bipartite graph connecting words and tweets for each cascade, we develop a quantitative measure and show how it can be used to distinguish between political and non-political discussions. Our method is indifferent to a cascade's country of origin, language, or cultural background, since it is based only on the statistical properties of repetitiveness and word appearance in the tweet bipartite network structure.
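Both signals named here are easy to picture in miniature: a repetition distribution over hashtags and a bipartite graph linking words to the tweets containing them. A toy sketch with invented tweets (the paper's 16 datasets are not reproduced):

```python
# Hashtag repetition distribution plus a word-tweet bipartite graph,
# the two macro-scale signals used to separate coordinated political
# cascades from organic ones. Tweets are toy examples.
from collections import Counter
import networkx as nx

tweets = ["vote now #freedom #freedom", "#freedom is ours",
          "big storm incoming", "#freedom #victory today"]

# 1) Repetition distribution: how often each hashtag recurs.
hashtags = [w for t in tweets for w in t.split() if w.startswith("#")]
print(Counter(hashtags))   # a heavy-tailed distribution hints at coordination

# 2) Bipartite graph linking words to the tweets containing them.
G = nx.Graph()
for i, t in enumerate(tweets):
    for w in set(t.split()):
        G.add_edge(("tweet", i), ("word", w))
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```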
Purpose: A large part of the misinformation, fake news, and propaganda spread on social media originates from content disseminated via online social network platforms, such as X (formerly Twitter) and Facebook. The control and filtering of digital media pose significant challenges and threats to online social networking. This paper aims to understand how propaganda infiltrates news articles, which is critical for fully grasping its impact on daily life. Design/methodology/approach: This study introduces a pre-trained language model framework, called ProST, to detect propaganda in text-based news articles. ProST addresses two tasks: identifying propaganda spans and classifying propaganda techniques. For span identification, we built a model combining a pre-trained RoBERTa model with long short-term memory and begin-inside-outside-end (BIOE) tagging to detect propaganda spans. The technique classification model uses contextual features and a RoBERTa-based approach. This study, conducted on the SemEval-2020 dataset (comprising 536 news articles), demonstrates performance comparable to state-of-the-art methods. Findings: The results indicate that the ProST model is highly effective in detecting propaganda in news articles; it accurately identifies propaganda spans and classifies techniques with high precision, benefitting from sentence- and span-level feature pruning. Originality/value: The ProST model offers a novel approach to identifying propaganda in online news articles with diverse webs of information. To the best of our knowledge, this is the first framework capable of classifying both propaganda spans and techniques in textual news. Accordingly, ProST represents a significant advancement in the field of propaganda detection.
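The span-identification architecture reads as encoder plus recurrent tagger. A minimal sketch of that shape, with a pre-trained RoBERTa encoder feeding a BiLSTM and a per-token BIOE classifier; hidden sizes and the four-tag inventory are assumptions, not the authors' configuration:

```python
# RoBERTa encoder -> BiLSTM -> per-token BIOE tag logits, the general
# shape of the span-identification model described above. The hidden
# size and the 4-tag BIOE scheme are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class SpanTagger(nn.Module):
    def __init__(self, num_tags=4):            # B, I, O, E
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, 256,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(512, num_tags)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(hidden)
        return self.classifier(out)             # (batch, seq_len, num_tags)

tok = RobertaTokenizerFast.from_pretrained("roberta-base")
batch = tok(["Our glorious leader will crush the traitors."],
            return_tensors="pt")
logits = SpanTagger()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)                              # per-token BIOE scores
```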
The paper presents the development of a smart tool for automated analysis of news text content to identify propaganda narratives and disinformation. The relevance of the project stems from the growing information threat in the context of hybrid war, in particular in the Ukrainian information space. The proposed solution is implemented as a browser plugin that provides instant analysis of content without the need to switch to third-party services. The methodology is based on modern natural language processing (NLP) and deep learning methods (in particular, BERT models) to classify content by level of propaganda impact and to identify key narratives. As part of the study, modern transformer models for text analysis, in particular BERT, were used. For the propaganda classification task, pre-trained GloVe vectors optimised for news articles provided the best results among the options considered. The BERT model, in turn, was used to classify narratives, showing higher accuracy on texts reflecting subjective opinions. The adaptation included a multilingual version of BERT (multilingual BERT), since it works effectively with Ukrainian-language data, a key advantage for localised analysis in the context of information warfare. Before applying BERT, texts were pre-processed with added syntactic, punctuation, emotional and stylistic features, which increased classification accuracy. For a more complete and reliable assessment of the propaganda and narrative classification models, a set of key metrics was used (propaganda/narratives): Accuracy (0.94/0.86), Precision (0.95/0.69), Recall (0.96/0.71) and F1-score (0.96/0.70). The developed model showed high accuracy: the F1-score was 0.96 for the propaganda classification task and 0.70 for the narrative classification task, significantly exceeding similar approaches, in particular XGBoost (0.92 and 0.50, respectively). In addition, the system fully supports Ukrainian-language content, which is its key competitive advantage. Practical applications of the tool cover journalism, fact-checking, analytics, and improving media literacy among citizens, contributing to the state's information security.
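The narrative classifier rests on multilingual BERT's Ukrainian coverage. A hedged sketch of loading such a model for Ukrainian text; the narrative classes below are invented for illustration, not the tool's taxonomy, and the classification head here is untrained:

```python
# Multilingual BERT for Ukrainian-language classification, the model
# choice described above for narrative labels. The candidate classes
# are illustrative assumptions; the head is randomly initialized.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

narratives = ["discrediting_authorities", "heroization", "neutral"]  # assumed
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(narratives))

text = "Влада знову бреше своїм громадянам."    # toy Ukrainian input
batch = tok(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)
print(dict(zip(narratives, probs[0].tolist())))
```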
During the COVID-19 pandemic, social media platforms emerged as both vital information sources and conduits for the rapid spread of propaganda and misinformation. However, existing studies often rely on single-label classification, lack contextual sensitivity, or use models that struggle to effectively capture nuanced propaganda cues across multiple categories. These limitations hinder the development of robust, generalizable detection systems in dynamic online environments. In this study, we propose a novel deep learning (DL) framework grounded in fine-tuning the RoBERTa model for a multi-label, multi-class (ML-MC) classification task, selecting RoBERTa for its strong contextual representation capabilities and demonstrated superiority in complex NLP tasks. Our approach is rigorously benchmarked against traditional and neural methods, including TF-IDF with n-grams, Conditional Random Fields (CRFs), and long short-term memory (LSTM) networks. While LSTM models show strong performance in capturing sequential patterns, our RoBERTa-based model achieves the highest overall accuracy at 88%, outperforming state-of-the-art baselines. Framed within the diffusion of innovations theory, the proposed model offers clear relative advantages, including accuracy, scalability, and contextual adaptability, that support its early adoption by Information Systems researchers and practitioners. This study not only contributes a high-performing detection model but also delivers methodological and theoretical insights for combating propaganda in digital discourse, enhancing resilience in online information ecosystems.
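What separates this multi-label setup from ordinary classification is the output layer: each category gets an independent sigmoid and a binary cross-entropy term rather than one softmax. A sketch under assumed category names (the paper's label inventory is not given in the abstract):

```python
# Multi-label fine-tuning head: independent sigmoid per category with
# binary cross-entropy rather than a single softmax. The three category
# names are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["loaded_language", "fear_appeal", "scapegoating"]   # hypothetical
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(labels),
    problem_type="multi_label_classification")   # switches the loss to BCE

batch = tok(["They are to blame for everything we have lost!"],
            return_tensors="pt")
target = torch.tensor([[0., 1., 1.]])            # a post can carry several labels
loss = model(**batch, labels=target).loss        # BCEWithLogitsLoss internally
probs = torch.sigmoid(model(**batch).logits)     # independent per-label scores
print(loss.item(), probs)
```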
This article conceptualizes the emerging phenomenon of ‘influencer propaganda’, which we define as the various persuasive, strategic communicative actions by social media influencers that promote political and ideological agendas through popular content and emotional appeals, with the intent to affect behaviour and belief among their followers. The current discussion of such practices frequently reproduces a dichotomy of ‘illegitimate’ vs ‘legitimate’ political communication. In contrast to this, we propose viewing influencer propaganda as a near-global political activity that can be found in both domestic and external spheres, regardless of political system and national context. By illustrating this phenomenon with observations from China, we argue that influencer propaganda in China and beyond is not solely dictated by traditional political actors but is instead negotiated through the interplay of political, corporate, and personal interests, all mediated by the affordances of digital platforms and embodied in the influencers’ digital performances.
Propaganda detection on social media remains challenging due to task complexity and limited high-quality labeled data. This paper introduces a novel framework that combines human expertise with Large Language Model (LLM) assistance to improve both annotation consistency and scalability. We propose a hierarchical taxonomy that organizes 14 fine-grained propaganda techniques into three broader categories, conduct a human annotation study on the HQP dataset that reveals low inter-annotator agreement for fine-grained labels, and implement an LLM-assisted pre-annotation pipeline that extracts propagandistic spans, generates concise explanations, and assigns local labels as well as a global label. A secondary human verification study shows significant improvements in both agreement and time-efficiency. Building on this, we fine-tune smaller language models (SLMs) to perform structured annotation. Instead of fine-tuning on human annotations, we train on high-quality LLM-generated data, allowing a large model to produce these annotations and a smaller model to learn to generate them via knowledge distillation. Our work contributes towards the development of scalable and robust propaganda detection systems, supporting the idea of transparent and accountable media ecosystems in line with SDG 16. The code is publicly available at our GitHub repository.
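The pre-annotation pipeline implies a structured record per post: extracted spans, a concise explanation for each, local technique labels, and one global label, later passed through human verification. A hedged sketch of such a record; all field names and example values are invented, not the HQP schema:

```python
# Shape of an LLM pre-annotation record as implied by the pipeline:
# spans + explanations + local labels + one global label, later checked
# by a human verifier. All field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SpanAnnotation:
    text: str                 # the propagandistic span itself
    technique: str            # fine-grained local label
    explanation: str          # concise LLM-generated rationale

@dataclass
class PostAnnotation:
    post: str
    spans: list[SpanAnnotation] = field(default_factory=list)
    global_label: str = "non-propaganda"
    human_verified: bool = False      # flipped during the verification study

record = PostAnnotation(
    post="Only a fool would trust them. Real patriots know the truth!",
    spans=[SpanAnnotation("Only a fool would trust them", "name-calling",
                          "Insults opponents instead of engaging arguments")],
    global_label="propaganda")
print(record)
```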
Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we conducted interviews with defenders (e.g., fact-checkers, journalists, officials) in Taiwan and creators (e.g., influencers, political consultants, advertisers) as well as defenders in India, centering two places characterized by high levels of online propaganda. The term "deepfakes", we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions that are prioritized. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes between obvious versus hidden and promotional versus derogatory use. Deception was neither the main driver nor the main impact vector of AI's use; instead, Indian creators sought to persuade rather than to deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI was useful and used, however, in producing efficiency gains in communicating across languages and modes, and in evading human and algorithmic detection. Security researchers should reconsider threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.
When faced with unfolding protests, autocrats frequently respond with anti-protest propaganda loaded with negative narratives about protesters. Although a substantial body of literature has suggested that anti-protest propaganda can effectively alter the way the public views protests, few researchers have examined the mechanism through which propaganda negatively affects public support for protests. In this article, the authors explain the role that anti-protest propaganda plays in weakening public support for protests. Using an innovative experiment involving mediation analysis, the authors administered a survey to 950 Vietnamese respondents. The experimental results showed that anti-protest propaganda may deter support for protests more by influencing the audience's beliefs about the intention and capacity of the government than by shaping perceptions of the protesters' legitimacy. This evidence suggests that even when it fails at discrediting protesters, anti-protest propaganda still serves as an effective warning, credibly signaling the commitment and ability of the government to punish protesters and their supporters.
Propaganda is a form of persuasion that has been used throughout history with the goal of influencing people's opinions through rhetorical and psychological persuasion techniques for determined ends. Although Arabic ranks as the fourth most-used language on the internet, resources for propaganda detection in languages other than English, especially Arabic, remain extremely limited. To address this gap, the first Arabic dataset for Multi-label Propaganda, Sentiment, and Emotion (MultiProSE) has been introduced. MultiProSE is an open-source extension of the existing Arabic propaganda dataset, ArPro, with the addition of sentiment and emotion annotations for each text. This dataset comprises 8,000 annotated news articles, the largest propaganda dataset to date. For each task, several baselines have been developed using large language models (LLMs), such as GPT-4o-mini, and pre-trained language models (PLMs), including three BERT-based models. The dataset, annotation guidelines, and source code are all publicly released to facilitate future research and development in Arabic language models and to contribute to a deeper understanding of how various opinion dimensions interact in news media.
In today’s media landscape, propaganda distribution has a significant impact on society. It sows confusion, undermines democratic processes, and makes decision-making increasingly difficult for news readers. We investigate the lasting effect of using a propaganda detection and contextualization tool on readers’ critical thinking and propaganda awareness. Building on inoculation theory, which suggests that preemptively exposing individuals to weakened forms of propaganda can improve their resilience against it, we integrate Kahneman’s dual-system theory to measure the tool’s impact on critical thinking. Through a two-phase online experiment, we measure the effect of several inoculation doses. Our findings show that while the tool increases critical thinking during its use, this increase vanishes without access to the tool, indicating that a single use of the tool does not create a lasting impact. We discuss the implications and propose possible approaches to improving long-term resilience against propaganda.
False propaganda, as one of the unfair competition behaviors, seriously damages consumer rights and disrupts market order. This article designs a false propaganda detection technology based on an LLM and BERT to address the dishonest behavior that false propaganda represents. By annotating false propaganda text data with an LLM and training a BERT-based detection model on those annotations, the approach enables rapid model development and substantial savings in manual annotation costs. The results show that this method offers good performance and efficiency and can meet practical engineering needs. It has application and reference value for model development in market supervision areas such as false propaganda detection.
No abstract available
The dissemination of disinformation has become a formidable weapon, with nation-states exploiting social media platforms to engineer narratives favorable to their geopolitical interests. This study delved into Russia’s orchestrated disinformation campaign across three periods of the 2022 Russia-Ukraine War: the incursion, its midpoint, and the Ukrainian Kherson counteroffensive. These periods were marked by a sophisticated blend of bot-driven strategies to mold online discourse. Utilizing a dataset derived from Twitter, the research examines how Russia leveraged automated agents to advance its political narrative, shedding light on the global implications of such digital warfare and the swift emergence of counter-narratives to thwart the disinformation campaign. This paper introduces a methodological framework that adopts a multiple-analysis model approach, initially harnessing unsupervised learning techniques, with TweetBERT for topic modeling, to dissect disinformation dissemination within the dataset. Utilizing Moral Foundation Theory and the BEND Framework, this paper dissects social-cyber interactions in maneuver warfare, thereby understanding the evolution of bot tactics employed by Russia and its counterparts within the Russia-Ukraine crisis. The findings highlight the instrumental role of bots in amplifying political narratives and manipulating public opinion, with distinct strategies in narrative and community maneuvers identified through the BEND framework. Moral Foundation Theory reveals how moral justifications were embedded in these narratives, showcasing the complexity of digital propaganda and its impact on public perception and geopolitical dynamics. The study shows how pro-Russian bots were used to foster a narrative of protection and necessity, thereby seeking to legitimize Russia’s actions in Ukraine whilst degrading both NATO and Ukraine’s actions. Simultaneously, the study explores the resilient counter-narratives of pro-Ukraine forces, revealing their strategic use of social media platforms to counteract Russian disinformation, foster global solidarity, and uphold democratic narratives. These efforts highlight the emerging role of social media as a digital battleground for narrative supremacy, where both sides leverage information warfare tactics to sway public opinion.
No abstract available
AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter opposing arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature.
In the past decade, social media platforms have been used for information dissemination and consumption. While a major portion of the content is posted to promote citizen journalism and public awareness, some content is posted to mislead users. Among different content types such as text, images, and videos, memes (text overlaid on images) are particularly prevalent and can serve as powerful vehicles for propaganda, hate, and humor. In the current literature, there have been efforts to individually detect such content in memes. However, the study of their intersection is very limited. In this study, we explore the intersection between propaganda and hate in memes using a multi-agent LLM-based approach. We extend the propagandistic meme dataset with coarse and fine-grained hate labels. Our finding suggests that there is an association between propaganda and hate in memes. We provide detailed experimental results that can serve as a baseline for future studies. We will make the experimental resources publicly available to the community (https://github.com/firojalam/propaganda-and-hateful-memes).
The proliferation of media channels as a result of the information age has ushered in a new era of communication and access to information. However, this increased accessibility has also opened up new avenues for propaganda and the manipulation of public opinion. With the recent release of OpenAI's artificial intelligence chatbot, ChatGPT, users and the media are increasingly discovering and reporting on its range of novel capabilities. The most notable of these, such as answering technical questions, stem from its ability to perform advanced natural language processing and text generation. In this paper, we aim to assess the feasibility of using the underlying technology behind ChatGPT, Large Language Models (LLMs), to detect features of propaganda in news articles. The features we consider leverage the work of Martino et al., who define a list of 18 distinct propaganda techniques. For example, they outline the 'straw man' technique, which refers to the use of 'refuting an argument that was not presented' [1]. Based on these techniques, we develop a refined prompt that is coupled with news articles from Russia Today (RT), a prominent state-controlled news network, and from the labelled SemEval-2020 Task 11 dataset [2]. The prompt and article content are then sent to OpenAI’s gpt-3.5-turbo model to determine which propaganda techniques are present and to make a final judgement on whether the article is propaganda or not. We then qualitatively analyse a subset of the resulting output to determine whether LLMs can be used effectively in this way. With the results of the study, we aim to uncover whether such technologies show promise in detecting propaganda, and what sort of prompts lead to the most useful output. This has the potential to be useful for media consumers, for example, who could use our prompts to detect signs of propaganda in the articles they read.
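A condensed sketch of this prompting setup using the current OpenAI Python SDK; the prompt wording below is an illustrative paraphrase, not the authors' refined prompt:

```python
# Asking a chat model which propaganda techniques appear in an article,
# in the spirit of the setup described above. The prompt text is an
# illustrative paraphrase, not the authors' refined prompt.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

article = "..."     # article body from RT or SemEval-2020 Task 11
prompt = (
    "You are given a list of 18 propaganda techniques (e.g. 'straw man', "
    "'loaded language', 'flag-waving'). Identify which techniques, if any, "
    "appear in the article below, then give a final judgement: "
    "propaganda or not.\n\nArticle:\n" + article
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,              # deterministic output aids qualitative review
)
print(resp.choices[0].message.content)
```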
Since Lasswell, propaganda has been considered one of the three chief implements of warfare, along with military and economic pressure. Russia’s invasion of Ukraine revives public and scholarly interest in war propaganda. The Russian political leader frames the war as an imperial war. The Ukrainian political leader frames it as a war of national liberation. The discursive battle thus complements the military combat. The outcome of the discursive combat depends on the effectiveness of propaganda deployed by the parties involved. Propaganda effectiveness is defined as the propagation of war-related messages stated by political leaders through various media with no or few distortions. The effectiveness of propaganda is compared (1) across countries, with a particular focus on the two belligerents, Russia and Ukraine, (2) as a function of the medium (mass media, digital media), and (3) using two different methods (content analysis and survey research). Data were collected during the first year of the large-scale invasion (February 2022 to February 2023). Survey data allowed measuring the degree of the target audience’s agreement with key propagated messages.
‘Telling China’s Story Well’ as propaganda campaign slogan: International, domestic and the pandemic
The article critically examines ‘Telling China’s Story Well’ (TCSW), a popular propaganda campaign slogan proposed by Chinese President Xi Jinping in 2013. Drawing on theories about storytelling and propaganda and using COVID-19 as a contextualised example, the paper discusses how the slogan was adapted into ‘Telling China’s Anti-pandemic Story Well’ to mobilise domestic and external propaganda of the Chinese Communist Party (CCP) during the pandemic. We argue that TCSW should be understood as a well-crafted political watchword which promotes and commands strategic narratives of doing propaganda. It has the rhetorical power to integrate and reinvigorate domestic and external propaganda, and to facilitate their convergence. Adapting this slogan to mobilise propaganda campaigns of national or global importance and interest demonstrates the CCP’s ambition to harness strategic storytelling to improve the coherence, effectiveness and reputation of its propaganda at home and abroad.
Propaganda plays a critical role in shaping public opinion and fueling disinformation. While existing research primarily focuses on identifying propaganda techniques, it lacks the ability to capture the broader motives and the impacts of such content. To address these challenges, we introduce PropaInsight, a conceptual framework grounded in foundational social science research, which systematically dissects propaganda into techniques, arousal appeals, and underlying intent. PropaInsight offers a more granular understanding of how propaganda operates across different contexts. Additionally, we present PropaGaze, a novel dataset that combines human-annotated data with high-quality synthetic data generated through a meticulously designed pipeline. Our experiments show that off-the-shelf LLMs struggle with propaganda analysis, but training with PropaGaze significantly improves performance. Fine-tuned Llama-7B-Chat achieves 203.4% higher text span IoU in technique identification and 66.2% higher BertScore in appeal analysis compared to 1-shot GPT-4-Turbo. Moreover, PropaGaze complements limited human-annotated data in data-sparse and cross-domain scenarios, showing its potential for comprehensive and generalizable propaganda analysis.
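The headline metric, text span IoU, is overlap length over union length of the predicted and gold character spans. A minimal reference implementation, assuming half-open [start, end) spans, a convention the abstract does not specify:

```python
# Character-level IoU between a predicted and a gold span, the metric
# behind the reported 203.4% improvement. Half-open [start, end) spans
# are an assumption; the paper may define boundaries differently.
def span_iou(pred: tuple[int, int], gold: tuple[int, int]) -> float:
    inter = max(0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union else 0.0

print(span_iou((10, 30), (20, 40)))   # 10 / 30 ≈ 0.333
print(span_iou((10, 30), (10, 30)))   # perfect match -> 1.0
```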
No abstract available
This paper examines Russia’s propaganda discourse on Twitter during the 2022 invasion of Ukraine. The study employs network analysis, natural language processing (NLP) techniques, and qualitative analysis to identify key communities and narratives associated with the prevalent and damaging narrative of “fascism/Nazism” in discussions related to the invasion. The paper implements a methodological pipeline to identify the main topics, and influential actors, as well as to examine the most impactful messages in spreading this disinformation narrative. Overall, this research contributes to the understanding of propaganda dissemination on social media platforms and provides insights into the narratives and communities involved in spreading disinformation during the invasion.
The proliferation of bias and propaganda on social media is an increasingly significant concern, leading to the development of techniques for automatic detection. This article presents a multilingual corpus of 12,000 Facebook posts fully annotated for bias and propaganda. The corpus was created as part of the FigNews 2024 Shared Task on News Media Narratives for framing the Israeli War on Gaza. It covers various events during the War from October 7, 2023 to January 31, 2024. The corpus comprises 12,000 posts in five languages (Arabic, Hebrew, English, French, and Hindi), with 2,400 posts for each language. The annotation process involved 10 graduate students specializing in Law. The Inter-Annotator Agreement (IAA) was used to evaluate the annotations of the corpus, with an average IAA of 80.8% for bias and 70.15% for propaganda annotations. Our team was ranked among the best-performing teams in both Bias and Propaganda subtasks. The corpus is open-source and available at https://sina.birzeit.edu/fada
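The abstract reports average inter-annotator agreement without naming the coefficient. A common pairwise choice is Cohen's kappa, sketched here on toy labels (the actual annotations live at the linked URL):

```python
# Pairwise inter-annotator agreement via Cohen's kappa; which IAA
# coefficient the corpus actually uses is not stated in the abstract.
# The two annotators' labels below are toy values.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["biased", "unbiased", "biased", "biased", "unbiased"]
annotator_b = ["biased", "unbiased", "unbiased", "biased", "unbiased"]
print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 = perfect agreement
```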
This narrative literature review explores how social media can be used to construct political images through visual communication, synthesizing 52 peer-reviewed materials (2008-2025). Modern political communication is dominated by visual content, including images, videos, and memes, which triggers 3.2 times more interactions than text among 5.2 billion users. Platforms such as Instagram (1.8-3.2% engagement), X (memes), TikTok (youth mobilization), and Facebook (reach) facilitate algorithm-enhanced, personalized image creation. Semiotics (polysemous signs), visual framing/image-bite politics (emotional encoding), and political branding/personalization theories are integrated as the theoretical background. Case studies include the 2024 Indian election (Modi: ₹6.61Cr in visual adverts, 100M+ followers) and US elections (Fetterman authenticity, Trump spectacle), highlighting platform-specific strategies and cross-cultural behavioural traits. Positive impacts include voter mobilization (43 per cent vote influence among Indian youth), parasocial bonding, and heuristic decision-making. Negative challenges include visual misinformation (18% deepfakes), polarization (X: +34%), and algorithmic echo chambers. India-Western differences emphasize the potency of cultural symbols (85% vs 60%).
No abstract available
Based on a visual, verbal and aural quantitative and qualitative content analysis of the 62 execution videos produced by the Islamic State of Iraq and the Levant (ISIL) during its first year of existence (2014–2015), the aim of this research is to further the understanding of the inherent nature of the narratives spread by ISIL execution videos and to which audience(s) they are targeted. The authors adopt a bottom-up systematic approach of coding based on grounded theory to process visual and aural communication data as well as verbal communication of more than seven hours of ISIL hostage execution videos. In so doing, this research contributes to the understanding of multimodal communication interactions and the role of their discrepancies in framing fundamentalist ideologies. Moreover, the study adds perspective to previous research on Jihadist visual communication and audience studies. The results demonstrate how hostage execution videos discourse relies on ‘framing packages’ linked to values, norms and archetypes to create a recurrent and coherent organizational narrative aimed at segmenting ISIL’s audiences.
An empirical study of the visual political communication via Telegram channels during the political crisis in Belarus in 2020 is introduced. The method of qualitative and quantitative content analysis was applied to 625 images and videos retrieved from competing Belarusian Telegram channels during two weeks before and after the election day of August 9, 2020. The results revealed both the general characteristics of the visual political communication in Belarusian Telegram sector and the way opposing political forces used visual content in the period under analysis. The pro-government channels maintained a value-oriented approach, engaging visuals to broadcast Belarusian values and state power symbols. Their visual content involved prominent public and political figures. The opposition channels applied visuals to denounce the authorities and their supporters, as well as to mobilize the opposition-minded Telegram users. They employed a greater variety of visual forms, relying mostly on user-generated content, e.g., live reports from protest sites. Visuals with protesting crowds, casualties, and opposition symbols were aimed at evoking a sense of involvement: they were meant to become triggers for transiting from online activism to actual offline protests.
Digital systems for analyzing human communication data have become prevalent in recent years. This may be related to the increasing abundance of data that can be harnessed but can hardly be managed manually. Intelligence analysis of communications data in investigative journalism, criminal intelligence, and law presents particularly interesting cases, as these domains must take into account the often highly sensitive properties of the underlying operations and data. At the same time, these are areas where increasingly automated, sophisticated approaches and tailored systems can be particularly useful and relevant, especially in terms of Big Data manageability. However, the shifting of responsibilities also poses dangers. In addition to privacy concerns, these dangers relate to uncertain or poor data quality, leading to discrimination and potentially misleading insights. Other problems relate to a lack of transparency and traceability, making it difficult to accurately identify problems and determine appropriate remedial strategies. Visual analytics combines machine learning methods with interactive visual interfaces to enable human sense- and decision-making. This technique can be key for designing and operating meaningful interactive communication analysis systems that consider these ethical challenges. In this interdisciplinary work, a joint endeavor of computer scientists, ethicists, and scholars in Science & Technology Studies, we investigate and evaluate opportunities and risks involved in using visual analytics approaches for communication analysis in intelligence applications in particular. We first introduce the common technological systems used in communication analysis, with a special focus on intelligence analysis in criminal investigations, further discussing the domain-specific ethical implications, tensions, and risks involved. We then make the case for how tailored visual analytics approaches may reduce and mitigate the described problems, both theoretically and through practical examples. Offering interactive analysis capabilities and what-if explorations while facilitating guidance, provenance generation, and bias awareness (through nudges, for example) can improve analysts’ understanding of their data, increasing trustworthiness and accountability, and generating knowledge. We show that finding visual analytics design solutions for ethical issues is not a mere optimization task with an ideal final solution. Design solutions for specific ethical problems (e.g., privacy) often trigger new ethical issues (e.g., accountability) in other areas. Balancing and negotiating these trade-offs must, as we argue, be an integral aspect of the system design process from the outset. Finally, our work identifies existing gaps and highlights research opportunities, further describing how our results can be transferred to other domains. With this contribution, we aim to inform more ethically aware approaches to communication analysis in intelligence operations.
Politicians’ reticence to communicate their views clearly increases the information asymmetry between them and the electorate. This study tested the potential of subtle ideological cues to redress the balance. By spotlighting visual rather than the already much-examined verbal cues, we sought to contribute to building theory on cue effects. Specifically, we aimed to determine whether the effects from the literature on verbal cues could also be shown for visual ones. We used an experiment (N = 361) to test the effects of subtle backdrop cues (SBCs), that is, of visual cues to ideology embedded in the background of political images. We manipulated photos of a fictitious politician to include liberal or conservative SBCs. We embedded these images in Twitter posts and tested whether they influenced perceptions of the politician’s ideology and the intention to vote for him. We analyzed the relationship between exposure to SBCs, the politician’s perceived political ideology, and voting intention—including the study of conditional effects elicited by cue awareness and ideological consistency between the depicted politician and participant. The conditional process analysis suggested that SBCs mattered, as they influenced citizens’ perceptions of a politician’s political ideology, and consequently, voting intention. These effects were moderated by cue awareness and ideological consistency. We concluded that SBCs can elicit substantial effects and that their use by politicians is paying off.
No abstract available
Even though misinformation, disinformation, and fake news are not new phenomena, they have received renewed interest since political events such as Brexit and the 2016 U.S. Presidential elections. The resulting sharp increase in scholarly publications bears the risk of lack of overview, fragmentation across disciplines, and ultimately a lack of research cumulativity. To counteract these risks, we have performed a systematic research review of 1261 journal articles published between 2010 and 2021. Results show the field is mostly data-driven, frequently investigating the prevalence, dissemination, detection or characteristics of misinformation, disinformation, and fake news. There further are clear foci concerning contributing disciplines, methodologies, and data usage. Building on our results, we identify several research gaps and suggest avenues for future research.
This research examines the spread of disinformation on social media platforms and its impact on state resilience through a systematic literature review of 150 peer-reviewed studies published between 2014 and 2024. The analysis revealed that disinformation spreads six times faster than accurate information, with emotions and platform algorithms playing a significant role in its spread. Factors such as low digital literacy, political polarization, and declining trust in institutions increase people’s vulnerability to disinformation. Impacts on national security include threats to the integrity of democratic processes, the erosion of social cohesion, and decreased public trust. The most effective coping strategies include improving digital literacy (78 percent effective), fact-checking (65 percent), and content regulation (59 percent). However, these efforts face ethical and legal challenges, especially regarding freedom of expression. This research highlights the need for a multidimensional approach in addressing the “information pandemic”, integrating technological, educational, and policy strategies while considering ethical implications. The findings provide a foundation for further policy development and research to protect the integrity of public information spaces and state resilience in the digital age.
In recent years, the world has witnessed a global outbreak of fake news, propaganda and disinformation (FNPD) flows on online social networks (OSN). In the context of information warfare and the capabilities of generative AI, FNPDs have proliferated. They have become a powerful and quite effective tool for influencing people’s social identities, attitudes, opinions and even behavior. Ad hoc malicious social media accounts and organized networks of trolls and bots target countries, societies, social groups, political campaigns and individuals. As a result, conspiracy theories, echo chambers, filter bubbles and other processes of fragmentation and marginalization are polarizing, radicalizing, and disintegrating society in terms of coherent politics, governance, and social networks of trust and cooperation. This systematic review aims to explore advances in using machine and deep learning to detect FNPD in OSNs effectively. We present the results of a combined PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) review in three analysis domains: 1) propagators (authors, trolls, and bots), 2) textual content, 3) social impact. This systemic research framework integrates meta-analyses of three research domains, providing an overview of the wider research field and revealing important relationships between these research domains. It not only addresses the most promising ML/DL research methodologies and hybrid approaches in each domain, but also provides perspectives and insights on future research directions.
Scientific disinformation has emerged as a critical challenge at the interface of science and society. This paper examines how false or misleading scientific content proliferates across both social media and traditional media and evaluates strategies to counteract its spread. We conducted a comprehensive literature review of research on scientific misinformation across disciplines and regions, with particular focus on climate change and public health as exemplars. Our findings indicate that social media algorithms and user dynamics can amplify false scientific claims, as seen in case studies of viral misinformation campaigns on vaccines and climate change. Traditional media, meanwhile, are not immune to spreading inaccuracies—journalistic practices such as sensationalism or “false balance” in reporting have at times distorted scientific facts, impacting public understanding. We review efforts to fight disinformation, including technological tools for detection, the application of inoculation theory and prebunking techniques, and collaborative approaches that bridge scientists and journalists. To empower individuals, we propose practical guidelines for critically evaluating scientific information sources and emphasize the importance of digital and scientific literacy. Finally, we discuss methods to quantify the prevalence and impact of scientific disinformation—ranging from social network analysis to surveys of public belief—and compare trends across regions and scientific domains. Our results underscore that combating scientific disinformation requires an interdisciplinary, multi-pronged approach, combining improvements in science communication, education, and policy. We conducted a scoping review of 85 open-access studies focused on climate-related misinformation and disinformation, selected through a systematic screening process based on PRISMA criteria. This approach was chosen to address the lack of comprehensive mappings that synthesize key themes and identify research gaps in this fast-growing field. The analysis classified the literature into 17 thematic clusters, highlighting key trends, gaps, and emerging challenges in the field. Our results reveal a strong dominance of studies centered on social media amplification, political denialism, and cognitive inoculation strategies, while underlining a lack of research on fact-checking mechanisms and non-Western contexts. We conclude with recommendations for strengthening the resilience of both the public and information ecosystems against the spread of false scientific claims.
This paper explores the European Union's multifaceted response to the pervasive issue of disinformation, a challenge that has intensified since the annexation of Crimea in 2014. Disinformation poses significant threats to democratic processes and public welfare. The European Union's approach combines regulatory measures, strategic partnerships, and media literacy initiatives to address this phenomenon while safeguarding core democratic principles, such as freedom of expression. Key measures include the Code of Practice on Disinformation and the Digital Services Act, which aim to hold digital platforms accountable and ensure transparency. Furthermore, initiatives such as the East StratCom Task Force and the Rapid Alert System highlight the European Union's efforts to counter disinformation as a tool of hybrid warfare. This paper also emphasizes the critical role of citizens, whom the European Union seeks to empower through media literacy programs, enabling them to recognize and resist manipulative content. By examining the interactions between government actions, private sector involvement, and citizen engagement, this study provides a comprehensive analysis of the European Union's strategy against disinformation and assesses the challenges and future directions necessary to sustain democratic resilience in an evolving digital landscape. Key Points for Practitioners: a comprehensive analysis of the EU's strategy against disinformation; effective instruments that aim to hold digital platforms accountable and ensure transparency; and the challenges and future directions necessary to sustain democratic resilience in an evolving digital landscape.
ABSTRACT The COVID-19 pandemic laid bare the unpreparedness of global and public health systems to respond to large-scale health crises, while simultaneously revealing the entangled nature of disinformation and poor global and public health outcomes. This research challenges the common treatment of public health disinformation – deliberately false information – as an emergent and technical threat, and instead situates it as a more systemic and nuanced challenge for global health governance to address. This article presents an integrative narrative literature review on the interlinkages between public health disinformation, conflict, and disease outbreaks, demonstrating mutually influencing connections between them. In doing so, the analysis raises critical questions around how reactive responses, such as doubling down on information authority, can paradoxically fuel the uptake of disinformation, especially amidst global trends towards increasing conflict and decreasing cooperation. In this evolving sociopolitical landscape for global health, the discussion explores the potential to harness health diplomacy to strengthen critical public engagement and deliberation. This reimagined approach to health diplomacy offers pathways to mitigate the harmful effects of disinformation rather than seeking to eliminate false information. This article deepens understanding of this rapidly expanding topic for global and public health in two ways: first, by investigating the root causes and impacts of public health disinformation that intersect with conflict; second, by exploring how health diplomacy can foster cooperative global health governance through transparency and inclusion. This research offers a new direction for strengthening preparedness for future global and public health crises amidst disinformation. Paper Context. Main findings: this review demonstrated that disinformation has long been used as a tool to advance political goals, and public health is one of the most relevant arenas where actors apply strategies to disrupt and destabilise; measures to combat public health disinformation that overlook sociopolitical dimensions may paradoxically fuel this systemic challenge to global health. Added knowledge: the article investigates the root causes and impacts of public health disinformation that intersect with conflict, and explores how health diplomacy can foster cooperative global and public health governance through transparency and inclusion. Global health impact for policy and action: the article explored how health diplomacy could be leveraged to mitigate the impacts of disinformation – namely, the breakdown of systems of information and trust – through improving mechanisms for transparency and inclusion in global and public health governance.
Abstract The rise of AI-driven technologies has created new avenues for disinformation and poses a significant security threat. This study uses the Delphi method to analyse the emerging training needs and skills required to combat AI-driven disinformation. The experts emphasise that the public’s and policy makers’ understanding of AI-driven disinformation is still very fragmentary. They highlight the need for improved digital competences and skills, updated educational frameworks and continuous training through micro-credentials. In addition, the experts point out the importance of integrating non-technological skills such as geopolitical awareness and historical analysis into the digital skills framework. The findings emphasise the importance of a comprehensive, interdisciplinary approach to learning that incorporates both technical and critical thinking skills to counter the evolving landscape of disinformation. The study recommends proactive measures, including the development of early detection systems for new technologies and the implementation of flexible, modular learning opportunities, to ensure that professionals and citizens can adapt to emerging threats.
The COVID-19 pandemic has highlighted the powerful influence of social media in shaping public opinion, particularly in spreading vaccine misinformation. This study investigates the dynamics of disinformation during the pandemic, focusing on its unprecedented scale and rapid proliferation, as well as its implications for political stability and public trust. It emphasizes the crucial role of Information Systems (IS) in addressing these challenges, utilizing advanced technologies such as algorithms, data analytics, and artificial intelligence for real-time tracking, analysis, and countermeasures against misinformation. The study employs a data analytics approach to analyze over 2 million vaccine-related tweets, classifying them as either misinformation or reliable information. The findings reveal that misinformation spikes often coincide with periods of public uncertainty, while central nodes—highly connected social media users—play a key role in amplifying both misinformation and reliable content. Despite the challenges, users tend to engage more with trustworthy information, offering opportunities to amplify factual content. By examining the influence of disinformation cascades and reviewing existing counter strategies, this study aims to illuminate the complexities of managing misinformation in today’s digital landscape. Furthermore, it underscores the need for an integrated, multidisciplinary approach that combines insights from Political Science and IS. This approach fosters media literacy, enhances transparency, and promotes trust in credible sources, thereby strengthening the resilience of information ecosystems against disinformation threats.
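The network side of such an analysis is easy to prototype. A minimal sketch with networkx on invented retweet edges, locating the highly connected "central nodes" the study describes:

```python
# Toy retweet graph; an edge u -> v means u retweeted v, so high
# in-degree marks the highly connected amplifiers described above.
import networkx as nx

retweets = [("a", "c"), ("b", "c"), ("d", "c"), ("e", "c"), ("a", "b")]
G = nx.DiGraph(retweets)

centrality = nx.in_degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:3]
print(hubs)  # candidate "central nodes" to monitor or amplify
```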
ABSTRACT The rise of political disinformation poses a substantial threat to democratic societies, with the potential to erode well-informed decision-making and hinder effective policy formulation. While existing research typically concentrates on experimental exposures to fact-checking, a significant research gap remains regarding how the integration of fact-checking into daily news consumption routines contributes to self-perceived levels of disinformation identification. This study addresses this gap by examining the role of various media consumption platforms in facilitating fact-checking practices and their potential to mitigate misperceptions. Findings from a two-wave panel survey in Spain (N = 570) suggest that news consumption alone does not directly improve self-perceived levels of disinformation identification. Instead, it indirectly facilitates recognition by encouraging the adoption of fact-checking practices. The study concludes that citizens’ self-perceived ability to identify disinformation is enhanced when news consumption across platforms is complemented by fact-checking practices.
With the United States being the target of sustained disinformation campaigns from its adversaries in recent years, it is clear that national security and policy formulation have been saddled with a different kind of challenge: one that attacks the fabric of U.S. society by directly influencing the perceptions and actions of Americans. This review clarifies the relevant information terminology, traces the origins of disinformation, and examines its implications for national institutions and national security. The paper situates disinformation within the context of Information Warfare and Cognitive Warfare. Policymaking to counter disinformation must be deliberate and sustained, involving all stakeholders in American society, and must avoid overconcentrating on foreign actors while internal elements cause deep divisions and continue to alienate segments of society. This balanced focus on internal and external threats will enhance the United States’ chances of winning this war. Keywords: Disinformation, National Security, Information Warfare, Cognitive Warfare
The essence of this publication is an attempt to characterize the term fake news in the social media environment. Social media platforms are commercial entities that are difficult to control and focused on user-generated content. They therefore constitute an effective space for spreading disinformation, both intentional and unintentional. The definitions of fake news, their types and examples were analyzed, and a case study was conducted in relation to two events that are important both geopolitically and in media terms: the war in Ukraine and the presidential elections in the USA. The final chapter proposes activities that help readers differentiate truth from false information and develop their own media competencies.
Fake news (i.e., false news created with malicious intent and a high capacity for dissemination) is a problem of great interest to society today, since it has achieved unprecedented political, economic, and social impacts. Taking advantage of modern digital communication and information technologies, it is widely propagated through social media; its use is intentional and challenging to identify. In order to mitigate the damage caused by fake news, researchers have been developing automated detection mechanisms, such as algorithms based on machine learning, along with the datasets employed in this development. This research aims to analyze the machine learning algorithms, and the datasets used to train them, for identifying fake news, as published in the literature. It is exploratory research with a qualitative approach, which uses a research protocol to identify and analyze relevant studies. The best-performing approaches were the Stacking Method, the Bidirectional Recurrent Neural Network (BiRNN), and the Convolutional Neural Network (CNN), with 99.9%, 99.8%, and 99.8% accuracy, respectively. Although this accuracy is impressive, most of the research employed datasets from controlled environments (e.g., Kaggle) or without information updated in real time (from social networks). Only a few studies have been applied in social network environments, where the most significant dissemination of disinformation occurs nowadays. Kaggle was the platform with the most frequently used datasets, followed by Weibo, FNC-1, COVID-19 Fake News, and Twitter. Future studies should extend beyond political news, the area that has been the primary driver of research growth since 2017, and should explore hybrid methods for identifying fake news.
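As an illustration of the top-performing family reported here, the following is a minimal stacking-ensemble sketch in scikit-learn; the toy corpus and labels are placeholders for a real dataset such as those from Kaggle:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for a labeled fake-news dataset (1 = fake).
texts = [
    "miracle pill melts fat overnight", "aliens endorse the candidate",
    "secret cure doctors refuse to share", "parliament passes budget bill",
    "court publishes ruling on appeal", "central bank holds interest rates",
]
labels = [1, 1, 1, 0, 0, 0]

stack = StackingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
    cv=2,
)
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(texts, labels)
print(model.predict(["shocking remedy the elites hide"]))
```

The meta-learner sees only the base models' out-of-fold predictions, which is what distinguishes stacking from simple voting.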
The COVID-19 pandemic has impacted every human activity and, because of the urgency of finding proper responses to such an unprecedented emergency, it generated a widespread societal debate. The online version of this discussion was not exempt from misinformation campaigns but, differently from other debates, the flow of false information about COVID-19, intentional or not, put public health at severe risk, possibly reducing the efficacy of government countermeasures. In this manuscript, we study the effective impact of misinformation in the Italian societal debate on Twitter during the pandemic, focusing on the various discursive communities. To extract such communities, we start from verified users, i.e., accounts whose identity is officially certified by Twitter. For each pair of verified users, we count how many unverified ones interacted with both of them via tweets or retweets: if this number is statistically significant, i.e., so great that it cannot be explained only by their activity on the online social network, we consider the two verified accounts similar and connect them in a monopartite network of verified users. The discursive communities can then be found by running a community detection algorithm on this network. We observe that, despite being a mostly scientific subject, the COVID-19 discussion shows a clear division into what turn out to be different political groups. We filter the network of retweets from random noise and check for messages displaying URLs. Using the well-known browser extension NewsGuard, we assess the trustworthiness of the most recurrent news sites among those tweeted by the political groups. The share of low-reputability posts reaches 22.1% in the right and center-right wing community, and its contribution is even stronger in absolute numbers due to this group’s activity: 96% of all non-reputable URLs shared by political groups come from this community.
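The projection procedure described here can be sketched in a few lines. The significance test is replaced below with a crude co-interaction threshold, so this is an illustration of the pipeline's shape, not the paper's actual statistical validation:

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy bipartite data: unverified account -> verified accounts it engaged.
interactions = {
    "u1": {"v1", "v2"}, "u2": {"v1", "v2"}, "u3": {"v1", "v2"},
    "u4": {"v3", "v4"}, "u5": {"v3", "v4"}, "u6": {"v2", "v3"},
}
verified = sorted({v for vs in interactions.values() for v in vs})

# Project onto verified users: link a pair when enough unverified users
# engaged with both (a crude stand-in for the paper's statistical test).
G = nx.Graph()
G.add_nodes_from(verified)
for a, b in combinations(verified, 2):
    common = sum(1 for vs in interactions.values() if a in vs and b in vs)
    if common >= 2:
        G.add_edge(a, b, weight=common)

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])  # the discursive communities
```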
This paper has attempted to theorize the phenomenon of fake news creation and dissemination in the modern media space. The concept of "fake news" is defined as "the intentional presentation of false or misleading information as news reports, manipulating users' biases and heuristics". The problem of misinformation and information manipulation is global, and it is also relevant for Kazakhstan. In today's world, trust in key institutions of society, including the media, is declining. For instance, content emerges on social media without editorial expertise, so false claims can spread much further, faster and with greater consequences than reliable news. The authors conducted a systematic review of the literature on the phenomenon of fake news based on scientific evidence from reputable sources. Previous research has been based on an empirical methodological approach to the study of fake news and its impact on user behavior, but the systematic review method expands current knowledge: it covers a wide range of disciplines studying fake news, highlighting the growing interest in the topic; reveals unique characteristics underlying fake news that can be applied to identify it; and summarizes questions and suggestions arising from the theoretical framework. The phenomenon of fake news is a new area of research in the fields of journalism, business and marketing, and this study therefore makes a substantial contribution to the theoretical development of the problem.
Abstract In this article we review research from the past decade on how elements of communication from social media and press articles influence decision making when choosing a travel destination. ‘Fake news’ has the potential to affect the opinions, expectations and behaviour of tourism consumers. Perceived as an important threat to modern democratic societies, the intentional dissemination of false data can disrupt perception and thereby the normal functioning of state institutions and private companies. Manipulation of information thus reshapes the image of tourism destinations, accommodation units, cruise ships and even tourist attractions, mostly in order to produce higher economic benefits. Unfortunately, ‘fake news’ can sometimes be detrimental to tourist destinations and operators. To anticipate, cope with, absorb and adjust to threats related to ‘fake news’, we will address aspects of ‘societal resilience’ in later work.
News audiences on social media rely on filtering systems to navigate the overabundance of information. However, filtering systems are reinforced by echo chambers, increasing social media polarization, especially when false information hinders better-informed viewpoints. Reticence, though understudied, can hamper the spread of factual information. Hence, this study investigates why social media users are reticent about publicly correcting false information on their feeds, and how this disposition can affect ideological polarization. Eight interviews were conducted through criterion-based and referral sampling, and the resulting transcripts were analyzed through a combination of inductive and deductive approaches. Findings showed that reticence is driven by three interrelated factors: relational, proximal, and cognitive-emotional. This study contributes to the almost-forgotten research theme of reticence in communication and journalism studies, showing how such behavior and its considerations inadvertently contribute to polarization on social media.
With the rapid development of technology and the wide exchange of information, false information or fake news can easily be spread. The challenge comes not only from content that is easily created and manipulated using AI technologies such as deepfake algorithms; social media can also spread misinformation throughout the world in a matter of seconds. There are various motivations for creating fake news, such as economic gain and politics. This paper analyzes current research on false information using bibliometric methods. Trends in research, as well as links between studies, are evaluated in depth, and emerging research themes are elaborated. The analysis shows that research on fake information is increasing, with various studies published to understand the phenomenon and to find potential solutions. Two research streams have generally emerged in this topic. The first focuses on human factors, such as educating society to be more critical in responding to information and studying human behavior towards fake news. The second focuses on how technology can help tackle fake news. Several technologies are often mentioned, such as blockchain to store and trace the spread of fake news, artificial intelligence to categorize true and false information automatically, and social network analysis to analyze the chains and distribution of fake news. Based on the literature study, a solution framework is built that combines both human and technological measures to combat fake news.
The rapid spread of false information and persistent manipulation attacks on online social networks (OSNs), often for political, ideological, or financial gain, has affected the openness of OSNs. While researchers from various disciplines have investigated different manipulation-triggering elements of OSNs (such as understanding information diffusion on OSNs or detecting automated behavior of accounts), these works have not been consolidated to present a comprehensive overview of the interconnections among these elements. Notably, user psychology, the prevalence of bots, and their tactics concerning false information detection have been overlooked in previous research. To address this research gap, this paper synthesizes insights from various disciplines to provide a comprehensive analysis of the manipulation landscape. By integrating the primary elements of social media manipulation (SMM), including false information, bots, and malicious campaigns, we extensively examine each SMM element. Through a systematic investigation of prior research, we identify commonalities, highlight existing gaps, and extract valuable insights in the field. Our findings underscore the urgent need for interdisciplinary research to effectively combat social media manipulations, and our systematization can guide future research efforts and assist OSN providers in ensuring the safety and integrity of their platforms.
No abstract available
The recent serious cases of spreading false information have posed a significant threat to social stability and even national security, urgently requiring an adequate response from all sectors. This survey therefore illustrates how to fight false information along its propagation process by (1) exploring the drivers of information infectivity across the content, media, user, structural, and temporal dimensions; (2) describing propagation modeling approaches at the macro (global), meso (community), and micro (individual) levels; and (3) discussing governance strategies from both technical and application perspectives. Potential data sources and future research directions are also given, hoping to facilitate more comprehensive solutions.
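At the micro (individual) level, propagation modeling of the kind this survey covers is often an independent-cascade simulation. A toy sketch on a synthetic scale-free network, with an arbitrary per-edge probability:

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(200, 2)  # toy scale-free contact network
p = 0.08  # per-edge transmission probability (arbitrary)

active, frontier = {0}, {0}  # seed the cascade at node 0
while frontier:
    nxt = set()
    for u in frontier:
        for v in G.neighbors(u):
            if v not in active and random.random() < p:
                nxt.add(v)
    active |= nxt
    frontier = nxt
print(f"cascade reached {len(active)} of {G.number_of_nodes()} nodes")
```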
Transit ridership had been decreasing in major cities across the United States prior to the Covid-19 pandemic, with Seattle as a notable exception. I examine the relationship between travel behavior and Seattle’s land use planning program in conjunction with transit improvements. I use econometric methods to analyze multiple waves of the Puget Sound Regional Council (PSRC) Household Travel Survey from 2014 to 2021. Living in one of Seattle’s Urban Villages is significantly associated with a higher likelihood of taking transit. This relationship holds during the pandemic time period and when controlling for self-selection.
The article presents a comprehensive analysis of digital transformation in Kyrgyzstan, covering key areas of social development. The study is based on an interdisciplinary approach that examines digitalization processes in the economy, legal system, education, politics and social sphere. The main attention is paid to three fundamental aspects of digital transformation: development of digital resources (e-business and commerce), creation of a digital business environment and formation of digital competence of the population. Particular emphasis is placed on the need for coordinated interaction between the state, business and society for the successful implementation of digital transformations. The example of the Tunduk platform demonstrates the practical achievements of Kyrgyzstan in the field of digitalization of public services. The author analyzes in detail the technological architecture of the system, its functionality and the results achieved, including a 5-7-fold reduction in service delivery times. Particular attention is paid to the process of information socialization of young people in the context of digitalization, as well as the transformation of traditional institutions (education, justice, politics) under the influence of digital technologies.
Pandemic response strategies have traditionally relied on classical epidemiological models such as SIR and SEIR, which primarily focus on the biological transmission of infectious diseases. However, these models often overlook the significant influence of public behavior, trust in science, and the rapid dissemination of misinformation. This paper proposes an integrated conceptual framework that bridges these gaps by combining epidemic modeling with behavioral and informational dynamics in what is termed a "Dual-Spread Model." Through a synthesis of literature, historical examples (COVID-19, H1N1, Ebola), and illustrative diagrams, the study reveals how misinformation, public trust, and community responses can either amplify or suppress disease spread. The framework emphasizes feedback loops between disease outcomes, information flows, and behavioral responses, offering practical insights for policymakers. Key policy recommendations include behavior-informed vaccination campaigns, targeted communication strategies, and coordinated efforts between public health institutions and information platforms. This interdisciplinary approach provides a more robust and adaptive tool for future pandemic preparedness and response.
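A minimal version of such a coupling can be written as an ODE system in which misinformation prevalence scales the transmission rate. The equations and parameters below are invented to illustrate the feedback loop, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def dual_spread(t, y, beta0=0.3, gamma=0.1, alpha=0.4, mu=0.15, k=1.0):
    S, I, R, m = y
    beta = beta0 * (1.0 + k * m)          # misinformation raises risky contact
    dm = alpha * m * (1 - m) - mu * m     # logistic spread minus debunking
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I, dm]

# Initial state: 1% infected, 5% misinformation prevalence.
sol = solve_ivp(dual_spread, (0, 160), [0.99, 0.01, 0.0, 0.05],
                t_eval=np.linspace(0, 160, 400))
print(f"peak infected fraction: {sol.y[1].max():.2f}")
```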
The intersection of religion and electoral behavior in the Philippines continues to raise concerns about voter autonomy, particularly among youth voters. This research examined how collective voting practices influence the political independence of university students affiliated with various religious groups. Using a descriptive-comparative research design, a structured questionnaire was administered to 360 registered student voters from different academic departments and religious affiliations within a diverse university community. The study assessed students’ alignment with legal provisions on suffrage, their attitudes toward coordinated voting practices, and the degree of perceived influence exerted by religious and social groups. Quantitative data were analyzed using t-tests and ANOVA to explore variations in perception based on academic background, sex, and religious affiliation. Findings revealed that participants strongly support constitutional provisions on suffrage and reject electoral misconduct. Participants generally disagreed with the practice of bloc voting but perceived it as moderately influential, acknowledging its effect on electoral outcomes and the social pressure it creates. No significant differences were found in perceptions based on academic program or sex; however, religious affiliation showed a notable impact on perceived influence. The findings revealed a clear tension between upholding community unity and exercising personal choice. In response, the study proposed an educational initiative that encourages reflective decision-making and enhances awareness of electoral rights. The research offers valuable insight into how institutional, cultural, and spiritual contexts shape student voters’ behavior, contributing to broader discussions on promoting informed and autonomous participation in democratic processes.
Fake news has now grown into a big problem for societies and a major challenge for those fighting disinformation. This phenomenon plagues democratic elections and the reputations of individuals and organizations, and has negatively impacted citizens (e.g., during the COVID-19 pandemic in the US or Brazil). Hence, developing effective tools to fight this phenomenon by employing advanced Machine Learning (ML) methods poses a significant challenge. The following paper surveys the present body of knowledge on the application of such intelligent tools in the fight against disinformation. It starts by presenting the historical perspective and the current role of fake news in the information war. Proposed solutions based solely on the work of experts are analysed, and the most important directions for the application of intelligent systems in the detection of misinformation sources are pointed out. Additionally, the paper presents some useful resources (mainly datasets useful when assessing ML solutions for fake news detection) and provides a short overview of the most important R&D projects related to this subject. The main purpose of this work is to analyse the current state of knowledge in detecting fake news: on the one hand, to show possible solutions, and on the other, to identify the main challenges and methodological gaps to motivate future research.
Medical disinformation poses a serious threat to medical demographic security by distorting health behavior at scale, often amplified by confusion among individuals seeking reliable medical information across diverse topics. These distortions can increase vaccine hesitancy, encourage unproven or harmful practices such as ingesting bleach as a purported COVID-19 treatment, and delay evidence-based care. Medical disinformation also erodes trust in health institutions and contributes to cumulative harms, including increased morbidity and mortality, widening health disparities, and, in some cases, real-world violence linked to conspiracy narratives. Despite rapid advances in automated detection methods, the evidence base remains fragmented, obscuring dominant approaches, required resources, and critical research gaps. This paper presents a systematic review of medical disinformation detection research. Major modeling paradigms and reported evaluation evidence are synthesized, encompassing traditional machine learning, deep learning and transformer-based models, knowledge graph approaches, and fact-checking pipelines, together with the datasets and medical knowledge resources that support them. Commonly used feature types are categorized, their strengths and limitations are assessed, persistent weaknesses in resources and detection pipelines are identified, and targeted recommendations are offered to improve future systems and support more reliable medical informatics that strengthens medical demographic security.
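One of the surveyed paradigms, the fact-checking pipeline, reduces in its simplest form to retrieving the closest vetted claim and reusing its verdict. A toy sketch with an invented mini knowledge base; real systems use curated medical resources and far stronger retrieval:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini knowledge base of vetted medical claims and verdicts.
kb_claims = ["vitamin c cures influenza", "handwashing reduces infection risk",
             "bleach ingestion treats viral disease"]
kb_verdicts = ["false", "true", "false"]

vec = TfidfVectorizer().fit(kb_claims)
claim = "drinking bleach will treat the virus"
sims = cosine_similarity(vec.transform([claim]), vec.transform(kb_claims))[0]
best = int(sims.argmax())
# Fall back to "unverified" when nothing in the KB is close enough.
print(kb_verdicts[best] if sims[best] > 0.2 else "unverified")
```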
Over the past couple of years, the topic of "fake news" and its influence over people's opinions has become a growing cause for concern. Although the spread of disinformation on the Internet is not a new phenomenon, the widespread use of social media has exacerbated its effects, providing more channels for dissemination and the potential to "go viral." Nowhere was this more evident than during the 2016 United States Presidential Election. Although the current of disinformation spread via trolls, bots, and hyperpartisan media outlets likely reinforced existing biases rather than sway undecided voters, the effects of this deluge of disinformation are by no means trivial. The consequences range in severity from an overall distrust in news media, to an ill-informed citizenry, and in extreme cases, provocation of violent action. It is clear that human ability to discern lies from truth is flawed at best. As such, greater attention has been given towards applying machine learning approaches to detect deliberately deceptive news articles. This paper looks at the work that has already been done in this area.
Detection of Disinformation on Social Platforms: A Review of Computational Approaches and Challenges
The rapid spread of disinformation through social platforms poses a serious threat to public trust, democratic stability, and public welfare. Researchers have proposed various computational methods for fake news detection, ranging from traditional machine learning methods to advanced deep learning models and hybrid models. This review systematically analyzes 83 peer-reviewed studies, classifying them by methodological approach, feature types, data sources, and detection strategies. The paper highlights notable advances in transformer-based models, multimodal systems integrating text and images, and graph-based spread detection. It also addresses key challenges in the field, such as the lack of multilingual datasets, explainability, and resilience to adversarial content. By identifying trends, gaps, and future directions, this review provides a comprehensive framework to advance the development of robust and transparent fake news detection systems.
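The transformer-based line of work the review highlights typically amounts to fine-tuning a pretrained encoder on labeled articles. A minimal sketch with Hugging Face transformers, using a generic public checkpoint and placeholder data:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Placeholder mini-batch; a real run would iterate over a full dataset.
texts = ["aliens endorse the candidate", "court publishes ruling on appeal"]
labels = torch.tensor([1, 0])
enc = tok(texts, padding=True, truncation=True, return_tensors="pt")

optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few illustrative steps, not a full training run
    out = model(**enc, labels=labels)
    out.loss.backward()
    optim.step()
    optim.zero_grad()
print(out.logits.softmax(-1))  # per-class probabilities for the batch
```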
This experimental study analyzes the effect of media literacy on the ability of Spanish seniors over 50 years of age to identify fake news. The experiment measures the improvement achieved by older adults in the detection of political disinformation thanks to a digital competence course offered through WhatsApp. The study comprises a total sample of 1,029 individuals, subdivided into a control group (n = 531) and an experimental group (n = 498), from which a qualified experimental subsample (n = 87) was extracted. Results reveal that participants’ political beliefs, ranging from left to right positions, influence their ability to detect misinformation. A progressive political position is associated with higher accuracy in identifying right-biased news headlines and lower accuracy for left-biased headlines. A conservative position is associated with higher accuracy when the news headline has a progressive bias, but lower accuracy when the headline is right-wing. Users are more critical when the headline has a bias against theirs, while they are more likely to believe news that confirms their own beliefs. The study adds evidence on the relevance of cognitive biases in disinformation and supports the convenience of designing specific media literacy actions aimed at older adults.
With the continuous spread of the COVID-19 pandemic, misinformation poses serious threats and concerns. COVID-19-related misinformation mixes health aspects with news and political misinformation. This mixture complicates the ability to judge whether a claim related to COVID-19 is information, misinformation, or disinformation. With no standard terminology for information and disinformation, integrating different datasets and using existing classification models can be impractical. To deal with these issues, we aggregated several COVID-19 misinformation datasets and compared models learned from individual datasets with one learned from the aggregate. We also evaluated the impact of several word- and sentence-embedding models and transformers on the performance of classification models. We observed that whereas word-embedding models yielded improvements in all evaluated classification models, the level of improvement varied among classifiers. Although our work focused on COVID-19 misinformation detection, a similar approach can be applied to myriad other topics, such as the recent Russian invasion of Ukraine.
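The sentence-embedding variant of this comparison can be prototyped quickly: encode claims with a public sentence-transformer checkpoint, then train a lightweight classifier on the vectors. The data and labels below are toy stand-ins:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["5g towers spread the virus", "vaccines underwent clinical trials",
         "drinking bleach cures covid", "masks reduce droplet transmission"]
labels = [1, 0, 1, 0]  # 1 = misinformation (toy labels)

# A common public checkpoint, used here purely for illustration.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(encoder.encode(["the virus is a hoax"])))
```

Swapping the encoder for a different embedding model while keeping the classifier fixed is exactly the kind of comparison the study performs.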
Deepfake technology, a product of sophisticated artificial intelligence and machine learning algorithms, has profoundly altered the landscape of digital media. Its emergence is characterized by a fundamental duality: it presents as both a groundbreaking technological innovation and a potent societal threat. This review paper provides a comprehensive analysis of this complex technology, delving into its core generative mechanisms, its wide-ranging applications, and the significant challenges it poses to cybersecurity, ethics, and democratic institutions. The analysis explores the ethical and beneficial uses of deepfakes in sectors such as healthcare, education, and entertainment, while simultaneously detailing their malicious applications in financial fraud, political disinformation, and the creation of non-consensual explicit content. A critical examination of the ongoing "arms race" between deepfake generation and detection reveals the inherent difficulties in developing effective countermeasures, exacerbated by a fundamental asymmetry in the cost and speed of creation versus detection. The paper further scrutinizes the limitations of existing legal frameworks and the nascent, fragmented global regulatory responses. This study concludes that while deepfakes offer genuine promise as a creative tool, their current and most widespread use as a weapon for deception and manipulation positions them as an urgent and systemic threat to verifiable reality and public trust.
Australian debates about how to regulate deepfake video have, to date, largely been shaped by STEM agendas for generative artificial intelligence (AI) policy and public fears about disinformation intensification. As the federal government consults on AI regulation, this article aims to move policymakers’ focus beyond the deepfake ‘problem’ to investigate the implications of generative AI screen technologies from two creative industries perspectives. First, it establishes the negative and positive uses of deepfake applications in media, politics, commerce, education, film and art. Second, it compares the forms and scope of emerging international deepfake regulations with those proposed in Australia to conceptualise the impact that restrictions on deepfakes might have for domestic screen producers, flagging the potential closure of artistic and public expression. In doing so, we highlight that a STEM-focused approach to deepfake regulation is insufficiently attuned to the benefits of synthetic media applications in the post-truth AI communications economy.
This study investigates the transformative role of artificial intelligence (AI) in state-sponsored cyber espionage, focusing on its dual use in offensive and defensive operations. Using data from the MITRE ATT&CK Framework, FireEye APT Groups Database, UNSW-NB15 Intrusion Detection Dataset, and the Cyber Conflict Tracker by CFR, this research applied network graph analysis, multi-criteria decision analysis (MCDA), ensemble classification models, and Difference-in-Differences (DiD) analysis. Results revealed that AI-driven offensive techniques, phishing (degree centrality 0.85), and adaptive malware (betweenness centrality 0.81) significantly enhance operational precision and scalability. Defensively, ensemble classification models achieved up to 95.8% accuracy, highlighting AI's efficacy in intrusion detection. AI regulatory frameworks reduced misattribution rates by 20% and escalation incidents by 10%, demonstrating their critical role in mitigating geopolitical risks. The findings underscore AI's transformative potential in advancing cyber operations and shaping international policy and governance. By addressing challenges such as attribution, escalation risks, and ethical dilemmas, this study highlights the necessity of stronger global cooperation and regulatory frameworks to navigate the dual-use nature of AI. It provides actionable insights for policymakers, cybersecurity professionals, and researchers, emphasizing the urgency of aligning technological advancements with strategies for enhancing global cybersecurity resilience.
Technological advancements in information and communications technologies and related hardware and software have positively transformed the political, military, economic and social domains in countries around the globe. These technologies are imperfect, however, and states and state-sponsored threat actors exploit flaws in hardware and software for various types of attacks. The same threat actors also exploit software technologies to spread disinformation and disseminate false information to mislead public opinion. This article reviews the scientific community's discourse on disinformation. The purpose is to understand where the research focus lies, who the researchers and co-authors are, and what the publication venues are. The article reviews the scientific literature using the computational literature review, a semi-automated review method, and the structural topic modelling framework to understand trends in the research. Of 3,097 documents published in 1,700 publication venues between 1974 and 2022, 704 were analysed. The results reveal 46 topics on issues such as rumours and disinformation spread during the Covid-19 pandemic, Soviet and Russian information warfare, trolls, and health-related themes and effects.
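Structural topic modelling is an R ecosystem; as a rough stand-in, plain LDA in scikit-learn shows the topic-extraction step of such a computational literature review on toy abstracts:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy "abstracts" standing in for the reviewed corpus.
docs = ["covid rumours spread on social platforms",
        "soviet information warfare doctrine history",
        "trolls amplify health disinformation online"]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # per-document topic mixtures
```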
Social media appear to have increased the velocity of spreading falsehoods and misinformation, with even greater influence around election periods. This study looks at how low media literacy worsened the spread of misinformation in the 2023 general elections in Plateau State, Nigeria. Using a mixed-methods research design, the study measures media literacy across different demographics and its effects on voter perception and decision-making through both qualitative and quantitative approaches. The outcomes suggest that low levels of media literacy predispose individuals to manipulation, allowing misinformation to shape electoral choices, widen political polarisation, and damage the integrity of democracy. The study identifies the main sources and channels for the spread of misinformation across social media sites, political campaigning, and some traditional media, and examines how algorithmic content curation, echo chambers, and cognitive biases further false narratives and curtail critical engagement with factual information. It suggests targeted media literacy campaigns that teach voters how to identify credible sources, policy regulations on disinformation spread, and technological approaches such as AI fact-checking systems to identify and flag misleading content as possible ways out of the problem. Enhanced partnership between government agencies, civil society organizations, and digital platforms was noted as a significant step toward combating misinformation and creating an informed electorate. This study provides an overview of the general subject of information disorder and emphasizes the urgent need to make media literacy an anchor against the manipulation of opinion in any democratic process.
This paper analyzes state-sponsored cyber operations by the People’s Republic of China (PRC) against the global maritime sector from 2015–2025. It moves beyond isolated technical analysis to frame these campaigns as a coherent strategic logic. Using a structured, focused comparison of three PRC-linked intrusion sets—Volt Typhoon, APT40, and Mustang Panda—this analysis assesses their operational characteristics against prominent cyber strategy theories, including capability-intensity barriers, the intelligence-contest logic, and persistent engagement. The findings demonstrate a consistent pattern of behavior across all three cases: operations are capability-intensive, espionage-forward, and prioritize secrecy over overt signaling. This contrasts with other state actors who have used disruptive signaling in the maritime domain. We argue this pattern is explained by Smeets’ capability-scarcity logic: high-capability maritime accesses are too costly to expend on peacetime signaling. This behavior aligns with PRC doctrinal concepts of "informationized warfare" which prize system-mapping and pre-positioning. The paper concludes by reframing this activity not as "cyberwar" but as a form of "new naval warfare"—a persistent, below-threshold competition for control over the core components of seapower.
The COVID-19 pandemic has been the catalyser of one of the most prolific waves of disinformation and hate speech on social media. Amid an infodemic, special interest groups, such as the international movement of “Doctors for the Truth”, grew in influence on social media, while leveraging their status as healthcare professionals and creating true echo chambers of COVID-19 false information and misbeliefs, supported by large communities of eager followers all around the world. In this paper, we analyse the discourse of the Portuguese community on Facebook, employing computer-assisted qualitative data analysis. A dataset of 2542 textual and multimedia interactions was extracted from the community and submitted to deductive and inductive coding supported by existing theoretical models. Our investigation revealed the high frequency of negative emotions, of toxic and hateful speech, as well as the widespread diffusion of COVID-19 misbeliefs, 32 of which are of particular relevance in the national context.
This article critically examines the dissemination of misinformation via social media in Pakistan. The study focuses on how social media users in Pakistan respond to the plethora of fake news, attempting to unearth the latent causes shaping their attitudes. Based on a mixed-methods design, the research comprises a quantitative survey of one hundred and fifty active social media users from universities in Pakistan, supported by comprehensive qualitative interviews with thirty active social media enthusiasts. The study delves into the diffusion of fake news and its effects on internet users and the online community in Pakistan, detecting the patterns and trends of the spread of disinformation, the factors responsible for generating responses and shaping behaviours, and the stratagems adopted by social media operators in Pakistan. The study highlights the significance of calculated and timely interventions to ward off disinformation, urging stepped-up media literacy among consumers of social media in Pakistan.
No abstract available
Abstract Over 5 billion people now use social media platforms. As our social lives become increasingly entangled with online social networks, it is important to understand the dynamics of online information diffusion. This is particularly true for the political domain, as political elites, disinformation profiteers, and social activists all use social media to gain influence by spreading information. Recent work found that emotional expressions related to morality (moral-emotion expression) are associated with increased diffusion of political messages—a phenomenon we called “moral contagion.” Here, we perform a large, pre-registered direct replication (N = 849,266) of Brady et al. using the dictionary methods from the original paper, as well as new large-language models. We also conduct a meta-analysis of all available data testing moral contagion (5 labs, 27 studies, N = 4,821,006). The estimate of moral contagion in the available population is positive and significant (IRR = 1.13, 95% CI = [1.06, 1.20]), such that for each additional moral–emotional word in a post, the expected number of shares was 13% greater. The mean effect size of the pre-registered replication (IRR = 1.17) better estimated the population effect than the original study (IRR = 1.20). Contrary to prior work, we find that the moral contagion model substantially outperforms nonsense models of diffusion (“XYZ contagion model”). Moral contagion was also conceptually replicated when moral–emotional content was measured using state-of-the-art natural language processing methods. These findings reveal that the moral contagion effect is highly robust across datasets and methods.
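The reported IRR comes from count regression: with a log link, the exponentiated coefficient on the moral-emotional word count is the multiplicative effect on expected shares. A sketch on simulated data that recovers an IRR of about 1.13; the column names and the generating process are illustrative, not the study's materials:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate posts where each moral-emotional word multiplies expected
# shares by 1.13, mirroring the reported IRR.
rng = np.random.default_rng(1)
me_words = rng.integers(0, 6, 2000)
shares = rng.poisson(np.exp(1.0 + np.log(1.13) * me_words))
df = pd.DataFrame({"me_words": me_words, "shares": shares})

X = sm.add_constant(df[["me_words"]])
fit = sm.GLM(df["shares"], X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params["me_words"]))  # ~1.13: +13% shares per word
```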
The framing of events in various media and discourse spaces is crucial in the era of misinformation and polarization. Many studies, however, are limited to specific media or networks, disregarding the importance of cross-platform diffusion. This study overcomes that limitation by conducting a multi-platform framing analysis on Twitter, YouTube, and traditional media analyzing the 2019 Koran burning in Kristiansand, Norway. It examines media and policy frames and uncovers network connections through shared URLs. The findings show that online news emphasizes the incident's legality, while social media focuses on its morality, with harsh hate speech prevalent in YouTube comments. Additionally, YouTube is identified as the most self-contained community, whereas Twitter is the most open to external inputs.
Our social media newsfeeds are filled with a variety of content all battling for our limited attention. Across 3 studies, we investigated whether moral and emotional content captures our attention more than other content and if this may help explain why this content is more likely to go viral online. Using a combination of controlled lab experiments and nearly 50,000 political tweets, we found that moral and emotional content are prioritized in early visual attention more than neutral content, and that such attentional capture is associated with increased retweets during political conversations online. Furthermore, we found that the differences in attentional capture among moral and emotional stimuli could not be fully explained by differences in arousal. These studies suggest that attentional capture is 1 basic psychological process that helps explain the increased diffusion of moral and emotional content during political discourse on social media, and shed light on ways in which political leaders, disinformation profiteers, marketers, and activist organizations can spread moralized content by capitalizing on natural tendencies of our perceptual systems.
The growing interconnection of technology and politics, and the enactment of particular political goals through them (technopolitics), has been closely articulated with emotions and the building of foreign policy narratives. In the current communication paradigm, global and disintermediated, bringing together distinct actors in the same digital space with wide diffusion and reach, the challenges to international politics are diverse. Digital and media literacy are, in this regard, key to addressing the implications of these changes and avoiding the spread of disinformation, fake news and distorted practices that might have profound effects at the societal and political level. In this context, this paper aims to provide a basis for understanding the emerging connection between political communication, polarization, disinformation, and emotions in social networks, with digital literacy as a central factor explaining misuse or alleviating deficiencies, on the one hand, and how this context is affecting the reconfiguration of international relations and politics, on the other. The case of the war in Ukraine illustrates these trends and dynamics.
A survey on the determinants to using political memes as a journalistic tool by Filipino journalists
Memes have successfully disseminated various information on social media, albeit in a humorous tone. Journalism and journalists, however, remain uncertain about using memes as part of news work. Previous studies have revealed that variables related to journalism such as news values, participatory culture, public opinion, disinformation and credibility may be relevant in decisions to use memes in journalistic work. This survey from the Philippines employed partial least squares–structural equation modelling (PLS-SEM) to determine the factors that Filipino journalists (N = 138) consider in using political memes as a journalistic tool. The study is theoretically anchored in the theory of planned behaviour and the multilevel model of meme diffusion. It was found that the variables public opinion, news values, participatory culture and disinformation indirectly affect the production of political memes through mediation by intention, while credibility was insignificant. Results also show that intention has a direct effect on the production of political memes, indicating that regardless of the degree of the variables’ existence, journalists still carry some intention to produce political memes. These results can inform journalists and their news organizations should they employ memes as a tool for credible news production, not as a tool for disinformation.
Analysts of social media differ in their emphasis on the effects of message content versus social network structure. The balance of these factors may change substantially across time. When a major event occurs, initial independent reactions may give way to more social diffusion of interpretations of the event among different communities, including those committed to disinformation. Here, we explore these dynamics through a case study analysis of the Russian-language Twitter content emerging from Belarus before and after its presidential election of August 9, 2020. From these Russian-language tweets, we extracted a set of topics that characterize the social media data and construct networks to represent the sharing of these topics before and after the election. The case study in Belarus reveals how misinformation can be re-invigorated in discourse through the novelty of a major event. More generally, it suggests how audience networks can shift from influentials dispensing information before an event to a de-centralized sharing of information after it.
This study examines the use of TikTok as a platform for youth activism in Southeast Asia, focusing on Indonesia, Malaysia, and the Philippines. Through a mixed-methods approach that includes qualitative content analysis, semi-structured interviews, systematic fact-checking, and digital trace data, the research explores how activists leverage TikTok’s unique features to promote social justice and mobilize political participation amid democratic backslidings. The findings reveal that activists skillfully adapt to TikTok’s attention economy, using strategies like trend-jacking and meme creation to reach broad audiences. However, they also face significant challenges, including rampant harassment and the precarious nature of algorithmic visibility. The study highlights the complexities of digital activism, where the pursuit of virality can sometimes undermine the depth of political movements. While the research counters the prevalent narrative that digital activism is rife with misinformation, it underscores the importance of maintaining accuracy to protect credibility. The study concludes by calling for further research on integrating online activism with offline organizing and exploring how platform governance can be reformed to better support activists. The insights from this study contribute to a deeper understanding of TikTok’s role in contemporary social movements and the challenges faced by youth activists in the digital age.
No abstract available
We use crowd-sourced assessments from X’s Community Notes program to examine whether there are partisan differences in the sharing of misleading information. Unlike previous studies, misleadingness here is determined by agreement across a diverse community of platform users, rather than by fact-checkers. We find that 2.3 times more posts by Republicans are flagged as misleading compared to posts by Democrats. These results are not base rate artifacts, as we find no meaningful overrepresentation of Republicans among X users. Our findings provide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of raters, and indicate that Republicans will be sanctioned more than Democrats even if platforms transition from professional fact-checking to Community Notes.
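The base-rate argument is simple arithmetic: the flagged-post ratio only indicates asymmetric sharing if it exceeds the parties' relative presence on the platform. Illustrative numbers, not the study's data:

```python
# Hypothetical counts to show the normalization, not the paper's figures.
flagged = {"R": 2300, "D": 1000}   # posts flagged as misleading
users = {"R": 0.49, "D": 0.51}     # assumed shares of X users by party

flag_ratio = flagged["R"] / flagged["D"]         # ~2.3x more R posts flagged
base_rate_ratio = users["R"] / users["D"]        # ~1.0 if no overrepresentation
print(flag_ratio, flag_ratio / base_rate_ratio)  # asymmetry survives normalization
```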
We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation. Our findings suggest that direct interactions with AI tools like ChatGPT and DALL-E were not correlated with these concerns, regardless of education or STEM work experience. Instead, news consumption, particularly through television, appeared more closely linked to heightened concerns. These results point to the potential influence of news media and the importance of exploring AI literacy and balanced reporting.
Exposure to misinformation can affect citizens’ beliefs, political preferences, and compliance with government policies. However, little is known about how to durably reduce susceptibility to misinformation, particularly in the Global South. We evaluate an intervention in South Africa that encouraged individuals to consume biweekly fact-checks—as text messages or podcasts—via WhatsApp for six months. Sustained exposure to these fact-checks induced substantial internalization of fact-checked content, while increasing participants’ ability to discern new political and health misinformation upon exposure—especially when fact-check consumption was financially incentivized. Fact-checks that could be quickly consumed via short text messages or via podcasts with empathetic content were most effective. We find limited effects on news consumption choices or verification behavior, but still observe changes in political attitudes and COVID-19-related behaviors. These results demonstrate that sustained exposure to fact-checks can inoculate citizens against future misinformation, but highlight the difficulty of inducing broader behavioral changes relating to media usage.
Current interventions to combat misinformation, including fact-checking, media literacy tips and media coverage of misinformation, may have unintended consequences for democracy. We propose that these interventions may increase scepticism towards all information, including accurate information. Across three online survey experiments in three diverse countries (the United States, Poland and Hong Kong; total n = 6,127), we tested the negative spillover effects of existing strategies and compared them with three alternative interventions against misinformation. We examined how exposure to fact-checking, media literacy tips and media coverage of misinformation affects individuals’ perception of both factual and false information, as well as their trust in key democratic institutions. Our results show that while all interventions successfully reduce belief in false information, they also negatively impact the credibility of factual information. This highlights the need for further improved strategies that minimize the harms and maximize the benefits of interventions against misinformation. This study reveals that current interventions against misinformation erode belief in accurate information. The authors argue that future strategies should shift their focus from only fighting falsehoods to also nurturing trust in reliable news.
No abstract available
No abstract available
Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source trustworthiness ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While NewsGuard initially focused only on the US, the database has expanded to include sources from other countries. In addition to trustworthiness ratings, the NewsGuard database contains various contextual assessments of the sources, which are less often used in contemporary research on misinformation. In this work, we provide an analysis of the content of the NewsGuard database, focusing on the temporal stability and completeness of its ratings across countries, as well as the usefulness of information on political orientation and topics for misinformation studies. We find that trustworthiness ratings and source coverage have remained relatively stable since 2022, particularly for the US, France, Italy, Germany, and Canada, with US-based sources consistently scoring lower than those from other countries. Additional information on the political orientation and topics covered by sources is comprehensive and provides valuable assets for characterizing sources beyond trustworthiness. By evaluating the database over time and across countries, we identify potential pitfalls that compromise the validity of using NewsGuard as a tool for quantifying untrustworthy information, particularly if dichotomous "trustworthy"/"untrustworthy" labels are used. Lastly, we provide recommendations for digital media research on how to avoid these pitfalls and discuss appropriate use cases for the NewsGuard database and source-level approaches in general.
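To illustrate the dichotomization pitfall, a small sketch using NewsGuard's 0–100 scale and its published 60-point trust cutoff; the column names and example scores below are invented:

```python
import pandas as pd

# Hypothetical excerpt of source-level ratings; column names are assumptions.
ratings = pd.DataFrame({
    "domain": ["a.example", "b.example", "c.example", "d.example"],
    "score": [92.5, 62.0, 59.5, 12.5],  # NewsGuard scores range from 0 to 100
})

# Dichotomizing at the published cutoff of 60 treats a 59.5 source and a 12.5
# source identically -- the kind of pitfall the authors warn about.
ratings["trustworthy_binary"] = ratings["score"] >= 60

# A graded alternative: keep the continuous score, or bin it coarsely.
ratings["score_band"] = pd.cut(ratings["score"], bins=[0, 30, 60, 80, 100],
                               labels=["very low", "low", "medium", "high"])
print(ratings)
```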
The spread of misinformation on social media has become a severe threat to public interests. For example, several public health concerns arose out of social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as the COVID-19 pandemic, the Australian bushfires, and the US elections, we identified disaster, health, and politics as specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles relevant to the three themes for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies and, based on this framework, discuss the inter-relationships of concepts and the strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly in three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in data science and social media and informs strategies to combat social media misinformation.
Misinformation spreads rapidly on social media, whereas traditional countermeasures struggle to balance effectiveness, scalability, and free expression. Many platforms are now experimenting with crowdsourced fact-checking—systems that rely on users’ collective judgment to identify and annotate misleading content. This paper investigates the efficacy of such systems in curbing misinformation in the context of Community Notes, a pioneering crowdsourced fact-checking system from Twitter/X. Using a regression discontinuity design, we find that publicly displaying community notes significantly increases and accelerates the voluntary retraction of misleading tweets, demonstrating the viability of crowd-based fact-checking as an alternative to professional fact-checking and forcible content removal. The effect is primarily driven by authors’ reputational concerns and social pressure when corrections are visible to the public. Our findings carry meaningful implications for practice and policy. Individuals can play an active role by contributing to crowdchecking, strengthening collective information integrity. Platforms should adopt transparent, community-based systems, like Community Notes, as scalable, less controversial alternatives to forcible content removal. Policymakers can support these initiatives through regulatory guidance that promotes transparency and accountability.
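A sketch of the sharp regression discontinuity design the paper describes, run on simulated data: the running variable is a note's helpfulness score and display status switches at a cutoff. The 0.40 cutoff echoes Community Notes' public "Helpful" threshold, but both it and the data-generating process here are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated sharp RDD: notes become publicly displayed once the helpfulness
# score crosses 0.40; retraction probability jumps at the cutoff by design.
rng = np.random.default_rng(0)
n = 2000
score = rng.uniform(0.2, 0.6, n)
displayed = (score >= 0.40).astype(int)
latent = 0.05 + 0.12 * displayed + 0.10 * (score - 0.40) + rng.normal(0, 0.05, n)
df = pd.DataFrame({"retracted": (latent > 0.10).astype(int),
                   "displayed": displayed,
                   "centered": score - 0.40})

# Local linear specification with separate slopes on each side of the cutoff,
# restricted to a narrow bandwidth around it.
window = df[df["centered"].abs() < 0.10]
model = smf.ols("retracted ~ displayed + centered + displayed:centered",
                data=window).fit(cov_type="HC1")
print(model.params["displayed"])  # estimated jump in retraction at the cutoff
```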
In response to intense pressure, technology companies have enacted policies to combat misinformation [1–4]. The enforcement of these policies has, however, led to technology companies being regularly accused of political bias [5–7]. We argue that differential sharing of misinformation by people identifying with different political groups [8–15] could lead to political asymmetries in enforcement, even by unbiased policies. We first analysed 9,000 politically active Twitter users during the US 2020 presidential election. Although users estimated to be pro-Trump/conservative were indeed substantially more likely to be suspended than those estimated to be pro-Biden/liberal, users who were pro-Trump/conservative also shared far more links to various sets of low-quality news sites—even when news quality was determined by politically balanced groups of laypeople, or groups of only Republican laypeople—and had higher estimated likelihoods of being bots. We find similar associations between stated or inferred conservatism and low-quality news sharing (on the basis of both expert and politically balanced layperson ratings) in 7 other datasets of sharing from Twitter, Facebook and survey experiments, spanning 2016 to 2023 and including data from 16 different countries. Thus, political imbalance in enforcement need not imply bias on the part of social media companies: even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected.
Misinformation such as fake news and rumors is a serious threat for information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double‐edged sword in the fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then, another important question is: how to combat LLM‐generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we illustrate the current efforts and present an outlook for these two fundamental questions, respectively. The goal of this survey paper is to facilitate the progress of utilizing LLMs for fighting misinformation and call for interdisciplinary efforts from different stakeholders for combating LLM‐generated misinformation.
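As a minimal illustration of the first question (using LLMs to combat misinformation), a hedged sketch of claim classification via the OpenAI chat API; the model name and prompt are placeholders, not the survey's setup:

```python
from openai import OpenAI  # assumes the openai client library is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_claim(claim: str) -> str:
    """Ask an LLM for a coarse veracity label; a sketch, not a benchmark setup."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; substitute as needed
        messages=[
            {"role": "system",
             "content": "Label the claim as SUPPORTED, REFUTED, or NOT ENOUGH "
                        "INFO, then give a one-sentence rationale."},
            {"role": "user", "content": claim},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(classify_claim("Drinking bleach cures COVID-19."))
```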
In an era of pervasive misinformation, equipping citizens to counter its spread is increasingly critical. This study examines news authentication—individuals’ proactive verification of news—as a key indicator of resilience to misinformation. Guided by the theory of planned behavior and the resilience model, we examine how individual characteristics and structural contexts interact to influence news authentication. To do so, we adopt a multilevel comparative approach, analyzing news authentication in three distinct societies: Hong Kong, the Netherlands, and the United States. Drawing on a preregistered, population-based survey conducted in 2022 (N = 6,082), we apply multigroup structural equation modeling to identify the influential factors. Our findings show that, at the societal level, news authentication is more prevalent in the United States and Hong Kong, where severe polarization and fragmented, low-trust media environments amplify misinformation risks. Conversely, the Netherlands exhibits lower levels of news authentication, potentially due to its relatively cohesive media environment and moderate polarization. At the individual level, political efficacy and institutional trust are consistent predictors across societies, underscoring the importance of political empowerment and trust in fostering resilience. Education significantly predicts news authentication only in the United States, where the complex information landscape necessitates higher cognitive engagement. Notably, conspiracy beliefs positively associate with news authentication in the Netherlands and the United States, reflecting a potential “dark side” of this behavior in contexts marked by growing anti-establishment sentiments. These findings highlight the interplay between individual capacities, political beliefs, and broader media and political environments in shaping resilience to misinformation.
The proliferation of misinformation in the digital age has emerged as a pervasive and pressing challenge, threatening the integrity of information dissemination across online platforms. In response to this growing concern, this survey paper offers a comprehensive analysis of the landscape of misinformation detection methodologies. Our survey delves into the intricacies of model architectures, feature engineering, and data sources, providing insights into the strengths and limitations of each approach. Despite significant advancements in misinformation detection, this survey identifies persistent challenges. The paper accentuates the need for adaptive models that can effectively tackle rapidly evolving events, such as the COVID-19 pandemic. Language adaptability remains another substantial frontier, particularly in the context of low-resource languages like Chinese. Furthermore, it draws attention to the dearth of balanced, multilingual datasets, emphasizing their significance for robust model training and assessment. By addressing emerging challenges and offering a comprehensive view, our paper enriches the understanding of deep learning techniques in misinformation detection.
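For context, the classic baseline against which the surveyed deep models are typically compared can be written in a few lines; the toy corpus and labels below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; labels: 1 = misinformation, 0 = reliable. Purely illustrative.
texts = ["miracle cure doctors hate", "WHO publishes new vaccine guidance",
         "secret plot controls election", "official turnout figures released"]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier: the standard shallow baseline.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)
print(baseline.predict(["new miracle cure suppressed by doctors"]))
```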
ABSTRACT As distrust in mainstream media rises, audiences increasingly turn toward alternative news sources. This study examines the impact of alternative non-mainstream podcast news use on contentious political participation through misinformation belief and sharing. Findings from a sample of US adults (N = 797) indicate that alternative non-mainstream podcast news use is significantly associated with misinformation belief and sharing. In addition, alternative non-mainstream podcast news use is related to an increase in contentious political participation through misinformation sharing only. Ultimately, we find that political identity strength moderates the relationship between alternative non-mainstream podcast news use and contentious political participation only through misinformation belief.
No abstract available
We explore the link between social media news consumption and belief in misinformation about women politicians in India. In addition, we investigate the roles of sexism, with cognitive ability (individual factor) and gender inequality status (of the state where respondents reside) as structural-level moderating factors. Results indicate a positive association between social media news use and belief in misinformation, mediated by hostile and benevolent sexism. Furthermore, we find that low-cognitive individuals in states with high structural gender inequality are most vulnerable to misinformation. The results emphasize the need to create more gender equality structurally, to reduce susceptibility to gendered misinformation.
The spread of misinformation threatens democratic societies, hampering informed decision-making. Partisan identity biases perceptions of reality, promoting false beliefs. The Identity-based Model of Political Belief explains how social identity shapes information processing and contributes to misinformation. According to this model, social identity goals can override accuracy goals, leading to belief alignment with party members rather than facts. We propose an extended version of this model that incorporates the role of informational context in misinformation belief and sharing. Partisanship involves cognitive and motivational aspects that shape party members' beliefs and actions. This includes whether they seek further evidence, where they seek that evidence, and which sources they trust. Understanding the interplay between social identity and accuracy is crucial in addressing misinformation.
Global health leaders often dismiss politics as antithetical to the aims of public health, but Luisa Enria and colleagues argue that political analysis can offer new ways to build trust in vaccination in the context of growing online misinformation.
Recent academic debate has seen the emergence of the claim that misinformation is not a significant societal problem. We argue that the arguments used to support this minimizing position are flawed, particularly if interpreted (e.g., by policymakers or the public) as suggesting that misinformation can be safely ignored. Here, we rebut the two main claims, namely that misinformation is not of substantive concern (a) due to its low incidence and (b) because it has no causal influence on notable political or behavioral outcomes. Through a critical review of the current literature, we demonstrate that (a) the prevalence of misinformation is nonnegligible if reasonably inclusive definitions are applied and that (b) misinformation has causal impacts on important beliefs and behaviors. Both scholars and policymakers should therefore continue to take misinformation seriously.
No abstract available
Significance: Many people consume news via social media. It is therefore desirable to reduce social media users’ exposure to low-quality news content. One possible intervention is for social media ranking algorithms to show relatively less content from sources that users deem to be untrustworthy. But are laypeople’s judgments reliable indicators of quality, or are they corrupted by either partisan bias or lack of information? Perhaps surprisingly, we find that laypeople—on average—are quite good at distinguishing between lower- and higher-quality sources. These results indicate that incorporating the trust ratings of laypeople into social media ranking algorithms may prove an effective intervention against misinformation, fake news, and news content with heavy political bias. Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
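The paper's key aggregation step, equally weighting Democrat and Republican ratings before correlating with fact-checkers, can be sketched as follows; the per-source numbers are invented, not the study's data:

```python
import pandas as pd

# Hypothetical per-source mean trust ratings by party, plus fact-checker
# scores; column names and values are assumptions for illustration only.
df = pd.DataFrame({
    "source": ["outlet_a", "outlet_b", "outlet_c", "outlet_d"],
    "dem_trust": [4.1, 3.8, 1.5, 1.2],
    "rep_trust": [3.0, 3.4, 1.9, 1.1],
    "factchecker": [4.3, 4.0, 1.4, 1.0],
})

# Politically balanced crowd rating: equal weight to each party's mean,
# regardless of how many raters each party contributed.
df["balanced"] = (df["dem_trust"] + df["rep_trust"]) / 2
print(df["balanced"].corr(df["factchecker"]))  # Pearson r, as in the paper
```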
The COVID-19 pandemic triggered not only a public health crisis but also a parallel “infodemic”—an overwhelming flood of information, including false or misleading content. This phenomenon created confusion and mistrust and hindered public health efforts globally. Understanding the dynamics of this infodemic is essential for improving future crisis communication and misinformation management. This systematic review followed PRISMA 2020 guidelines. A comprehensive search was conducted across PubMed, Scopus, Web of Science, and Google Scholar for studies published between December 2019 and December 2024. Studies were included based on predefined criteria focusing on the causes, spread, impacts, and mitigation strategies of COVID-19-related misinformation. Data were extracted, thematically coded, and synthesized, and the quality of studies was assessed using the AMSTAR 2 tool. Seventy-six eligible studies were analyzed. Key themes identified included the amplification of misinformation via digital platforms, especially social media; psychological drivers such as cognitive biases and emotional appeals; and the role of echo chambers in sustaining false narratives. Consequences included reduced adherence to public health measures, increased vaccine hesitancy, and erosion of trust in healthcare systems. Interventions like fact-checking, digital literacy programs, AI-based moderation, and trusted messengers showed varied effectiveness, with cultural and contextual factors influencing outcomes. The review highlights that no single strategy suffices to address misinformation; effective mitigation requires a multi-layered approach involving reactive (fact-checking), proactive (digital literacy, community engagement), and structural (policy and algorithm transparency) interventions. The review also underscores the importance of interdisciplinary collaboration and adaptive policies tailored to specific sociocultural settings.
Misinformation is widespread, but only some people accept the false information they encounter. This raises two questions: Who falls for misinformation, and why do they fall for misinformation? To address these questions, two studies investigated associations between 15 individual-difference dimensions and judgments of misinformation as true. Using Signal Detection Theory, the studies further investigated whether the obtained associations are driven by individual differences in truth sensitivity, acceptance threshold, or myside bias. For both political misinformation (Study 1) and misinformation about COVID-19 vaccines (Study 2), truth sensitivity was positively associated with cognitive reflection and actively open-minded thinking, and negatively associated with bullshit receptivity and conspiracy mentality. Although acceptance threshold and myside bias explained considerable variance in judgments of misinformation as true, neither showed robust associations with the measured individual-difference dimensions. The findings provide deeper insights into individual differences in misinformation susceptibility and uncover critical gaps in their scientific understanding.
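A minimal sketch of the Signal Detection Theory indices these studies rely on, with toy response counts: truth sensitivity as d′ and the acceptance threshold as the criterion c (myside bias would be estimated analogously, as a criterion shift between ideology-congruent and ideology-incongruent items):

```python
from statistics import NormalDist

# Signal detection indices from a truth-judgment task: "hits" are true items
# judged true, "false alarms" are false items judged true. Toy counts below.
hits, misses = 40, 10                # responses to true headlines
false_alarms, correct_rej = 15, 35   # responses to false headlines

z = NormalDist().inv_cdf
hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = z(hit_rate) - z(fa_rate)             # truth sensitivity
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # acceptance threshold
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```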
What role does propaganda play in the information politics of authoritarian societies, and what is its relationship to censorship? What have we learned from rival accounts in recent literature about why states produce it? While regimes clearly invest in propaganda believing that it is effective, there is still much to learn about whether, when, and how it actually is effective. We first discuss some of the tensions inherent in distinguishing between persuasive and dominating, soft and hard, propaganda. We then review efforts to understand the conditions under which propaganda changes attitudes and/or behavior in terms of propaganda's content, relational factors, aspects of the political environment, and citizens’ own predispositions. We highlight the need for more research on propaganda in authoritarian settings, especially on how patterns of its consumption may change amid crises, technological shifts, and direct state interventions.
Since the 2016 U.S. election and the U.K. Brexit campaign, computational propaganda has become an important research topic in communication, political and social science. Recently, it has become clearer that computational propaganda doesn’t start from a clean slate and is not precisely bound to single issues or campaigns. Instead, computational propaganda needs to be looked at as a complex phenomenon in a global environment of co-evolving issues and events, emerging technologies, policies and legal frameworks, and social dynamics. Here, we review the literature on computational propaganda from this perspective and theorize this evolving and longitudinal nature of computational propaganda campaigns through the lens of relational dynamics. Our conceptual contribution forms the basis for a new kind of empirical research on computational propaganda that is aware of the complex interdependencies, feedback cycles and structural conditions that are elusive when focusing on individual campaigns and short time frames.
This paper delves into the critical and evolving challenge posed by terrorist organizations' adaptation to cyber technologies, as the proliferation of these technologies significantly impacts societal and security dynamics globally. The paper highlights the case of ISIS as a prime example, illustrating the group's sophisticated use of cyberspace for purposes ranging from global recruitment to attack planning, thereby demonstrating the complexity and reach of modern cyberterrorism. Aiming to investigate the adaptation of terrorist groups to cyber technologies, the study primarily focuses on methods used for recruitment, propaganda, and execution of cyberattacks. The research employs a quantitative methodology, relying on a survey strategy to gather data, and it significantly engages with consultants and policy specialists in counter-terrorism, alongside cybersecurity experts. The findings reveal a substantial impact of digital platforms on the global reach and influence of terrorist groups, the increasing sophistication of cyberattacks, and the extensive socio-economic repercussions of digital-age terrorism. The study culminates in offering insightful recommendations, urging a multifaceted response integrating technological, social, and international measures. It emphasizes enhancing digital literacy and public awareness to combat the influence of extremist narratives and misinformation. The necessity of international cooperation and intelligence sharing is underscored, highlighting the global nature of the threat and the need for unified standards in regulating digital spaces. Additionally, the paper advocates for stringent regulatory measures and advanced detection technologies to counter the misuse of drones and 3D-printed weapons, pointing to the necessity of collaborative efforts across various sectors to strike a balance between security and innovation.
Is the hype about “ecoterrorism” an analogy, a warning, or propaganda? To answer this question, we start by defining radicalization, terrorism, and civil disobedience in order to develop systematic categories that allow us to pursue two specific research goals. First, we analyse how the breadth of the German climate movement is represented in the media, how the issue of “terrorism” is taken up, and with what consequences for the debate; here we make a discursive argument. Secondly, we use the information provided by the media reports and triangulate it with primary data from the movements analysed and secondary data from academic publications in order to assess the validity of the accusation of terrorism; here we make a factual argument about the current properties of the climate movement. Finally, we bring both arguments together and argue that even the more radical currents of climate activism should not be classified as terrorist. What we can see is that there has been an attempt to criminalize the demands of the radical climate movement, during which large parts of the German print media have become willing handmaidens in the delegitimization of more or less radical climate groups. More recently, the first signs of a backlash against this criminalization can be detected.
This study conceptualizes “banana populism,” a novel analytical framework to examine how whimsical imagery functions in contemporary populism. Banana populism utilizes the ordinary—exemplified by the banana—for its ubiquity, inherent humor, and absurdity, transforming these elements into powerful political tools. These articulations effectively mainstream extreme ideologies, invite affective investment from broad publics, and delineate antagonistic frontiers by employing familiar cultural symbols and everyday objects, such as military attire or MAGA hats. Such performative elements not only enhance the authenticity of populist leaders but also make their messages more accessible and emotionally engaging, increasing their appeal and relatability. Furthermore, the memetic nature of banana populism underlines its adaptability and potency on social media, where these performances become part of a participatory and dynamic political discourse. This framework shows how seemingly innocuous visual articulations can profoundly impact political communication and identity formation in contemporary political landscapes.
This paper contributes to the literature on the style of populists by focusing on the visual and textual elements of Viktor Orbán’s Facebook communication. Orbán is one of the most prominent figures associated with contemporary populism, and his 14 consecutive years in power make him a unique case for the study of the bimodal populist style. To this end, all his image-based posts (N = 492) were collected over a three-year period (2018-2020), covering campaigns, the COVID-19 crisis, and slow news (‘cucumber’) periods. The results of the quantitative visual and verbal content analyses reveal the primacy of visual content in transmitting populist signals, suggesting that Orbán’s relationship with ‘the elite’ is predominantly positive, contrary to expectations about negative populist communication about elites. Although the results indicate only moderate differences in the use of populist style elements across the three time periods, the findings suggest that visual elements are used in populist communication to convey different messages than textual ones.
The parody of political figures is standard in democratic countries, including Indonesia. However, in Indonesia, some cases of political parodies have faced intense backlash, even leading to legal action. In contrast, the parody of Indonesia's fourth president, Abdurrahman Wahid (Gus Dur), created by the GUSDURian Network, is often presented uniquely and eccentrically. This article provides recommendations for strategic communication practitioners, mainly social media activists, to disseminate the thoughts of prominent Indonesian figures through visual political parodies, with a case study on Gus Dur parody content produced by the GUSDURian Network. Based on in-depth interviews with GUSDURian staff and guided by communication strategy indicators from Onong Uchjana Effendy, this study identifies key strategies for creating accessible, humorous, and responsible parody content. These strategies include targeting Millennials and Generation Z (MZ) fans of anime pop culture, utilizing visually engaging social media platforms, positioning parody content as an initial engagement tool (bait), aligning content with current issues, and verifying content with experts before publication.
No abstract available
Even though populism is arguably one of the most researched topics in contemporary political science, the study of its communication is disproportionately focused on its verbal and written dimensions. In recent years, an increasing number of studies have explored its visual dimension, highlighting its importance in the process of meaning-construction and the interaction between political actors and citizens. In this state-of-the-art review, we discuss the importance of analysing visual communication and how it relates to the main approaches to the study of populism. Then, we outline the main works conducted on the visual politics of populism and suggest some potential directions for future research. The review reveals that existing research has primarily focused on the content of visual populism, highlighting, in particular, the role of images in constructing ‘the people’ and ‘the enemy’ and in depicting populist leaders as ordinary yet exceptional figures. More research is needed on how images are produced and on who receives and interprets them. Beyond content analysis, future research should adopt a broader range of methodological tools to fully explore populist visual communication.
The purpose of the article is to reveal the features of the development of visual communication design in China at the end of the 20th century, in the context of the policy of openness and active engagement with international professional communities. Methodologically, the study combines general scientific and special research methods; complex and analytical methods made it possible to examine the subject holistically and consistently across its various aspects. China experienced a difficult period of political and social transformation that significantly influenced the content and imagery of visual communication design throughout the 20th century. It was the changes in the country's political arena that allowed visual communication design in the 1980s and 1990s to develop anew and establish itself as an important professional field over the last three decades. International communication, economic growth, and technological development, inspired by the policy of reform and openness, activated all modern directions of graphic design and contributed to the formation of industry associations and institutions. The openness of Chinese designers to Western practices in the context of globalization created conditions for the direct borrowing of universally accepted approaches and solutions, reducing the value of their own traditions; hence a noticeable intention to fill the visual language of design with national, local content. The study concludes that the intensification of visual communication design in China at the end of the 20th century, inspired by political change under reform and openness, helped the industry assert itself at the national and international levels and strengthened its status and influence. From the early 1980s to today, the gradual standardization of graphic design in academic organization, scientific discourse, and related disciplines has accumulated professional reserves for a new stage and created an important platform for further development.
Political communication at the campaign level, as with candidates for the Regional Representative Council (DPD) like Komeng in West Java, utilizes visual media strategies such as ballot papers as significant campaign tools. This research adopts a quantitative approach, with primary data gathered through interviews and documentation from key informants, including members of the KPU (General Election Commission), Bawaslu (Election Supervisory Agency), TKN (Campaign Team), and the general public. A simple random sampling method was employed to collect data, which was then analyzed using straightforward data-processing techniques. The findings indicate that the political communication conducted by Komeng created a stage for political entertainment amidst intense political competition. This concept of political entertainment, highlighting cheerfulness amidst political tension, resonated strongly with voters emotionally. The strategy successfully garnered public sympathy and increased Komeng's presence in the political arena, and visual communication leveraging Komeng's humorous character provided a positive affective stimulus to voters, fostering positive interactions with them.
No abstract available
This article is an invitation to engage with the small ‘p’ politics of visual political communication by highlighting the importance of both culture and history, in order to gain greater understanding of how images and the visual more broadly may ‘work’ on us and contribute to our imaginaries as well as our understanding of political messages and political life as a whole. Specifically, the article aims to encourage scholars in this field to engage less with strategy and tactics or persuasion and effects to delve more deeply into why and how visual meanings become politically powerful over time and in particular contexts. In doing so, the article foregrounds the work of two major scholars of the visual, Stuart Hall and Michel Pastoureau, and promotes an approach focusing on the more seemingly mundane, taken-for-granted and everyday meanings and practices underlying visual political communication. To demonstrate this approach, the article offers an in-depth discussion of the photograph used in the ‘Breaking Point’ poster at the centre of the political campaign which was launched by UKIP leader Nigel Farage in the run-up to the 2016 Brexit referendum.
This study provides an understanding of how headloading practice signifies subaltern voices in Nigeria and Africa, among others. The study has sought to interpret the practice of headloading drawing from its images and representations from selected sources, which include Facebook and catalogues of artwork. This interpretive study constructs meaning through a discursive insight around the social practice of headloading. It expands interpretation and ideological structures to include social, economic, and political applications of the headloading subject within the framework of visual discourse and metaphor. This study directs a rethinking of headloading while underpinning the notion of ‘subalternity.’ While not necessarily asking for abolishing the social practice of headloading, the study communicates an understanding that the social phenomenon is a constant symbol of class and power status that many Africans have experienced. Headloading further ‘metaphorises’ the interaction between developed and developing societies (the Global North and South), and underscores related imports of colonisation.
Discussions and debates on Euro-Atlanticism are taking place in Bulgarian society, particularly on the social network Facebook, and they unfold on both a verbal and a visual level. Virtual communication has many manifestations, among them posters, memes, parodies and paraphrases; others are copies of slogans and posters from rallies and demonstrations, which are perceived as part of the expression of different opinions in Bulgarian civil society. The article investigates these manifestations through a rhetorical analysis that is an adapted version of established methods. The corpus was collected over a one-year period, from March 2022 to March 2023, and is heterogeneous precisely because of the specificity of the objects: verbal messages, visual images, election posters, political platforms, media statements, etc. The assumption is that there is no unanimous opinion in Bulgarian society about Euro-Atlanticism and that some groups are skeptical about Bulgaria's EU and NATO memberships. The rhetorical analysis, focused on deriving the messages at the verbal and visual level and their effects in a specific political context and pre-electoral campaign, provides opportunities to identify manifestations of Euroscepticism in a virtual environment. The analysis does not claim to be exhaustive but presents results based on a study, in a virtual environment in Bulgaria, of the differences over Euroscepticism among different Bulgarian political parties.
This paper addresses how global climate movements use images in their social media communication from a comparative perspective: how have Fridays For Future and Extinction Rebellion in Italy, Germany, Sweden and Hungary evolved their use of visual communication on Instagram between 2018 and 2024? We argue that three lines of analysis are important for a comprehensive understanding of the relation between image content and protest movements: a) complementary movements, b) complementary countries and c) longitudinal observation. We explore these lines of analysis through a mixed-method analysis of the full set of images shared by FFF and XR on Instagram in the period 2018-2024.
This paper explores how images are used in online far‐right political communication to create distinct groups of “otherness.” Focusing on the Danish People's Party, we look at how symbolic boundaries are constructed through images to emphasize an exclusive conception of the nation and its citizens, who need protection from the threatening “others.” In order to understand the global rise of the far right, scholars of social movements and digital media have called for new research on how visual images serve the mainstreaming of extremist and nationalist beliefs online. We look at images communicated by the Danish People's Party on their Facebook page, exploring how digital images visually communicate the party's slogan of “Safety and trust” (in Danish: “Tryghed og tillid”). With a focus on boundary construction, we present a multimodal visual analysis of 1120 images posted by the party from 2012 to 2020. The data shows how the party constructs an imaginary of Danishness through an exclusionary impermeable boundary construction of a trusted in‐group's values and traditions in opposition to culturally distinct “others.”
Social media often follow a visual logic found to increase engagement, as images are more likely to attract attention, presenting information on a holistic-associative basis. For a political entity like the EU, social media are a promising route to overcome the remoteness to its citizens, identified as one of the crucial challenges to its public legitimacy. Against this broader background, our study analyses the influence of 10 years of EU visual social media communication on user engagement as an indicator of successfully creating visibility in a crucial communication space. For this purpose, we conducted an image-type analysis, combining quantitative and qualitative features of visual analysis: First, a subsample of posts was inductively analysed to identify recurring image types and subsequently used to implement a manual quantitative visual content analysis. Building on the results, we drew on a machine learning approach, allowing us to analyse over 40,000 posts, including more than 20,000 pictures. Our results emphasise the crucial influence of social media affordances in explaining user engagement with EU visual social media communication. Implications are discussed with reference to the ongoing discussion about the EU’s democratic deficit.
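A sketch of the scaling step such a pipeline implies: manual coding of a subsample, then a pretrained vision backbone plus a lightweight classifier applied at scale. The backbone choice, file name, and feature handling below are assumptions, not the study's actual setup:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained CNN as a feature extractor; a lightweight classifier trained on
# the manually coded subsample (not shown) would consume these features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose 2048-d features instead of logits
backbone.eval()

with torch.no_grad():
    img = preprocess(Image.open("post_image.jpg").convert("RGB")).unsqueeze(0)
    features = backbone(img)  # feed to e.g. scikit-learn LogisticRegression
print(features.shape)  # torch.Size([1, 2048])
```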
The integration of digital platforms into government welfare scheme communication represents a paradigm shift in public relations and citizen engagement in the Global South, particularly within the Indian context. This systematic review examines the challenges and opportunities in utilizing digital platforms—including social media, e-government portals, and integrated digital infrastructure—for effective public relations of welfare schemes. Drawing on 56 peer-reviewed sources and emerging case studies from 2012 to 2024, this review analyzes the evolution of digital government communication, stakeholder ecosystem dynamics, and comparative platform efficacy. Findings reveal a complex landscape characterized by opportunities in real-time engagement, inclusive reach, and transparency enhancement, countered by persistent challenges in digital divide mitigation, misinformation management, and equitable access. Theoretical frameworks from Mergel (2012), Hussin et al. (2024), and e-Government 2.0 models provide analytical scaffolding. India's MyScheme platform (2.34 crore citizens integrated by October 2024) exemplifies both technological advancement and implementation challenges. The paper advocates for a holistic, stakeholder-centric approach integrating multiple digital channels while addressing accessibility, trust-building, and behavioral change mechanisms. Implications extend to policymakers, communications professionals, and development practitioners navigating digital welfare ecosystems in resource-constrained contexts.
The military and civilian positions in Indonesia have strict limits, where military members who enter the realm of government must renounce their military titles. This phenomenon occurred in several figures, including Prabowo Subianto, Andika Perkasa, and Sturman Panjaitan. This study aims to analyze the visual communication strategies employed by the three candidates in constructing their self-presentation and visual framing through Instagram. With a quantitative approach through visual content analysis, this study identifies characterization patterns that emerge as both The Ideal Candidate and The Populist Campaigner. The findings show that Prabowo Subianto, as a candidate for head of state, highlights the character of The Ideal Candidate by creating the impression of a leader who is firm, strong, visionary, and has a broad reach. Andika Perkasa, as a candidate for regional head, displays the character of The Populist Campaigner with a relaxed and simple image, creating the impression of a leader who is close to the people and does not distance the community. Sturman Panjaitan, as a candidate for legislative member, more closely displays the character of the Ideal Candidate by conveying an impression of authority and formality. This research also demonstrates that in visual communication strategies, the role of position influences the construction of self-presentation in relation to different responsibilities, authority, and reach. Thus, visual framing is a crucial tool in the political communication strategy of candidates with a military background in the civilian realm.
Wealth inequality is deepening in many countries around the world, presenting increasing challenges to public notions of fairness while simultaneously proving resistant to democratic intervention. This article looks at one element of the politics of wealth inequality which has so far received relatively little attention: visual representations in political communication. The authors collected an original dataset of 243 images posted on Facebook by UK news media and civil society organizations to explore how different actors visually represent the problem of wealth inequality. They used content analysis to demonstrate that news media in particular tends to visualize inequality through images of wealth itself, such as luxury goods and property, whereas civil society more often tries to contrast wealth and poverty. They conducted social semiotic analysis on two sets of recurring tropes to investigate the complex trade-offs in how visual content frames inequality, whether through an ambivalent focus on the super-rich or a claim to objectivity and completeness through bird's-eye aerial photography.
No abstract available
The article presents an interdisciplinary analytical framework contributing to the growing research field of visual political communication, focusing on the case of the social media images published by Italian politicians during the 2024 European elections campaign (May–June 2024). In the first part, the article outlines the context of the analytical framework at the intersection of three main research fields: political communication, in particular the study of electoral campaigns via social media; visual culture and communication, precisely the analysis of the visual representation, self‐representation, and counter‐representation of political leaders; and computer science, in particular the application of machine learning techniques for computer vision to recognize and categorize visual political content. In the second part, the article offers an application of the analytical framework by sharing some empirical results of a quantitative and qualitative analysis of the visual content published by 21 Italian political actors on Facebook and Instagram during the campaign, focusing on their main visual formats, themes, and strategies of representation of political leadership. In the analysis, deep learning models are also employed to detect specific image characteristics by cross‐referencing their outputs with manual cataloguing performed on the same images and for the same attributes. In the end, on the basis of the research carried out, the article suggests possible paths for future interdisciplinary analysis of online visual political communication.
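The cross-referencing of model outputs against manual cataloguing amounts to an agreement computation; a sketch with invented labels, using chance-corrected agreement (Cohen's kappa):

```python
from sklearn.metrics import cohen_kappa_score

# Manual codes vs. automatic labels for the same images; the label values
# here are invented for illustration, not the article's coding scheme.
manual = ["leader", "crowd", "leader", "text", "crowd", "leader"]
model  = ["leader", "crowd", "crowd",  "text", "crowd", "leader"]

print(cohen_kappa_score(manual, model))  # chance-corrected agreement
```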
Visual political communication on Instagram: a comparative study of Brazilian presidential elections
In today’s digital age, images have become powerful tools for politicians to engage with their voters on social media platforms. Visual content possesses a unique emotional appeal that often leads to increased user engagement. However, research on visual communication remains relatively limited, particularly in the Global South. This study aims to bridge this gap by employing a combination of computational methods and qualitative approach to investigate the visual communication strategies employed in a dataset of 11,263 Instagram posts by 19 Brazilian presidential candidates in 2018 and 2022 national elections. Through two studies, we observed consistent patterns across these candidates on their use of visual political communication. Notably, we identify a prevalence of celebratory and positively toned images. They also exhibit a strong sense of personalization, portraying candidates connected with their voters on a more emotional level. Our research also uncovers unique contextual nuances specific to the Brazilian political landscape. We note a substantial presence of screenshots from news websites and other social media platforms. Furthermore, text-edited images with portrayals emerge as a prominent feature. In light of these results, we engage in a discussion regarding the implications for the broader field of visual political communication. This article contributes by showing the ways Instagram was used in the digital political strategy of two fiercely polarized Brazilian elections, shedding light on the ever-evolving dynamics of visual political communication in the digital age. Finally, we propose avenues for future research in the field of political communication.
This study examines the role of political posters in shaping public opinion, identity, and civic participation. By combining images and words, posters influence emotions and decision-making, with examples ranging from historical propaganda to modern election campaigns. Using a qualitative descriptive approach supported by case studies and a literature review, the research analyzes examples such as Shepard Fairey’s Hope poster from Obama’s 2008 campaign, Bogotá’s 2019 mayoral election, Taiwan’s referendum campaigns, and wartime propaganda. The findings show that design elements such as color, layout, and facial imagery strongly affect audience attention and memory, while slogans and short text reinforce first impressions. Cultural and social contexts further shape how these messages are received, with case studies revealing that posters not only attract attention but also build group identity and motivate participation. In the digital era, posters remain persuasive tools that circulate rapidly through social media, raising both opportunities for engagement and ethical concerns about manipulation and misinformation. Their continued effectiveness highlights the enduring importance of posters as instruments of persuasion, cultural expression, and civic engagement.
The growing interest in political leaders’ visual communication often emphasizes specific visual features without focusing on the driving factors behind these strategies. Our study introduces the Visual Opportunity Structure (VOS) theory, aiming to explain the use of specific visual elements based on their suitability within the socio-political context. We examined the COVID-19 pandemic, analyzing a large dataset (N = 73,379) of Instagram posts by 28 European national party leaders coded through automatic facial and emotional recognition algorithms. The findings reveal a negative link between the use of inappropriate visual features during pandemic waves, like depicting happiness and groups of people, and the severity of the pandemic’s impact. Political leaders significantly reduce these inappropriate visuals during severe waves, reintroducing them in calmer periods. This trend is particularly pronounced among government party leaders. Our research not only unveils a pattern in the visual communication tactics used by political figures during the pandemic but also provides deeper insights into how visual strategies align with the broader context. By shedding light on these nuances, the study contributes to a more comprehensive understanding of visual political communication online.
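As an illustration of automatic facial and emotional coding at this scale, a sketch using the off-the-shelf deepface library; the library choice, image path, and return format are assumptions, not necessarily the authors' toolchain:

```python
from deepface import DeepFace  # one off-the-shelf option; the library choice
                               # is ours, not necessarily what the authors used

# Code a post image for faces and dominant emotion, mirroring the kind of
# automatic facial/emotional coding the study applies at scale.
results = DeepFace.analyze(img_path="leader_post.jpg",
                           actions=["emotion"],
                           enforce_detection=False)  # tolerate face-free images

# Recent deepface versions return one dict per detected face.
for face in results:
    print(face["dominant_emotion"], face["emotion"])
```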
Politicians often rely on in-group markers of identity, aiming to signal their ideology to specific segments of the electorate. The impact of this practice on the electorate is insufficiently understood. In particular, we know markedly little about the effects of visual cues, a domain of utmost importance on social media. With previous research indicating that some politicians use more visual cues as identity markers than others, we set out to assess whether the effects of visual cues vary based on how much a politician uses them. We used an experiment conducted in Germany (N = 655) to test whether the use of visual cues by a fictitious politician impacted citizens’ attitudes and voting intentions, depending on the interplay between strength of use (i.e. how many liberal or conservative cues a politician uses) and citizens’ political ideology. A balanced sample of participants—one-third each self-categorized as liberal, moderate, or conservative—was randomly allocated to one of 13 conditions denoting various strengths of use, ranging from very liberal to very conservative. Analyses indicated substantial effects of strength of use on attitudes and voting intentions, yet only for liberal participants. Only liberals used the information provided by visual cues when evaluating the politician.
More than ten years ago, Schill’s (2012) review article was published on the visual aspects of political communication aiming to increase research in this field. It seems that scholars have reacted to this call in the last decade. The present article argues that in the last ten years, visual political communication (VPC) has been affected by technological advances, and with the proliferation of the internet and social media, political communication has become even more visual. As Schill’s (2012) article predated this period, a new review seems to be timely. To that end, a combination of a systematic and narrative review is provided to highlight the results and developments in this area. Findings suggest that the rise of social media has brought changes to VPC, which have been reflected in the literature by focusing on key concepts in contemporary political communication: personalization, populism, gender-related issues, and the effects of VPC on citizens, separately on social media and in television.
In today's digital age, images have emerged as powerful tools for politicians to engage with their voters on social media platforms. Visual content possesses a unique emotional appeal that often leads to increased user engagement. However, research on visual communication remains relatively limited, particularly in the Global South. This study aims to bridge this gap by employing a combination of computational methods and qualitative approach to investigate the visual communication strategies employed in a dataset of 11,263 Instagram posts by 19 Brazilian presidential candidates in 2018 and 2022 national elections. Through two studies, we observed consistent patterns across these candidates on their use of visual political communication. Notably, we identify a prevalence of celebratory and positively toned images. They also exhibit a strong sense of personalization, portraying candidates connected with their voters on a more emotional level. Our research also uncovers unique contextual nuances specific to the Brazilian political landscape. We note a substantial presence of screenshots from news websites and other social media platforms. Furthermore, text-edited images with portrayals emerge as a prominent feature. In light of these results, we engage in a discussion regarding the implications for the broader field of visual political communication. This article serves as a testament to the pivotal role that Instagram has played in shaping the narrative of two fiercely polarized Brazilian elections, casting a revealing light on the ever-evolving dynamics of visual political communication in the digital age. Finally, we propose avenues for future research in the realm of visual political communication.
Despite the growing attention to visual political communication (VPC), we still know little about how visuals are produced by populist and non-populist actors. This article addresses this gap through in-depth semi-structured interviews with high-profile members of the communication teams of the major Italian political parties and leaders—that is, the experts shaping their communication strategies. Our exploratory study offers an unprecedented “insider” perspective on the conception, design, and deployment of political visuals, providing new insights into VPC. Notably, the findings challenge common assumptions about the distinctiveness of populist VPC, revealing that hallmark features often associated with populism—such as emotionally charged imagery and specific chromatic and stylistic choices—are frequently adopted by non-populist actors as well. This convergence suggests that social media platforms incentivize all political actors to adopt particular visual strategies. Consequently, it is misleading to consider specific visual elements intrinsically populist; instead, VPC appears primarily as a strategic choice.
This study examines how social media users engage with visual populist content on Instagram and TikTok, specifically focusing on ‘de-demonization’ strategies used by party leader Riikka Purra of the populist radical right-wing Finns Party. The study investigates how these strategies have been received among a young audience who oppose the populist party. By combining eye-tracking data with facial expression analysis, this research offers a novel methodological approach to understanding visual attention and emotional responses. The results reveal insights into the effectiveness of visual populist communication, suggesting that, while visual elements are often emphasized, captions and user-generated comments play a significant role in shaping emotional engagement. These findings underscore the need for a comprehensive approach to analysing social media content, especially in the context of visual populist strategies on platforms like Instagram and TikTok.
Amid the recent boom in support for populist movements in Europe, social media seems to be the ideal place for their interaction with the public. While Facebook has been thoroughly explored as a venue for populist campaigning, research on the visual aspects of populist communication remains scarce. Analysing the 2019 European Parliament campaign, this study seeks to determine the distinct characteristics of a populist visual communication style and how it differs from that of non-populist parties. Applying quantitative content analysis to the images (N = 997) posted on Facebook by political parties from 28 countries, we show that similarities predominate in the two communication styles. Although populists demonstrated a higher propensity to depict their leader and to use national symbols, these were exceptions to the overwhelming evidence of uniformity in campaigning methods. Hence, we argue that, despite its visibility in textual content, populist communication does not manifest explicitly through images.
What does the growing popularity of audio-visual platforms and vertical video mean for visual political communication? I address the opportunities and challenges of TikTok and related platforms for news media, political actors, citizens and researchers, and briefly discuss possible avenues for future academic work. These include questions related to source credibility and media literacy, the assessment of attention versus exposure, political learning and personalization. I argue that how our field engages with these questions will be decisive in the near future.
How do specific sociopolitical and cultural contexts influence the image-making strategies of heads of state on social media? Through a hybrid quantitative and qualitative visual analysis, this study highlights the ways in which the political leaders of two countries with vastly different cultural contexts, Spanish Prime Minister Pedro Sánchez and Indian Prime Minister Narendra Modi, used the social, political, geographical, and cultural particularities of their countries to present themselves visually on Instagram and appeal to the public. The findings suggest that Sánchez and Modi have leveraged Instagram's structural and functional properties to stage political performances infused with cultural markers, to spotlight specific facets of their identity attributes and character traits, and to roll out strategic visual narratives conveying their political values and their stances on political and policy issues of importance to their target audiences. This study contributes to understanding the role of visual politics in social media-based politicking and how this type of strategic communication builds on cultural cues to frame personalized political identities.
No abstract available
This article examines the role played by digital platforms in the transformation of the audio-visual industry in India. Are video-on-demand platforms contributing to India’s growing dependence on global players or are they asserting the diversification of domestic players and the progress of Indian capitalism in the cultural and digital industries? To answer, we analyse the strategies of competition and collaboration between historical audio-visual players versus communication players, the dynamics of foreign ownership and the content localisation strategies of global players. We conclude that the study of digital platforms offers an important insight into new forms of economic and cultural hegemony in the cultural industries.
Using the case of the 2019 European election, this study compares the visual self-depiction of female and male political candidates from all 28 European Union member states on social networking sites with their depiction in the news coverage. It thereby investigates to what degree the news coverage and politicians' self-depiction employ visual gender stereotypes. Moreover, the study presents results on differences in the depiction of male and female candidates across party lines. With the help of computer vision, we demonstrate that, while differences between progressive and conservative candidates are scarce, there are clear differences in the depiction of female and male politicians. These differences resemble emotional gender stereotypes, especially in that women are more often depicted as happy. Overall, the study demonstrates that female political communication is still distinct from male political communication, both in candidates' self-representation and in the media's portrayal of them.
This article reviews images of people of Asian descent wearing masks in popular-press articles discussing mask shortages and argues that this visual framing had the potential to fuel racial antagonism during the initial months of COVID-19's spread across the United States. Technical communicators need to include globalized perspectives in educational materials about masks as an advocacy strategy that can help communities and individuals navigate the crisis and better protect themselves and those around them.
The final grouping constructs a complete body of knowledge spanning the technical level (NLP-based detection, generative-AI analysis), the behavioral level (state actors, coordinated inauthentic behavior), and the socio-psychological and governance level (cognitive biases, legal regulation, societal resilience). It places particular emphasis on the distinctiveness of visual political communication and of domain-specific empirical work (e.g., health and climate). The classification scheme effectively integrates multilingual and multimodal research trends and covers the full cycle of disinformation governance, from foundational theory to macro-level policy.