Computational Narrative Research in Political Communication
Methodological Innovation in Computational Narrative and Political Text Mining
This group of studies focuses on the methodological toolkit of computational social science, examining how large language models (LLMs), self-supervised sentiment analysis, topic modeling (LDA), and corpus tools can be used to extract and quantify political stances, parliamentary debates, and ideological narratives with precision.
- Self-Supervised Sentiment Analysis in Spanish to Understand the University Narrative of the Colombian Conflict(Paula Andrea Rendón Cardona, Julián Gil-González, Páez Valdez Julián, Mauricio Rivera Henao, 2022, Applied Sciences)
- The Development and Psychometric Properties of LIWC2015(James W. Pennebaker, Cindy K. Chung, Molly Ireland, Amy Gonzales, 2015, Texas ScholarWorks (Texas Digital Library))
- Learning extraction patterns for subjective expressions(Ellen Riloff, Janyce Wiebe, 2003, No journal)
- Leveraging Social Network Analysis and Cyber Forensics Approaches to Study Cyber Propaganda Campaigns(Samer Al-khateeb, Muhammad Nihal Hussain, Nitin Agarwal, 2018, Lecture notes in social networks)
- A model for the Twitter sentiment curve(Giacomo Aletti, Irene Crimaldi, Fabio Saracco, 2021, PLoS ONE)
- An Artificial Intelligence Application of Theme and Space in Life Writings of Middle Eastern Women(Nurul Najiha Jafery, Pantea Keikhosrokiani, Moussa Pourya Asl, 2022, Advances in computational intelligence and robotics book series)
- Can Large Language Models Transform Computational Social Science?(Caleb Ziems, William A. Held, Omar Ahmed Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang, 2023, Computational Linguistics)
- 政治语篇翻译的叙事建构(覃玉荣, 王 卓, 郑 雅, 陈巧钰, 2018, 现代语言学)
- From text to political positions : text analysis across disciplines(Piek Vossen, 2014, No journal)
- Topic Modeling in Management Research: Rendering New Theory from Textual Data(Timothy R. Hannigan, Richard Franciscus Johannes Haans, Keyvan Vakili, Hovig Tchalian, Vern Glaser, Milo Shaoqing Wang, Sarah Kaplan, P. Devereaux Jennings, 2019, Academy of Management Annals)
- Get out the vote(Matt Thomas, Bo Pang, Lillian Lee, 2006, No journal)
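As an illustration of the stance-extraction theme running through this group, here is a minimal supervised text-to-position sketch (TF-IDF features feeding a linear classifier), in the spirit of the "from text to political positions" line of work. The tiny corpus and labels are hypothetical and not drawn from any of the cited studies.

```python
# A minimal, hypothetical sketch of classifying political position from text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: short political statements with made-up stance labels.
train_texts = [
    "we must cut taxes and shrink government",
    "deregulation frees markets and rewards enterprise",
    "expand public healthcare and raise the minimum wage",
    "invest in social welfare and workers' rights",
]
train_labels = ["right", "right", "left", "left"]

# TF-IDF turns each statement into a weighted bag of words;
# logistic regression then separates the two stance labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["expand welfare and public services"])[0])
```

Real studies in this cluster replace the toy labels with expert-coded manifestos or debate speeches and the linear model with richer feature sets or LLM-based classifiers, but the extract-features-then-classify shape is the same.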
Computational Propaganda, Information Manipulation, and Digital Authoritarianism
This group examines malicious narratives and information interventions in digital media environments, including social bot detection, coordinated disinformation campaigns, troll attacks on dissidents, and authoritarian states' use of digital technology for opinion control and diplomatic influence.
- Can Public Diplomacy Survive the Internet?: Bots, Echo Chambers, and Disinformation(Shawn Powers, Markos Kounalakis, 2017, Insecta mundi)
- Disinformation as Collaborative Work(Kate Starbird, Ahmer Arif, Tom Wilson, 2019, Proceedings of the ACM on Human-Computer Interaction)
- From COVID-19 Treatment to Miracle Cure(Stephanie Alice Baker, Alexia Maddox, 2022, M/C Journal)
- Online Human-Bot Interactions: Detection, Estimation, and Characterization(Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, Alessandro Flammini, 2017, Proceedings of the International AAAI Conference on Web and Social Media)
- This Just In: Fake News Packs A Lot In Title, Uses Simpler, Repetitive Content in Text Body, More Similar To Satire Than Real News(Benjamin D. Horne, Sibel Adalı, 2017, Proceedings of the International AAAI Conference on Web and Social Media)
- Internet Trolls against Russian Opposition: A Case Study Analysis of Twitter Disinformation Campaigns against Alexei Navalny(Iuliia Alieva, Kathleen M. Carley, 2021, 2021 IEEE International Conference on Big Data (Big Data))
- Deflating the Chinese balloon: types of Twitter bots in US-China balloon incident(Lynnette Hui Xian Ng, Kathleen M. Carley, 2023, EPJ Data Science)
- Digital Authoritarianism in the Middle East: Deception, Disinformation and Social Media(Gabriele Cosentino, 2022, Bustan The Middle East Book Review)
Cross-Cultural Narrative Construction and National-Image Communication Strategies
This group studies the international communication of national image (particularly China's), covering Belt and Road strategic narratives, the effect of cultural-dimension differences on communication outcomes, national identity as seen by foreign youth, and narrative adaptation strategies across media environments.
- 基于文化维度与语境理论的中国故事跨文化传播策略研究(全继刚, 许子言, 2026, 现代语言学)
- Analyzing and Strategizing the Belt and Road Initiative Discourse on Twitter: A Topic Mining and Social Network Approach(Haisheng Hu, 2025, Chinese Political Science Review)
- 基于情感分析的中国网络文学海外传播研究——以《凡人修仙传》为例(吴祎波, 2025, 世界文学研究)
- “他者”视角下中国文化传播与国家形象构建研究(沈 玥, 马璐玮, 2025, 现代语言学)
- 云南自媒体在国际社交媒体平台的跨文化传播效果影响因素分析(朱辰熹, 石成成, 范秋桐, 2024, 社会科学前沿)
- 评价理论视角下青少年抑郁症的新闻报道话语分析(张凌云, 2023, 现代语言学)
Social Media Opinion Dynamics, Emotional Mobilization, and Public Framing
This group explores how public opinion evolves on specific platforms (e.g., Twitter, Douyin, YouTube), focusing on emotional mobilization around public issues (vaccines, immigration, social movements), the evolution of narrative frames, and the construction of collective identity.
- 抖音平台热点事件中的情感动员与抗争实践——以公众文案接力传播现象为例(崔俊丽, 姜洪伟, Unknown Journal)
- 社交媒体舆情信息热度和情感强度对传播意愿的影响——情绪与感知可信度的双重中介机制(杨 颖, Unknown Journal)
- 基于LDA主题模型的网络谣言事件分析与策略处置(徐仙伟, 冯培尧, 2025, 计算机科学与应用)
- #AllforJan: How Twitter Users in Europe Reacted to the Murder of Ján Kuciak—Revealing Spatiotemporal Patterns through Sentiment Analysis and Topic Modeling(Tamás Kovács, Anna Kovács-Győri, Bernd Resch, 2021, ISPRS International Journal of Geo-Information)
- The first two months in the war in Ukraine through topic modeling and sentiment analysis(Clara Maathuis, Iddo Kerkhof, 2023, Regional Science Policy & Practice)
- Agenda-setting effects for covid-19 vaccination: Insights from 10 million textual data from social media and news articles using BERTopic(Hyunsang Son, Young Eun Park, 2025, International Journal of Information Management)
- Strange Frame Fellows: The Evolution of Discursive Framing in the Opt-Out Testing Movement(Richard Paquin Morel, 2021, Teachers College Record The Voice of Scholarship in Education)
- “Why Drones for Ordinary People?” Digital Representations, Topic Clusters, and Techno-Nationalization of Drones on Zhihu(Andrea Hamm, Zihao Lin, 2019, Information)
- A Text Mining Approach to Determinants of Attitude Towards Syrian Immigration in the Turkish Twittersphere(Hüseyin Zeyd Koytak, Muhammed Hasan Çelik, 2022, Social Science Computer Review)
Theoretical Models of Political Communication and Macro-Level Governance Resilience
Starting from theoretical frameworks and macro-level security, this group examines power interactions in political communication (e.g., the cascading activation model), controversy mapping, and the complexity narratives and systemic resilience the internet exhibits in responding to global crises.
- Cascading Activation: Contesting the White House's Frame After 9/11(Robert M. Entman, 2003, Political Communication)
- Controversy Mapping. A Field Guide(Tommaso Venturini, Anders Kristian Munk, 2021, VBN Forskningsportal (Aalborg Universitet))
- Resilience, Emergencies and the Internet(Mareile Kaufmann, 2017, No journal)
AI-Driven Ideological-Political Education and Discourse Reconstruction
This group focuses on the digital transformation of ideological and political education in the Chinese context, examining how new technologies such as generative AI and the governance of online cliques are reshaping education and strengthening the appeal of mainstream political narratives among young people.
- 网络圈层化视域下高校思想政治教育话语的路径优化(樊 雪, 夏小华, 2025, 教育进展)
- 生成式人工智能赋能“大思政课”创新路径研究(吴莲红, 2025, 社会科学前沿)
- 5W传播模式下的高校思想政治教育网络传播力评价研究(李 莹, 2022, 教育进展)
The research framework produced by this synthesis covers political communication from micro-level technical tools to macro-level governance narratives. The core trends are: computational methods (LLMs, multimodal analysis) are becoming deeply embedded in political text analysis; the research focus is shifting from simple opinion monitoring toward defensive research on computational propaganda and information manipulation; and national-image narratives increasingly emphasize cross-cultural adaptation and multiple perspectives. The report also highlights the key role of intelligent technology in educational transformation and social resilience, showing that political communication research in the computational era is moving toward greater predictive power, capacity for intervention, and governance effectiveness.
A total of 40 related publications.
This study focuses on the "Golden Eyes Award"-winning short films from the 2021-2023 "Looking China Youth Film Project," using corpus methods to explore how foreign youth communicate Chinese culture and construct China's national image. Adopting a mixed-methods design, with Fairclough's three-dimensional model as the primary framework and multimodal discourse analysis as a supplement, the study mainly examines the films' verbal mode, combined with sound, image, and other modal systems, to analyze the communication of Chinese culture and the construction of China's image, and to describe how foreign youths' points of interest in Chinese culture shifted over these three years. The findings are: 1) expressions related to "tea," "family," and "rice" appear with high frequency, highlighting foreign youths' particular attention to Chinese culture; 2) through filmmaking, the identity of the cross-cultural narrator and new-media distribution enhance the authenticity and reach of cultural communication. The study contributes to a fuller understanding of how China uses multimodal means to improve the effectiveness of cultural communication and advance its soft power.
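The corpus step behind surfacing high-frequency expressions such as "tea," "family," and "rice" can be sketched as a simple content-word count; the sample "transcripts" and stopword list below are hypothetical stand-ins for the films' actual subtitle corpus.

```python
# A minimal sketch of the frequency step in a corpus-based reading:
# tokenize, drop function words, and surface the most frequent content words.
from collections import Counter
import re

# Hypothetical stand-ins for the real subtitle texts.
transcripts = [
    "Tea connects my host family to their ancestors",
    "We planted rice and drank tea in the rice fields",
    "Family dinners always begin with a pot of tea",
]
stopwords = {"my", "their", "we", "and", "a", "of", "to", "the", "with", "in"}

tokens = [
    w for text in transcripts
    for w in re.findall(r"[a-z]+", text.lower())
    if w not in stopwords
]
top = Counter(tokens).most_common(3)
print(top)
```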
Studying the factors that shape the cross-cultural communication effectiveness of Yunnan self-media accounts on international social media platforms offers guidance for bringing Yunnan culture abroad and for building Yunnan into a hub radiating toward South and Southeast Asia. Based on the elaboration likelihood model, this paper applies content analysis and regression analysis to 427 videos from 7 Yunnan self-media accounts on YouTube. The results show that a video's topic, title strategy, title length, tags, duration, language, subtitles, emotional valence, narrative strategy, narrative style, and whether it belongs to a series all significantly affect communication effectiveness. From these results the paper derives practical implications: cultivate content topics in depth, attend to expressive technique, and optimize the production workflow.
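The content-analysis-plus-regression design can be sketched as ordinary least squares over coded video features; every feature value and outcome below is made up for illustration and does not reflect the study's data.

```python
# A minimal, hypothetical sketch of regressing an engagement outcome on
# coded video features, as in a content-analysis + regression design.
import numpy as np

# columns: has_subtitles (0/1), duration_minutes, is_series (0/1)
X = np.array([
    [1, 3.0, 1],
    [0, 8.0, 0],
    [1, 5.0, 1],
    [0, 12.0, 0],
    [1, 4.0, 0],
    [0, 9.0, 1],
])
y = np.array([9.2, 3.1, 7.8, 2.0, 6.5, 3.9])  # made-up log view counts

# Add an intercept column and fit ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(coef)  # intercept, then one coefficient per coded feature
```

A real replication would code all eleven features from the paper, check significance with standard errors, and control for channel-level effects; the fit step itself has this shape.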
Based on review texts of the English translation of the novel 《凡人修仙传》, this study uses sentiment analysis to examine the overseas reception of Chinese web fiction. It finds that positive reviews account for the largest share, with most readers showing strong interest in the cultivation theme and in Chinese culture, though there are also some negative comments on the plot and characters. The study shows that sentiment analysis can effectively quantify reader feedback in cross-cultural communication and reflects the significance of Chinese web fiction for cross-cultural communication.
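The review-scoring step can be sketched with a tiny lexicon-based polarity function; the lexicon and the sample reviews are hypothetical illustrations, not the study's actual method or data.

```python
# A minimal, hypothetical lexicon-based sketch of scoring reader reviews.
POSITIVE = {"love", "great", "fascinating", "immersive"}
NEGATIVE = {"boring", "repetitive", "flat", "confusing"}

def polarity(review: str) -> str:
    """Label a review by counting positive vs. negative lexicon hits."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "I love the cultivation lore and the fascinating worldbuilding",
    "the plot gets repetitive and the characters feel flat",
]
print([polarity(r) for r in reviews])
```

Production pipelines would swap the hand-made lexicon for a trained sentiment model, but aggregating per-review polarity into share-of-positive statistics works the same way.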
In the new-media environment, online rumors spread quickly and widely, posing a serious threat to social stability and to the credibility of public security organs. Taking the "Qin Lang lost his homework in Paris" incident as a case, this paper applies an LDA topic model to comments on the incident, using information collection and topic modeling to reveal the opinion characteristics and diffusion patterns of online rumors. On this basis it outlines a comprehensive rumor-governance strategy along six dimensions, including rumor verification, truth disclosure, and rumor removal, providing theoretical and practical reference for public security organs handling similar cases.
Online cliques, formed around shared occupations, interests, and locations, have become an important social habitat for today's young people. Although clique-based socializing can partly satisfy university students' spiritual and cultural needs, the barriers between cliques risk weakening the leading role of the subjects of ideological-political education discourse in universities, diluting the appeal of its content, and hollowing out the effectiveness of its strategies. In response, work can proceed along three lines: the discourse subject, the discourse content, and the discourse environment. Educators can build an in-clique presence and coordinate with opinion leaders and peer groups to strengthen identification with the discourse subject; innovate topic and symbol design and blend reason with emotion in expression to make the content more appealing; and use intelligent technology to build university online interaction platforms and filter harmful information to improve the discourse environment, thereby strengthening the leading power of ideological-political education discourse in universities.
The integration of narrative theory into translation studies, and its application to the translation of public narratives such as international publicity, news, international politics, and war, is associated above all with Mona Baker. Using translational narrative-framing strategies such as temporal and spatial framing, selective appropriation of material, framing by labelling, and the repositioning of participants and events, and combining them with statistics and analysis from AntConc 3.4.4, this study progressively reveals the narrative-framing features of the English translation of China's 2016 Government Work Report at both macro and micro levels, in its context, keyword choices, and translator-audience interaction; this is a novel attempt.
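AntConc's core concordance view (keyword-in-context) can be approximated in a few lines; the sample sentence below is a hypothetical stand-in for the report corpus, not a quotation from it.

```python
# A minimal KWIC (keyword-in-context) sketch, approximating the
# concordance view a tool like AntConc produces.
def kwic(tokens, keyword, window=2):
    """Return (left context, keyword, right context) for each hit."""
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

# Hypothetical sentence standing in for the corpus text.
text = "we will deepen reform and pursue reform of key sectors"
print(kwic(text.split(), "reform"))
```

Sorting the hits by left or right context, as concordancers do, then lets the analyst read off recurrent framing patterns around a keyword.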
The international communication of Chinese stories is often constrained by deep cultural differences. To analyze and respond to this challenge systematically, this study integrates Hofstede's cultural dimensions theory with Hall's high/low-context theory to build a two-dimensional "value orientation – communication symbol" analytical framework. The framework aims to move beyond a single theoretical lens and diagnose, at the levels of both value logic and information encoding, the structural adaptation difficulties Chinese stories face in cross-cultural communication. The study tests the framework empirically by building a country-specific language database covering eight countries and running cross-cultural controlled experiments. The results show that the reception barriers Chinese stories meet in Western cultures are essentially a systematic mismatch between "collectivist, high-context" narratives and "individualist, low-context" decoding habits; the fit between narrative strategy and the target audience's cultural coordinates (dimension and context) significantly and positively affects the cognitive, affective, and behavioral outcomes of communication. On this basis, the study proposes a systematic optimization strategy: shifting from one-way output to two-way precise adaptation, i.e., building on accurate cultural diagnosis to combine layered narrative logic, balanced communication symbols, coordinated system support, and technology-platform enablement. The theoretical contribution lies in advancing the integration and application of cross-cultural communication theory; the practical significance lies in providing an operational path for making the international communication of Chinese stories more precise, relatable, and effective.
Generative artificial intelligence is a core technology of the future, and the "Great Ideological-Political Course" is an innovative exploration of the theory and practice of ideological-political education in the new era, aiming to build a more comprehensive, in-depth, and multidimensional educational framework. Supported by generative AI, the course reconstructs historical narration, teaching methods, and coordination mechanisms, pushing its instruction toward intelligent transformation. Practice-teaching settings that blend the virtual and the real on the basis of intelligent algorithms deeply couple theoretical understanding with value internalization and overcome the spatio-temporal limits of traditional education. Generative AI also integrates educational resources and builds a collaborative framework of "full participation, full process, full coverage," improving the effectiveness of "three-wide education." At the same time, the technology faces challenges such as data-privacy leaks and lapses in algorithmic ethics; a governance system of value calibration, risk prevention, and effectiveness evaluation is needed to embed core socialist values into the technical logic and ensure that AI serves the fundamental goal of "cultivating people for the Party and the country."
The content characteristics of opinion information are important factors shaping how public opinion spreads on social media. From the perspective of the dual emotion-cognition processing system, this paper examines how two core content characteristics of social media opinion information, heat and emotional intensity, affect the willingness to share, and through what internal mechanisms. The results show: 1) the direct effect of information heat on sharing willingness, and its indirect effect via emotion, are both non-significant, but its indirect effect via perceived credibility is significant; 2) emotional intensity directly and positively affects sharing willingness, and also affects it indirectly through the dual mediation of emotion and perceived credibility. Finally, the paper discusses how these conclusions can be applied to opinion-governance practice.
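The mediation logic (an indirect effect of emotional intensity on sharing willingness via perceived credibility) can be sketched with the classic two-regression decomposition; the simulated data below merely encodes an assumed causal structure for illustration, not the study's measurements or estimates.

```python
# A minimal sketch of regression-based mediation: indirect effect = a * b,
# where a is X -> M and b is M -> Y controlling for X. Data is simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
intensity = rng.normal(size=n)                                  # X
credibility = 0.6 * intensity + 0.5 * rng.normal(size=n)        # M
willingness = 0.4 * intensity + 0.5 * credibility + 0.5 * rng.normal(size=n)  # Y

def ols(X, y):
    """Least-squares coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

a = ols(intensity, credibility)[1]                               # X -> M
b = ols(np.column_stack([intensity, credibility]), willingness)[2]  # M -> Y | X
print(f"indirect effect a*b = {a * b:.2f}")  # true value is 0.6 * 0.5 = 0.30
```

Published mediation analyses would add bootstrapped confidence intervals for a*b rather than relying on the point estimate alone.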
The growth of short-video platforms has provided a new arena for the fermentation of trending news events and for the expression and spread of public emotion, with increasingly complex media effects. On Douyin, as trending events ferment, a "copywriting + trending event" mode of transmission keeps emerging. Sentimental and infectious copywriting greatly mobilizes the public's willingness to share, driving viral, emotion-fueled spread of trending events, gradually building climaxes of collective social emotion, and at times triggering denunciatory emotional protest. From the perspectives of emotional mobilization and emotional protest, this paper explores the public's emotion-driven copywriting on Douyin and the collective emotional resonance and diffusion it produces. It analyzes the main causes of the phenomenon, including the emotional interaction space constructed by the platform, the collective emotional experience sparked by sentimental copywriting, emotional expression attuned to the social ecology, and emotional mobilization under intensifying social contradictions. It also discusses the consequences, such as empathy-driven surges in opinion heat, pressure that pushes events toward resolution, and the dissolution of the meaning of protest under emotional polarization.
The new-media era has given ideological-political education in universities a new environment and new conditions. Raising the online communication power of such education is an important way to combine it closely with network technology and keep it fresh and vital. Drawing on Lasswell's "5W" model of communication, the study builds an evaluation index system for the online communication power of university ideological-political education across five dimensions: communicator, audience, content, medium, and effect. It proposes a deep-learning-based evaluation method and conducts an empirical study of 200 university Communist Youth League Weibo accounts. The study finds that the overall level of online communication power is above average, while a small share of universities have substantial room for improvement; universities with stronger communication power have higher influence, update ideological-education content promptly and in diversified ways, interact well with students, and thereby reinforce the effect of online ideological-political education.
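Aggregating the five "5W" dimensions into one communication-power score can be sketched as a weighted sum; the weights and per-dimension scores below are hypothetical illustrations, not the study's calibrated values.

```python
# A minimal, hypothetical sketch of a composite "communication power" index
# over the five 5W dimensions. Weights and scores are made up.
WEIGHTS = {
    "subject": 0.15,   # communicator
    "audience": 0.15,
    "content": 0.30,
    "medium": 0.15,
    "effect": 0.25,
}  # weights sum to 1.0

def composite(scores: dict) -> float:
    """Weighted sum of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

account = {"subject": 70, "audience": 65, "content": 80,
           "medium": 60, "effect": 75}
print(round(composite(account), 2))
```

The study's deep-learning method effectively learns such an aggregation from data rather than fixing the weights by hand, but the resulting per-account index has this form.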
This study builds a specialized corpus of news on adolescent depression from major domestic newspapers, collecting news texts from the past three years. Within the framework of appraisal theory and using the UAM Corpus Tool for annotation, it examines the distribution of the three attitude resources, affect, judgement, and appreciation, in the corpus and their influence on public perception and on patients' self-perception. The study finds that, at the macro level, the distribution of attitude resources is "appreciation > affect > judgement," and that "adolescents with depression" are portrayed as a neutral-to-negative group overall. It suggests that the media should use more judgement resources and positive appreciation resources to increase public attention to people with depression, foster correct attitudes toward the illness, and support its effective prevention and treatment.
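The tallying of annotated attitude resources can be sketched as a counter over (span, category) pairs; the annotations below are hypothetical stand-ins for UAM Corpus Tool output, chosen so the toy distribution mirrors the "appreciation > affect > judgement" pattern.

```python
# A minimal sketch of tallying annotated attitude resources by appraisal
# category. The (span, category) pairs are hypothetical examples.
from collections import Counter

annotations = [
    ("sharp rise in cases", "appreciation"),
    ("worrying", "affect"),
    ("failed to seek help", "judgement"),
    ("complex condition", "appreciation"),
    ("distressed", "affect"),
    ("serious social problem", "appreciation"),
]
dist = Counter(cat for _, cat in annotations)
print(dist.most_common())
```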
“'Controversy Mapping' shows how we can use social research to bring controversies back to the surface of knowledge and public life, and how it can help to recover the power of controversy to transform what's possible. The book provides everything you need – the ideas, examples, and techniques – to start doing controversy analysis.” Noortje Marres, University of Warwick

“Venturini and Munk have produced a significant book that traces the genealogy of controversy mapping from its origins in actor-network theory to its incarnations in digital methods. Through a lucid and engaging narrative and a series of visualizations, they provide a comprehensive ‘field guide' to the major figures, theories, concepts, and methods that make up the practices of controversy mapping.” Evelyn Ruppert, Goldsmiths, University of London

As disputes concerning the environment, the economy, and pandemics occupy public debate, we need to learn to navigate matters of public concern when facts are in doubt and expertise is contested. "Controversy Mapping" is the first book to introduce readers to the observation and representation of contested issues on digital media. Drawing on actor-network theory and digital methods, Venturini and Munk outline the conceptual underpinnings and the many tools and techniques of controversy mapping. They review its history in science and technology studies, discuss its methodological potential, and unfold its political implications. Through a range of cases and examples, they demonstrate how to chart actors and issues using digital fieldwork and computational techniques. A preface by Richard Rogers and an interview with Bruno Latour are also included. A crucial field guide and hands-on companion for the digital age, "Controversy Mapping" is an indispensable resource for students and scholars of media and communication, as well as activists, journalists, citizens, and decision makers.
Abstract: As digitalization increases, countries employ digital diplomacy, harnessing digital resources to project their desired image. Digital diplomacy also encompasses the interactivity of digital platforms, which provides a trove of public opinion that diplomatic agents can collect. Social media bots actively participate in political events by influencing political communication and pushing coordinated narratives to shape human behavior. This article provides a methodology for identifying three types of bots (General Bots, News Bots, and Bridging Bots), then identifies these classes of bots on Twitter during a diplomatic incident involving the United States and China: the balloon incident of early 2023, in which a balloon believed to have originated from China was spotted across US airspace, with the two countries holding differing views on the balloon's function and eventual handling. Using a series of computational methods, the article examines the bots' impact on the topics disseminated, their influence, and their use of information maneuvers within the social communication network. Among other findings, all three types of bots are present in both countries; bots geotagged to the US are generally concerned with the balloon's location, while those geotagged to China discussed topics related to escalating tensions; and the bots perform positive narrative and network information maneuvers to differing extents. The broader implication for policy making is the systematic identification of bot types and their properties across country lines, enabling evaluation of how automated agents are deployed to disseminate narratives and of the nature of the narratives propagated, and thus of the image a country projects on social media and the perception of political issues by social media users.
The paper summarizes the nature of the LIWC2015 text analysis program, including the development of the dictionaries and the basic psychometrics of the output. Results of the 2015 version are compared with the 2007 version.
Report from a meeting on disinformation, the Internet, and public diplomacy held at the Hoover Institution, Stanford University, in 2017.

Executive Summary

Scientific progress continues to accelerate, and while we've witnessed a revolution in communication technologies in the past ten years, what proceeds in the next ten years may be far more transformative. It may also be extremely disruptive, challenging long-held conventions behind public diplomacy (PD) programs and strategies. In order to think carefully about PD in this rapidly changing communications space, the Advisory Commission on Public Diplomacy (ACPD) convened a group of private sector, government, and academic experts at Stanford University's Hoover Institution to discuss the latest trends in research on strategic communication in digital spaces. The results of that workshop, refined by a number of follow-on interviews and discussions, are included in this report. I encourage you to read each of the fourteen essays that follow, which are divided into three thematic sections: Digital's Dark Side, Disinformation, and Narratives.

Digital's Dark Side focuses on the emergence of social bots, artificial intelligence, and computational propaganda. Essays in this section aim to raise awareness regarding how technology is transforming the nature of digital communication, offer ideas for competing in this space, and raise a number of important policy and research questions needing immediate attention. The Disinformation section confronts Oxford English Dictionary's 2016 word of the year – "post-truth" – with a series of compelling essays from practitioners, a social scientist, and a philosopher on the essential roles that truth and facts play in a democratic society. Here, theory, research, and practice neatly align, suggesting it is both crucial and effective to double down on fact-checking and evidence-based news and information programming in order to combat disinformation campaigns from our adversaries. The Narrative section concludes the report by focusing on how technology and facts are ultimately part of, and dependent on, strategic narratives. Better understanding how these narratives form, and what predicts their likely success, is necessary to think through precisely how PD can, indeed, survive the Internet. Below are some key takeaways from the report.

In Defense of Truth
• We are not living in a "post-truth" society. Every generation tends to think that the current generation is less honest than the previous one. This is an old human concern, and should be seen today as a strategic narrative (see Hancock, p. 49; Roselle, p. 77). Defending the value of and the search for truth is crucial. As Jason Stanley notes (p. 71), "without truth, there is just power."
• Humans are remarkably bad at detecting deception. Studies show that people tend to trust what others say, an effect called the truth bias. This bias is actually quite rational—most of the messages a person encounters in a day are honest, so being biased toward the truth is almost always the correct response (see Hancock, p. 49).
• At the same time, people are also continuously evaluating the validity of their understanding of the world. This process is called "epistemic vigilance," a continuous check that the information a person believes they know about the world is accurate. While we have a difficult time detecting deception from interpersonal cues, people can detect lies when they have the time, resources, and motivation. Lies are often discovered through contradicting information from a third source, or evidence that challenges a deceptive account (see Hancock, p. 49).
• Fact-checking can be effective, even in hyper-partisan settings (see Porter, p. 55), and is crucial for sustained democratic dialogue (Bennett, p. 61; Stanley, p. 71). Moreover, it is possible, using digital tools, to detect and effectively combat disinformation campaigns in real time (Henick and Walsh, p. 65).

Computational Propaganda
• Computational propaganda refers to the coordinated use of social media platforms, autonomous agents, and big data directed toward the manipulation of public opinion.
• Social media bots (or "web robots") are the primary tools used in the dissemination of computational propaganda. In their most basic form, bots provide basic answers to simple questions, publish content on a schedule, or disseminate stories in response to triggers (e.g., breaking news). Bots can have a disproportionate impact because it is easy to create a lot of them and they can post high-volume content at high frequency (see Woolley, p. 13).
• Political bots aim to automate political engagement in an attempt to manipulate public opinion. They allow for massive amplification of political views and can empower a small group of people to set conversation agendas online. Political bots are used over social media to manufacture trends, game hashtags, megaphone particular content, spam opposition, and attack journalists. The noise, spam, and manipulation inherent in many bot deployment techniques threaten to disrupt civic conversation and organization worldwide (see Chessen, p. 19).
• Advances in artificial intelligence (AI) – an evolving constellation of technologies enabling computers to simulate cognitive processes – will soon enable highly persuasive machine-generated communications. Imagine an automated system that uses the mass of online data to infer your personality, political preferences, religious affiliation, demographic data, and interests. It knows which news websites and social media platforms you frequent, and it controls multiple user accounts on those platforms. The system dynamically creates content specifically designed to plug into your particular psychological frame and achieve a particular outcome (see Chessen, p. 39).
• Digital tools have tremendous advantages over humans. Once an organization creates and configures a sophisticated AI bot, the marginal cost of running it on thousands or millions of user accounts is relatively low. They can operate 24/7/365 and respond to events almost immediately. AI bots can be programmed to react to certain events and create content at machine speed, shaping the narrative almost immediately. This is critical in an information environment where the first story to circulate may be the only one that people recall, even if it is untrue (see Chessen, p. 39).
• PD practitioners need to consider how they can create and sustain meaningful conversations and engagements with audiences if the mediums typically relied upon are becoming less trusted, compromised, and dominated by intelligent machines.
• Challenging computational propaganda should include efforts to ensure the robustness and integrity of the marketplace of information online. Defensively, this strategy would focus on producing patterns of information exchange among groups that would make them difficult to sway using techniques of computational propaganda. Offensively, the strategy would seek to distribute the costs of counter-messaging broadly, shaping the social ecosystem to enable alternative voices to effectively challenge campaigns of misinformation (see Hwang, p. 27). In the persuasive landscape formed by social media and computational propaganda, it may at times be more effective to build tools rather than construct a specific message.
• Practitioners are not alone in their concern about the escalating use of social bots by adversarial state actors. The private sector is, too. Social media platforms see this trend as a potentially existential threat to their business models, especially if the rise of bots and computational propaganda weakens users' trust in the integrity of the platforms themselves. Coordination with the private sector is key, as their policies governing autonomous bots will adapt and, thus, shape what is and isn't feasible online.

Moving Past Folk Theories
• Folk theories, or how people think a particular process works, are driving far too many digital strategies. One example of a folk theory is the prevalence of echo chambers online, or the idea that people are increasingly digitally walled off from one another, engaging only with content that fits cognitive predispositions and preferences.
• Research suggests that the more users rely on digital platforms (e.g., Twitter and Facebook) for their news and information, the more exposure they have to a multitude of sources and stories. This remains true even among partisans (though to a lesser extent than non-partisans). It turns out we haven't digitally walled ourselves off after all (see Henick and Walsh, p. 65).
• Despite increased exposure to a pluralistic media ecosystem, we are becoming more and more ideological and partisan, and becoming more walled off at the interpersonal and physical layers. For example, marriages today are twice as likely to be between two people with similar political views than they were in 1960.
• Understanding this gap between a robustly diverse news environment and an increasingly "siloed" physical environment is crucial to more effectively engaging with target audiences around the world. Interpersonal and in-person engagement, including exchange programs, remain crucial for effective PD moving forward (see Wharton, p. 7).
• Despite this growing ideological divide, people are increasingly willing to trust one another, even complete strangers, when their goals are aligned (see the sharing economy, for example). This creates interesting opportunities for PD practitioners. Targeting strategies based on political attitudes or profiles may overshadow the possibility of aligned goals on important policy and social issues (see Hancock, p. 49).

Rethinking Our Digital Platforms and Metrics
Virality – the crown jewel in the social media realm – is overemphasized, often at the expense of more important metrics like context and longevity. Many
No abstract
With his highly engaging and painstakingly researched Digital Authoritarianism in the Middle East, Marc Owen Jones makes a much-needed addition to the field of post-truth and disinformation studies. The focus of the book on the MENA region—more specifically on the Gulf area—allows its author to provide a wealth of examples that demonstrate how sophisticated disinformation operations are not the prerogative of well-known purveyors of state-sponsored propaganda and deception such as Russia and China or of outlets operating within the populist and right-wing information ecosystems in the United States and in Europe. Deception as a tool of public opinion control and as an instrument of aggressive foreign policy has been embraced by a growing club of authoritarian or autocratic regimes in the Middle East: Saudi Arabia, UAE, Iran, Egypt, and Qatar, among others, are also active contributors to the growing “deception order” (6) influencing both regional and global politics, which Owen Jones details through a series of thoroughly analyzed case studies. The book’s focus on the Middle East, which a decade ago was the theater of a series of epochal popular uprisings fueled by the advent of digital technology, also offers the author the opportunity to present important caveats against the rhetoric of liberation technology, prominent during the Arab Spring. Owen Jones contends that the current rise of digital authoritarianism is inherently linked to the experience of the Arab Spring, which has prompted a backlash by many regimes in the region in the form of surveillance, censorship, and strict control of digital technology to prevent future uprisings.
The author also vigorously debunks the somewhat simplistic techno-utopianism prominent a decade ago by showing how the liberation paradigm hailed by progressive forces in the West and in the region has served, perhaps unwittingly, as a cover for the spread of neoliberal digital capitalism, eager to push a powerful and underregulated technology into problematic and politically volatile geographical contexts, regardless of the consequences. The field of disinformation, computational propaganda, and post-truth studies, which since 2016 has generated increased academic interest and research output, has clearly illustrated how digital media, especially social media platforms, are not necessarily liberating or emancipatory. Instead, they can be exploited to spread deceptive and manipulative communications in support of demagogic, populist, and authoritarian political actors. While there is now considerable literature on how the phenomenon is affecting Western democratic countries, as well as on how prominent autocratic or authoritarian regimes such as those ruling Russia and China use deception in both domestic affairs and foreign policy, still relatively few studies have extended their focus to include countries in the Middle East, a region with high digital technology adoption and very few safeguards to protect citizens from deception operations. Aware that a rapidly shifting global scenario, especially after the COVID-19 pandemic, requires new perspectives and vantage points on international relations, Digital Authoritarianism in the Middle East pushes the academic discourse and research on disinformation beyond the Cold War framework, which has traditionally cast Russia and China as the main forces undermining Western security.
In the process, Owen Jones opens a plurality of fascinating and troubling perspectives on Middle East politics, to demonstrate how profoundly they have been influenced by authoritarian forces that have mastered the use of digital technology and how the fallout of such new forms of authoritarianism can have repercussions beyond the region. Owen Jones defines “digital authoritarianism” as “the use of digital information technology by authoritarian regimes to surveil, repress and manipulate domestic and foreign population” (2) through a plurality of different techniques such as cyberattacks, internet shutdowns, the use of bots and trolls to push or suppress narratives, and targeted persecution of journalists and users. The book makes it clear that deception and disinformation are illiberal practices appearing in both democratic and authoritarian regimes; in the latter, however, such practices operate generally unfettered and unchallenged. The “truth decay” effect has been identified as one of the most problematic features of digital information ecosystems, where objective truths have been rendered plastic and slippery by a plurality of technological and cultural factors. In authoritarian countries, this phenomenon can be easily leveraged by powerful actors intent on misleading populations and strengthening their grip on political power. In the MENA region, and especially in the Gulf, deception via digital media—aided and abetted by loosely regulated communication technology—contributes to the perpetuation of political systems functioning through corruption, human rights abuse, and inequality. The harmful political effects of digital authoritarianism are not limited to the region but easily transcend borders and spill over into other world regions, with significant implications for foreign policy decision-making and global geopolitics.
One of the book’s main arguments is that digital authoritarianism involves the “decoupling and despatialization of authoritarian practices” (11), which resonate beyond traditional state boundaries. Owen Jones discusses such practices as inherently transnational endeavors, due to the borderless nature of digital communications, through which new digital powers, nodes, and hubs can extend their influence globally. To understand why such deceptive practices are becoming so frequent and pervasive in the region, creating what the author calls a “Gulf post-truth moment” (13), the book examines the discursive, tactical, and strategic qualities of a significant body of deception operations that have emerged in the region since 2011. Specifically, Owen Jones identifies Saudi Arabia and the United Arab Emirates as the primary drivers of digital authoritarianism in the Gulf. Saudi Arabia is presented in the book as a digital media superpower, launching deceptive and manipulative influence operations in a sustained manner on both a domestic and an international scale. The geopolitical context within which the Saudi Kingdom developed into a main player in the global field of deception operations is defined by two main elements: a new era of Gulf politics, jump-started by the Trump administration and characterized by renewed pressure on Iran, and the normalization of relations between Israel and various Gulf countries. This, according to the author, has provided fertile ground for the seeding of disinformation and deceptive narratives into the media ecosystems of the region at the service of a new geopolitical vision spearheaded by autocratic and at times tyrannical leaders such as Mohammed bin Salman of Saudi Arabia and Mohammed bin Zayed of Abu Dhabi. Both rulers seek to carve a place of prominence for their countries in Gulf politics, and to this end they have also fueled a rise in disinformation operations.
The geopolitical vision pushed forth synergistically by these leaders, sometimes in coordination with right-wing sections of the American political spectrum, is predicated on a permanent state of mobilization of their public opinion against a perceived threat represented by hostile political actors such as Qatar, Turkey, Iran, and Islamist organizations such as the Muslim Brotherhood. Before delving into some of the many examples that the author uses to support these claims, it is worth further probing the theoretical framework that Owen Jones lays out in the introductory chapters as a foundation for his empirical work. It is also worth appreciating his methodological approach to the study of disinformation and post-truth. The lack of an in-depth theorization of the notion of “post-truth” is probably the main weakness of an otherwise outstanding book. In his discussion of the concept, Owen Jones doesn’t acknowledge the existence of a recent body of literature that has discussed post-truth as a political and cultural phenomenon rooted in the decline of the Foucauldian “regimes of truth” traditionally enforced by legacy media and cultural or scientific institutions in Western liberal democracies, in the emergence of fictional counter-narratives such as conspiracy theories by technologically empowered publics, in the epistemic relativism that some scholars trace back to the postmodern turn in politics and culture, and in the crisis of authority of Western democratic politics and values in the global geopolitical arena. Neglecting this multilayered cultural and political dimension of the term post-truth, which also gives publics and audiences a role in producing and participating in fictional narratives, Owen Jones takes a more traditional political economic approach in discussing the most salient aspects of the Gulf post-truth moment. The author seems particularly concerned with the alignment between authoritarian regimes and global technology companies.
In Owen Jones’s engaging but ultimately bleak view, the Middle East appears as a “Wild West” for disinformation, dominated by despotic regimes and completely subjugated to the neoliberal logic that has fueled the rise of the data-extractive business models underpinning commercial social media platforms such as Facebook and Twitter. The extraction of self-disclosed data and personal information from platform users is not only profitable for the technology companies but also necessary for authoritarian regimes seeking to maintain control of the population, since profiles of subjects can be used to monitor and control citizens’ behavior and opinions. The “datafication” of users, or their transformation into collections of data points that can be used for a plurality of manipulative and predictive ends, is thus of primary interest for both Western private corporations and Middle Eastern authoritarian rulers. Both resist attempts to protect users’ privacy, as such protections would hinder advertising revenues and the governments’ surveillance abilities.
In Owen Jones’s reading, a capitalist model based on data mining and information extraction can lead to new forms of “techno-colonialism” (17), or the exploitation of a poorer country by a richer one through technology, as well as to the strengthening of existing authoritarian regimes. The analysis is correct, but laying part of the blame for the rise of deception and authoritarianism in the region at the feet of Western neoliberalism and Western technology companies might appear to be not only a Western-centric conclusion but also a deterministic one. It overly emphasizes the importance of technology and its business models to the detriment of a more nuanced cultural analysis of people’s engagement with technology, one that would also consider the individual gratification, identity formation, and social bonding that digital media provide to their users. While I agree with Owen Jones that the narrative of liberation technology prominent a decade ago in the region now appears anachronistic, as well as overly deterministic and dubiously instrumental to profit-seeking ventures, I also think it might be premature to dismiss the liberating element of digital networking technology, which has demonstrated the ability to empower and mobilize citizens in the past, in some cases leading them to previously unthinkable political outcomes, and to this day continues to provide outlets, albeit restricted and closely monitored, for expressing views on culture, religion, sexuality, and politics. Where the author really excels and offers his most useful contribution to the field of disinformation research is the part in which he presents his sophisticated methodology for studying deception operations and puts it at the service of a vast selection of studies and investigations on disinformation in the region.
Combining a wide array of tools and skills, the author uses both qualitative and quantitative methods to conduct the research showcased in the book, including “digital ethnography, open-source research, as well as computer-assisted analysis of datasets, including anomaly detection, corpus analysis, network analysis and, well, good old-fashioned investigative work” (19). The platform of choice for most of the case studies in the book was Twitter, which allows generous access to its data for research via the Application Programming Interface (API), and which allowed the author to gather millions of tweets and hundreds of hashtags to study the function and reach of deception operations. What also impresses about the book, on top of the technological savvy demonstrated by the author, is the narrative flair with which Owen Jones recounts his investigations into the dark corners of social media, where he has spent a considerable amount of time chasing trolls, unmasking fake journalists, exposing sock-puppet accounts, and detecting large-scale information operations by automated bots. Faithful to the academic approach of public impact scholarship, which seeks to “create social change through the translation and dissemination of research to non-academic audiences” (22), Owen Jones narrates his multiple investigations with rigorous analysis, political engagement, and humor. Among the multiple case studies discussed in the book, I chose to focus on a couple that examine the role of Saudi Arabia’s growing digital media power in shaping the deception order in the Gulf. In Owen Jones’s definition, digital media power can be summarized “as an actor’s ability to use or co-opt digital media technologies in order to assert ideological influence and power over a community” (81).
Owen Jones argues that the manipulation of social media to promote propagandistic narratives and to suppress criticism of the Saudi regime has become a key element of Mohammed bin Salman’s vision for Saudi Arabia. While Saudi Arabia’s attempts to dominate the Middle Eastern and Arabic-language media industries date back to the 1990s, it was after the Arab Spring, and especially with the spread of social media in the country, that the Kingdom’s tactics and strategies for expanding its media power, also through deception operations, evolved in reach and sophistication. Saudi Arabia is one of the countries in the world with the highest penetration of digital technology, with some of the highest numbers of social media users and a very young population that forms a potentially volatile “youth bulge” using social media as a space for discussion and information consumption. As argued by Owen Jones, managing and pacifying its youth is one of the cornerstones of the Kingdom’s security strategy to maintain a hold on power. The deployment of digital media power to praise the country’s leadership and to attack or silence critics of the ruling dynasty has been one of the central tenets of Mohammed bin Salman’s rise to power. A technique used to boost Saudi popularity both regionally and globally, especially during the Kingdom’s UN-sanctioned war in Yemen, was “astroturfing,” or manufacturing the illusion of a vox populi through sock-puppet accounts (fake social media profiles) and bots (nonhuman automated accounts), which were instructed to support specific narratives or to censor sensitive topics through distracting content. For example, hashtags in Arabic carrying messages in support of Mohammed bin Salman during his visit to London in 2018 were made to trend, in order to give the illusion of international grassroots support, thanks to the coordinated work of hundreds of fake accounts with Western-sounding names.
In his research on this deception operation, Owen Jones estimated that at least 30 percent of the accounts promoting such pro-Saudi hashtags were either sock-puppets or bots, known in the region as “electronic flies.” Still in the context of Saudi Arabia’s war in Yemen, the book discusses the controversial role of Mohammed bin Salman’s right-hand man, Saud Al-Qahtani, in orchestrating Saudi deception operations via social media. The book points to Al-Qahtani’s involvement in managing pro-regime “troll farms”—often drawing manpower from the unemployed and digitally active Saudi youth—and in soliciting services from international hackers to develop software that could delete or promote social media posts about Saudi involvement in Yemen, as well as in the suspension and hacking of the Twitter account of Médecins Sans Frontières, a humanitarian organization that had exposed Saudi war crimes in Yemen. Another chapter discusses how the abundant disinformation circulating around the coronavirus was exploited, especially in the earlier phase of the pandemic, to further the foreign policy objectives of some Gulf states. Specifically, the chapter examines how actors connected to Saudi Arabia and the United Arab Emirates used coronavirus disinformation to attack regional opponents. Among the multiple examples provided, it is worth recounting that of an information operation taken down by Twitter, which revealed how accounts connected to Saudi Arabia, the UAE, and Egypt had targeted Qatar—specifically its national airline, Qatar Airways—accusing it of spreading the new virus around the world through negligence and incompetence. Qatar was also the target of outlandish claims by a pro-UAE journalist, who accused the country of having financed China’s engineering of the virus and of deliberately spreading the virus in the region to damage the Emirati and Saudi economies.
Owen Jones rightly points out that social media platforms took a tougher stance on combating health disinformation during the pandemic, often in coordination with the World Health Organization. However, the platforms gave less scrutiny to false information about the pandemic circulating in non-anglophone markets, which allowed, especially in 2020, such deceptive narratives to spread unfettered in the Gulf region. One last example appears in the chapter dedicated to the deceptive methods used by Saudi-linked entities to manipulate public opinion in the aftermath of the gruesome murder of Saudi journalist Jamal Khashoggi, who, especially via his collaboration with The Washington Post, had expressed criticism of the reforms initiated by Mohammed bin Salman. The murder of Khashoggi inside a Saudi consulate in Istanbul marked the most blatant and tragic episode of a Middle Eastern government silencing a critical journalist, and the event had serious consequences for Mohammed bin Salman’s efforts to brand himself as a progressive reformer in the eyes of the world. The vast amount of international media coverage and the near-total condemnation that the murder elicited around the globe put the Saudi propaganda machine to the test, forcing it into overdrive to control the narrative: in the aftermath of the murder, hashtags were made to trend on Twitter in Saudi Arabia, and “electronic flies” at the service of the Saudi government were deployed to manipulate the Twitter conversation, promoting narratives that distanced Saudi Arabia from the murder and discredited the journalist.
What is most striking in Owen Jones’s recounting of the case is how Khashoggi became the target of a swarm of bots, trolls, and sock-puppets, both because he refused to align with the propaganda pushed by millions of social media accounts and because he challenged the media order built by Mohammed bin Salman. Digital Authoritarianism in the Middle East is a significant contribution to the study of disinformation and computational propaganda, a necessary and welcome addition to the field, not only because it extends the scope of the research beyond the Western world and its well-known cases such as Russia, but also because it provides a fascinating and in-depth look into the work of a scholar with an intimate knowledge of the Gulf region who can bring critical and technological skills to bear on the study of deception operations.
1. Foreword (by Kaal, Bertie)
2. Positions of Parties and Political Cleavages between Parties in Texts (by Kleinnijenhuis, Jan)
3. PART I. Computational Methods for Political Text Analysis
4. PART I: Introduction (by Vossen, Piek)
5. Comparing the Position of Canadian Political Parties using French and English Manifestos as Textual Data (by Collette, Benoit)
6. Leveraging Textual Sentiment Analysis with Social Network Modelling: Sentiment Analysis of Political Blogs in the 2008 U.S. Presidential Election (by Gryc, Wojciech)
7. Issue Framing and Language Use in the Swedish Blogosphere: Changing Notions of the Outsider Concept (by Dahlberg, Stefan)
8. Text to Ideology or Text to Party Status? (by Hirst, Graeme)
9. Sentiment Analysis in Parliamentary Proceedings (by Grijzenhout, Steven)
10. The Qualitative Analysis of Political Documents (by Wesley, Jared J.)
11. PART II. From Text to Political Positions via Discourse Analysis
12. PART II: Introduction (by Koller, Veronika)
13. The Potential of Narrative Strategies in the Discursive Construction of Hegemonic Positions and Social Change (by Montesano Montessori, Nicolina)
14. Christians, Feminists, Liberals, Socialists, Workers and Employers: The Emergence of an Unusual Discourse Coalition (by Eleveld, Anja)
15. Between Union and a United Ireland: Shifting Positions in Northern Ireland's Post-Agreement Political Discourse (by Filardo-Llamas, Laura)
16. Systematic Stylistic Analysis: The Use of a Linguistic Checklist (by Leeuwen, Maarten van)
17. Participation and recontextualisation in New Media: Political Discourse Analysis and YouTube (by Boyd, Michael S.)
18. PART III. Converging methods
19. PART III: Introduction (by Cienki, Alan)
20. From Text to the Construction of Political Party Landscapes: A Hybrid Methodology Developed for Voting Advice Applications (by Krouwel, Andre)
21. From Text to Political Positions: The Convergence of Political, Linguistic and Discourse Analysis (by Elfrinkhof, Annemarie van)
22. About the authors
23. Index
Discussion about the interference of Russian actors in the 2016 U.S. presidential election campaign attracted enormous attention from the academic community. Numerous studies dedicated to the analysis of Internet operations, as well as the activities of bots and trolls, formed a new interdisciplinary area that investigates online disinformation and computational propaganda. This study provides a case study of Russian propaganda operations that focuses on the internal political confrontation between the Russian systemic political establishment and the opposition movement of Alexei Navalny. We present an analysis of how Internet trolls and sockpuppets are used to conduct information-disorder activities in order to frame the discussion around the opposition movement in Russia on Twitter. We also identified attempts by the same malicious actors to manipulate the opinion of Western audiences and to spread disinformation about Western democracies. The study implements network analysis for identifying disinformation and propaganda trolls. Preliminary findings demonstrate that there is evidence of information campaigns against Alexei Navalny as one of the leaders of the Russian opposition. We observe how an internal issue is framed in the context of Russian confrontation with the West and how it is used to promote hostile narratives claiming that Alexei Navalny is supported by Western governments and is therefore an enemy of the Russian state. Many agents in our sample pretend to be real people, English speakers, who exhibit hostile attitudes towards Navalny and Western democracies, promoting a lack of trust in democratic institutions as well as spreading disinformation and conspiracy theories.
This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision.
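The bootstrapping loop described above can be sketched in a few lines of Python. Everything concrete here is invented for illustration: the seed lexicon, the use of word bigrams as stand-in "patterns," and the frequency/precision thresholds. Riloff and Wiebe's actual system uses linguistically rich extraction patterns and carefully engineered high-precision classifiers, not these toys:

```python
# Toy bootstrapping for subjective-expression patterns. The seed words,
# bigram "patterns", and thresholds are invented for this sketch.

SEED_SUBJECTIVE = {"terrible", "wonderful", "outrageous", "praised", "complained"}

def high_precision_label(sentence, lexicon):
    """Label a sentence subjective only with >= 2 lexicon hits; abstain on 1."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    hits = sum(w in lexicon for w in words)
    if hits >= 2:
        return "subjective"
    if hits == 0:
        return "objective"
    return None  # abstain on uncertain cases

def extract_patterns(sentence):
    """Word bigrams stand in for linguistically rich extraction patterns."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    return set(zip(words, words[1:]))

def bootstrap(sentences, lexicon, min_freq=2, min_precision=0.8, rounds=3):
    patterns = set()
    for _ in range(rounds):
        labeled = [(s, high_precision_label(s, lexicon)) for s in sentences]
        stats = {}  # pattern -> (count in subjective sentences, total count)
        for s, lab in labeled:
            if lab is None:
                continue
            for p in extract_patterns(s):
                subj, total = stats.get(p, (0, 0))
                stats[p] = (subj + (lab == "subjective"), total + 1)
        # keep frequent patterns that mostly occur in subjective sentences
        patterns |= {p for p, (subj, total) in stats.items()
                     if total >= min_freq and subj / total >= min_precision}
        # learned patterns promote words from previously unlabeled sentences
        # into the lexicon, so the next round can label more data (recall grows)
        for s, lab in labeled:
            if lab is None and patterns & extract_patterns(s):
                lexicon |= {w.strip(".,!?") for w in s.lower().split()
                            if len(w.strip(".,!?")) > 6}
    return patterns, lexicon
```

The key property of the original method survives even in this sketch: labels come only from high-precision decisions, and each round expands coverage without hand annotation.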
Increasingly, management researchers are using topic modeling, a new method borrowed from computer science, to reveal phenomenon-based constructs and grounded conceptual relationships in textual data. By conceptualizing topic modeling as the process of rendering constructs and conceptual relationships from textual data, we demonstrate how this new method can advance management scholarship without turning topic modeling into a black box of complex computer-driven algorithms. We begin by comparing features of topic modeling to related techniques (content analysis, grounded theorizing, and natural language processing). We then walk through the steps of rendering with topic modeling and apply rendering to management articles that draw on topic modeling. Doing so enables us to identify and discuss how topic modeling has advanced management theory in five areas: detecting novelty and emergence, developing inductive classification systems, understanding online audiences and products, analyzing frames and social movements, and understanding cultural dynamics. We conclude with a review of new topic modeling trends and revisit the role of researcher interpretation in a world of computer-driven textual analysis.
Large language models (LLMs) are capable of successfully performing many language processing tasks zero-shot (without training data). If zero-shot LLMs can also reliably classify and explain social phenomena like persuasiveness and political ideology, then LLMs could augment the computational social science (CSS) pipeline in important ways. This work provides a road map for using LLMs as CSS tools. Towards this end, we contribute a set of prompting best practices and an extensive evaluation pipeline to measure the zero-shot performance of 13 language models on 25 representative English CSS benchmarks. On taxonomic labeling tasks (classification), LLMs fail to outperform the best fine-tuned models but still achieve fair levels of agreement with humans. On free-form coding tasks (generation), LLMs produce explanations that often exceed the quality of crowdworkers’ gold references. We conclude that the performance of today’s LLMs can augment the CSS research pipeline in two ways: (1) serving as zero-shot data annotators on human annotation teams, and (2) bootstrapping challenging creative generation tasks (e.g., explaining the underlying attributes of a text). In summary, LLMs are poised to meaningfully participate in social science analysis in partnership with humans.
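The zero-shot taxonomic labeling setup the authors evaluate reduces to prompt construction plus label parsing. The sketch below is a minimal, assumption-laden illustration of that pattern: the prompt wording, the label set, and the `query_model` stand-in (a placeholder for whatever chat-completion API is available) are our own, not the paper's prompts or models:

```python
# Sketch of zero-shot labeling with an LLM. query_model is a placeholder
# for any chat-completion call; prompt wording and labels are invented.

LABELS = ["left", "center", "right"]

def build_prompt(text, labels):
    """Constrain the model to answer with one label from a fixed taxonomy."""
    options = ", ".join(labels)
    return ("Classify the political ideology of the following statement.\n"
            f"Answer with exactly one of: {options}.\n\n"
            f"Statement: {text}\nAnswer:")

def parse_label(response, labels):
    """Map a free-form model response onto the label set, else None."""
    response = response.strip().lower()
    for label in labels:
        if response.startswith(label):
            return label
    return None

def annotate(texts, labels, query_model):
    """Zero-shot annotate a batch; unparseable answers come back as None."""
    return [parse_label(query_model(build_prompt(t, labels)), labels)
            for t in texts]
```

In the human-in-the-loop pipeline the authors recommend, the `None` cases (answers the parser cannot map to the taxonomy) would be routed to human annotators rather than discarded.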
Background/context: In recent years, opposition to accountability policies and associated testing has manifested in widespread boycotts of annual tests—mobilized as the “opt-out movement.” A central challenge facing any movement is the need to recruit and mobilize participants. Key to this process is framing—a discursive tactic in which activists present social issues as problems that require collective action to solve. Such framing often relies on compatible political and ideological commitments among activists and potential recruits. Yet the opt-out movement has successfully mobilized widespread boycotts in diverse communities. How have participants in the movement framed issues relating to testing and accountability?
Purpose/objective/research question/focus of study: I explore the discursive tactics of participants in the opt-out movement by analyzing how they frame issues related to testing and accountability over time. I ask two research questions: (1) What frames did participants in opt-out-aligned social media groups use to convince others that standardized accountability tests are a problem and build support for the movement? (2) To what extent and how did the deployment of frames change over time?
Research design: I conducted a mixed-methods study combining qualitative content analysis to identify frames and computational analysis to describe their co-deployment over time.
Data collection and analysis: I compiled a text corpus of posts to opt-out-aligned social media pages from 2010–2014. I analyzed posts using open coding to identify frames used by participants in online communities. Frames were categorized by their orientation—the general way in which they framed the problem of testing and accountability. I then analyzed the co-deployment of frames using network analysis and hierarchical clustering.
Conclusions/recommendations: The longitudinal analysis of frames reveals key differences in the frames used by participants.
While more politically oriented frames—those characterizing testing as a social issue affecting the public schools at large—were common in early stages of the movement, less overtly political frames—those characterizing testing as an individual issue affecting children and local schools or a technical issue—became more prominent over time. Over time, socially oriented frames became decoupled from other frames, showing independent patterns of deployment. This suggests that the movement may have benefited from de-emphasizing politically oriented frames, but that it lacked an overarching shared narrative, which has the potential to limit how it might affect accountability policies and testing.
Twitter is among the most used online platforms for political communication, due to the concision of its messages (particularly suitable for political slogans) and their quick diffusion. Especially when an argument stimulates the emotionality of users, content on Twitter is shared with extreme speed, and thus studying tweet sentiment is of utmost importance for predicting the evolution of discussions and the register of the related narratives. In this article, we present a model able to reproduce the dynamics of the sentiments of tweets related to specific topics and periods and to provide a prediction of the sentiment of future posts based on the observed past. The model is a recent variant of the Pólya urn, introduced and studied in Aletti and Crimaldi (2019, 2020), which is characterized by a "local" reinforcement, i.e. a reinforcement mechanism mainly based on the most recent observations, and by a random persistent fluctuation of the predictive mean. In particular, this latter feature is capable of capturing the trend fluctuations in the sentiment curve. While the proposed model is extremely general and may also be employed in other contexts, it has been tested on several Twitter data sets and demonstrated better performance than the standard Pólya urn model. Moreover, the different performances on different data sets highlight different emotional sensitivities with respect to a public event.
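The contrast between the standard urn and "local" reinforcement can be illustrated with a toy simulation. This is not the Aletti–Crimaldi specification (which also includes a random persistent fluctuation term); it only sketches the recency-weighting intuition, and the `memory` parameter is invented:

```python
import random

# Toy contrast: standard Pólya urn vs. a recency-weighted variant.
# Illustration of the idea only, not the exact Aletti-Crimaldi model.

def polya_urn(steps, seed=0):
    """Standard urn: every past observation reinforces equally."""
    rng = random.Random(seed)
    pos = neg = 1.0  # one "positive" and one "negative" ball to start
    trajectory = []
    for _ in range(steps):
        p = pos / (pos + neg)          # predictive mean of the next draw
        if rng.random() < p:
            pos += 1
        else:
            neg += 1
        trajectory.append(p)
    return trajectory

def local_urn(steps, memory=0.9, seed=0):
    """Local reinforcement: recent observations dominate the update."""
    rng = random.Random(seed)
    p = 0.5  # predictive mean, updated with exponential forgetting
    trajectory = []
    for _ in range(steps):
        draw = rng.random() < p
        p = memory * p + (1 - memory) * (1.0 if draw else 0.0)
        trajectory.append(p)
    return trajectory
```

The qualitative difference matches the abstract: the standard urn's predictive mean settles toward a random limit as old observations accumulate, while the recency-weighted variant keeps fluctuating, which is what lets this family of models track trend changes in a sentiment curve.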
Recently, the revolutionary transformations in social and political landscapes, as well as the remarkable developments in artificial intelligence, have reinforced the importance of geography and spatial analyses in literary and cultural studies. This chapter proposes an analytical framework of topic modelling and sentiment analysis for exploring the connection between theme, place, and sentiment in 36 autobiographical narratives by or about women from the Middle East. In the proposed framework, latent Dirichlet allocation and latent semantic analysis algorithms from topic modelling, together with the TextBlob library for sentiment analysis, are employed to detect the place names that occur together and to point out the associated themes and emotions throughout the data source. The model scores each topical cluster and reveals that the diasporic authors are more likely to write about their hometowns than their current host lands. The authors hope that the merging of topic modelling and sentiment analysis will be beneficial to literary critics in the analysis of long texts.
No abstract
No abstract
Social media platforms such as Twitter are considered a new mediator of collective action, in which various forms of civil movements unite around public posts, often using a common hashtag, thereby strengthening the movements. After 26 February 2018, the #AllforJan hashtag spread across the web when Ján Kuciak, a young journalist investigating corruption in Slovakia, and his fiancée were killed. The murder caused moral shock and mass protests in Slovakia and in several other European countries, as well. This paper investigates how this murder, and its follow-up events, were discussed on Twitter, in Europe, from 26 February to 15 March 2018. Our investigations, including spatiotemporal and sentiment analyses, combined with topic modeling, were conducted to comprehensively understand the trends and identify potential underlying factors in the escalation of the events. After a thorough data pre-processing including the extraction of spatial information from the users’ profile and the translation of non-English tweets, we clustered European countries based on the temporal patterns of tweeting activity in the analysis period and investigated how the sentiments of the tweets and the discussed topics varied over time in these clusters. Using this approach, we found that tweeting activity resonates not only with specific follow-up events, such as the funeral or the resignation of the Prime Minister, but in some cases, also with the political narrative of a given country affecting the course of discussions. Therefore, we argue that Twitter data serves as a unique and useful source of information for the analysis of such civil movements, as the analysis can reveal important patterns in terms of spatiotemporal and sentimental aspects, which may also help to understand protest escalation over space and time.
This study uses novel deep learning-based language models to extract meaningful information from vast chunks of textual data from Twitter on the competing narratives of the recent Syrian immigration to Turkey. Our analysis identifies five main topics in the framing of Syrian immigration in Turkish Twittersphere. In this paper, we demonstrate correlational links between the timing of landmark events and change in the percent share of trends in those topics across time. We highlight two important observations: (a) Social benefit demands of natives on Twitter rose sharply with the COVID-19 pandemic, leading to ever more widespread sentiments of welfare chauvinism and (b) Patriotic feelings and the implementation of an interventionist foreign policy agenda in the immigrants’ country of origin created a relatively tolerant yet patronizing attitude towards migrants. As the COVID-19 pandemic and immigration frequently occupy the center stage in politics of immigrant-hosting societies, our research has international appeal beyond its specific geographical context.
Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.
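The flavor of user-level feature engineering described above can be sketched as follows. The specific features, weights, and scoring rule here are invented for illustration and are far simpler than the thousand-plus features the actual framework extracts; a real system would learn the weights from the labeled bot datasets mentioned in the abstract:

```python
import math

# Sketch of bot-detection features (friend/follower ratio, posting-time
# regularity). Features and weights are invented for illustration.

def timing_entropy(post_hours):
    """Shannon entropy of the hour-of-day histogram; low values mean
    suspiciously regular, clock-like posting."""
    counts = {}
    for h in post_hours:
        counts[h] = counts.get(h, 0) + 1
    total = len(post_hours)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def features(user):
    """Map raw account metadata to a small numeric feature vector."""
    return {
        "ff_ratio": user["friends"] / max(user["followers"], 1),
        "tweets_per_day": user["tweets"] / max(user["age_days"], 1),
        "hour_entropy": timing_entropy(user["post_hours"]),
    }

def bot_score(user, weights=None):
    """Toy linear score squashed to (0, 1); weights would normally be
    learned by a classifier trained on annotated bot/human accounts."""
    weights = weights or {"ff_ratio": 0.3, "tweets_per_day": 0.05,
                          "hour_entropy": -0.4}
    f = features(user)
    z = sum(weights[k] * f[k] for k in weights)
    return 1 / (1 + math.exp(-z))
```

Even this crude score separates the caricatures: an account that follows thousands, is followed by few, and posts at the same hour every day scores far higher than one with a balanced network and humanly irregular activity.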
The problem of fake news has gained a lot of attention as it is claimed to have had a significant impact on 2016 US Presidential Elections. Fake news is not a new problem and its spread in social networks is well-studied. Often an underlying assumption in fake news discussion is that it is written to look like real news, fooling the reader who does not check for reliability of the sources or the arguments in its content. Through a unique study of three data sets and features that capture the style and the language of articles, we show that this assumption is not true. Fake news in most cases is more similar to satire than to real news, leading us to conclude that persuasion in fake news is achieved through heuristics rather than the strength of arguments. We show overall title structure and the use of proper nouns in titles are very significant in differentiating fake from real. This leads us to conclude that fake news is targeted for audiences who are not likely to read beyond titles and is aimed at creating mental associations between entities and claims.
We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.
No abstract
Sentiment analysis is a relevant area in the natural language processing (NLP) context that allows extracting opinions about different topics, such as customer service and political elections. Sentiment analysis is usually carried out through supervised learning approaches using labeled data. However, obtaining such labels is generally expensive or even infeasible. These problems can be faced by using models based on self-supervised learning, which aims to deal with various machine learning paradigms in the absence of labels. Accordingly, we propose a self-supervised approach for sentiment analysis in Spanish that comprises a lexicon-based method and a supervised classifier. We test our proposal on three corpora; the first two are labeled datasets, namely CorpusCine and PaperReviews. Further, we use an unlabeled corpus composed of news related to the Colombian conflict to understand the university journalistic narrative of the war in Colombia. The obtained results demonstrate that our proposal can deal with sentiment analysis in scenarios with an unlabeled corpus; in fact, it achieves competitive performance compared with state-of-the-art techniques on partially labeled datasets.
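The two-stage design (a lexicon-based method producing pseudo-labels that then train a supervised classifier) can be sketched as follows. The toy English lexicon and the word-count centroid classifier are stand-ins for the paper's Spanish lexicon and actual classifier, invented purely to show how pseudo-labels replace human annotation:

```python
from collections import Counter

# Sketch of self-supervised sentiment: a lexicon pseudo-labels unlabeled
# text, and a classifier is trained on those labels. Lexicon entries and
# the centroid classifier are toy stand-ins for the paper's components.

POS = {"good", "great", "excellent", "peace", "hope"}
NEG = {"bad", "terrible", "war", "violence", "fear"}

def lexicon_label(text):
    """Pseudo-label from lexicon counts; None when the lexicon abstains."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    if score > 0:
        return 1  # positive
    if score < 0:
        return 0  # negative
    return None   # abstain: ambiguous or no lexicon coverage

def train_centroid(texts):
    """Train a naive word-count centroid classifier on the pseudo-labels."""
    centroids = {0: Counter(), 1: Counter()}
    for t in texts:
        lab = lexicon_label(t)
        if lab is not None:
            centroids[lab].update(t.lower().split())

    def classify(text):
        words = text.lower().split()
        scores = {lab: sum(c[w] for w in words) / (sum(c.values()) or 1)
                  for lab, c in centroids.items()}
        return max(scores, key=scores.get)

    return classify
```

The point of the second stage is generalization: once trained, the classifier can score sentences the lexicon would abstain on, because co-occurring non-lexicon words have been absorbed into the class centroids.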
President Bush's initial frame for the attacks of September 11, 2001, overwhelmingly dominated the news. Using that frame as a springboard, this article advances a coherent conception of framing within a new model of the relationship between government and the media in U.S. foreign policy making. The cascading activation model supplements research using the hegemony or indexing approaches. The model explains how interpretive frames activate and spread from the top level of a stratified system (the White House) to the network of nonadministration elites, and on to news organizations, their texts, and the public, and how interpretations feed back from lower to higher levels. To illustrate the model's potential, the article explores the frame challenge mounted by two journalists, Seymour Hersh and Thomas Friedman, who attempted to shift the focus from Afghanistan to Saudi Arabia. As hegemony theory predicts, 9/11 revealed yet again that media patrol the boundaries of culture and keep discord within conventional bounds. But inside those borders, even when government is promoting "war" against terrorism, media are not entirely passive receptacles for government propaganda, and the cascade model illuminates deviations from the preferred frame. As index theorists suggest, elite discord is a necessary condition for politically influential frame challenges. Among other things, the cascade model helps explain whether that condition arises, and how journalists can hinder or advance it.
This book traces how resilience is conceptually grounded in an understanding of the world as interconnected, complex and emergent. In an interconnected world, we are exposed to radical uncertainties, which require new modes of handling them. Security no longer means the promise of protection, but it is redefined as resilience - as security in-formation. Information and the Internet not only play a key role for our understanding of security in highly connected societies, but also for resilience as a new program of tackling emergencies. Social media, cyber-exercises, the collection of digital data and new developments in Internet policy shape resilience as a new form of security governance. Through case studies in these four areas this book documents and critically discusses the relationship between resilience, the Internet and security governance. It takes the reader on a journey from the rise of complexity narratives in the context of security policy to a discussion of the Internet’s influence on resilience practices, and ends with a theory of resilience and the relational. The book shows how the Internet nourishes narratives of connectivity, complexity and emergency in political discourses, and how it brings about new resilience practices. This book will be of much interest to students of resilience studies, Critical Security Studies, Internet-politics, and International Relations in general.
Unmanned and unwomaned aerial vehicles (UAV), or drones, are breaking and creating new boundaries of image-based communication. Using social network analysis and critical discourse analysis, we examine the 60 most popular question threads about drones on Zhihu, China’s largest social question answering platform. We trace how controversial issues around these supposedly novel tech products are mediated, domesticated, visualized, or marginalized via digital representational technology. Supported by Zhihu’s topic categorization algorithm, drone-related discussions form topic clusters. These topic clusters gain currency in the government-regulated cyberspace, where their meanings remain open to widely divergent interpretations and mediation by various agents. We find that the largest drone company DJI occupies a central and strongly interconnected position in the discussions. Drones are, moreover, represented as objects of consumption, technological advancement, national future, and uncertainty. At the same time, the sense-making process of drone-related discussions evokes emerging sets of narrative user identities with potential political effects. Users engage in digital representational technologies publicly and collectively to raise questions and represent their views on new technologies. Therefore, we argue that platforms like Zhihu are essential when studying views of the Chinese citizenry towards technological developments.
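The finding above, that DJI occupies a central and strongly interconnected position, rests on standard social network analysis measures. A minimal sketch of the idea, using entirely hypothetical co-occurrence data (the node and topic names below are invented for illustration, not taken from the study):

```python
# Illustrative sketch with hypothetical data: normalized degree centrality,
# a basic social network analysis measure, surfaces a hub node (here "DJI")
# in a topic co-occurrence network built from question threads.
from collections import defaultdict

# Hypothetical co-occurrence edges between actors/topics in drone threads.
edges = [
    ("DJI", "consumer drones"),
    ("DJI", "aerial photography"),
    ("DJI", "regulation"),
    ("DJI", "national technology"),
    ("regulation", "privacy"),
    ("national technology", "innovation"),
]

# Count how many distinct ties each node has.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

n = len(degree)  # number of nodes in the network
# Normalized degree centrality: ties divided by the maximum possible (n - 1).
centrality = {node: d / (n - 1) for node, d in degree.items()}
most_central = max(centrality, key=centrality.get)
```

Real studies typically combine several centrality measures (degree, betweenness, eigenvector) over much larger graphs, but the intuition is the same: the node with the most, and most widely spread, connections anchors the discussion space.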
Introduction

Medical misinformation and conspiracies have thrived during the current infodemic as a result of the volume of information people have been exposed to during the disease outbreak. Given that SARS-CoV-2 (COVID-19) is a novel coronavirus discovered in 2019, much remains unknown about the disease. Moreover, a considerable amount of what was originally thought to be known has turned out to be inaccurate, incomplete, or based on an obsolete understanding of the virus. It is in this context of uncertainty and confusion that conspiracies flourish. Michael Golebiewski and danah boyd's work on 'data voids' highlights the ways that actors can work quickly to produce conspiratorial content to fill a void. The data void created by the absence of high-quality data surrounding COVID-19 provides a fertile information environment for conspiracies to prosper (Chou et al.). Conspiracism is the belief that society and social institutions are secretly controlled by a powerful group of corrupt elites (Douglas et al.). Michael Barkun's typology of conspiracy identifies three components: 1) the belief that nothing happens by accident or coincidence; 2) the belief that nothing is as it seems, so the "appearance of innocence" is to be suspected; and 3) the belief that everything is connected through a hidden pattern. At the heart of conspiracy theories is narrative storytelling, in particular plots involving influential elites secretly colluding to control society (Fenster). Conspiracies following this narrative playbook have flourished during the pandemic. Pharmaceutical corporations profiting from national vaccine rollouts, and the emergency powers given to governments around the world to curb the spread of coronavirus, have led some to cast these powerful commercial and State organisations as nefarious actors – 'big evil' drug companies and the 'Deep State' – in conspiratorial narratives. Several drugs believed to be potential treatments for COVID-19 have become entangled with conspiracy.
At the start of the pandemic, scientists experimented with repurposing existing drugs as potential treatments for COVID-19 because safe and effective vaccines were not yet available. A series of antimicrobials with potential activity against SARS-CoV-2 were tested in clinical trials, including lopinavir/ritonavir, favipiravir and remdesivir (Smith et al.). Only hydroxychloroquine and ivermectin transformed from potential COVID treatments into conspiracy objects. This article traces how the hydroxychloroquine and ivermectin conspiracy theories were amplified in the news media and online. It highlights how debunking processes contribute to amplification effects due to audience segmentation in the current media ecology. We conceive of these amplification and debunking processes as key components of a 'Conspiracy Course' (Baker and Maddox), identifying the interrelations and tensions between amplification and debunking practices as a conspiracy develops, particularly across mainstream news, social media and alternative media spaces. We do this in order to understand how medical claims about potential treatments for COVID-19 succumb to conspiracism and how we can intervene in their development and dissemination. In this article we present a commentary on how public discourse and actors surrounding two potential treatments for COVID-19, the anti-malarial drug hydroxychloroquine and the anti-parasitic drug ivermectin, became embroiled in conspiracy. We examine public discourse and events surrounding these treatments over a 24-month period from January 2020, when the virus gained global attention, to January 2022, the time this article was submitted. Our analysis is contextually informed by an extended digital ethnography into medical misinformation, which has included social media monitoring and observational digital fieldwork across social media sites, news media, and digital media such as blogs, podcasts, and newsletters.
Our analysis focusses on the role that public figures and influencers play in amplifying these conspiracies, as well as their amplification by some wellness influencers, referred to as “alt.health influencers” (Baker), and those affiliated with the Intellectual Dark Web, many of whom occupy status in alternative media spaces. The Intellectual Dark Web (IDW) is a term used to describe an alternative influence network comprised of public intellectuals including the Canadian psychologist Jordan Peterson and the British political commentator Douglas Murray. The term was coined by the American mathematician and podcast host Eric Weinstein, who described the IDW as a group opposed to “the gated institutional narrative” of the mainstream media and the political establishment (Kelsey). As a consequence, many associated with the IDW use alternative media, including podcasts and newsletters, as an "eclectic conversational space" where those intellectual thinkers excluded from mainstream conversational spaces in media, politics, and academia can “have a much easier time talking amongst ourselves” (Kelsey). In his analysis of the IDW, Parks describes these figures as "organic" intellectuals who build identification with their audiences by branding themselves as "reasonable thinkers" and reinforcing dominant narratives of polarisation. Hence, while these influential figures are influencers in so far as they cultivate an online audience as a vocation in exchange for social, economic and political gain, they are distinct from earlier forms of micro-celebrity (Senft; Marwick) in that they do not merely achieve fame on social media among a niche community of followers, but appeal to those disillusioned with the mainstream media and politics. The IDW are contrasted not with mainstream celebrities, as is the case with earlier forms of micro-celebrity (Abidin Internet Celebrity), but with the mainstream media and politics. 
A public figure, on the other hand, is a "famous person" broadcast in the media. While celebrities are public figures, public figures are not necessarily celebrities; a public figure is 'a person of great public interest or familiarity', such as a government official, politician, entrepreneur, celebrity, or athlete.

Analysis

In what follows we explore the role of influencers and public figures in amplifying the hydroxychloroquine and ivermectin conspiracy theories during the pandemic. As part of this analysis, we consider how debunking processes can further amplify these conspiracies, raising important questions about how to most effectively respond to conspiracies in the current media ecology. Discussions around hydroxychloroquine and ivermectin as potential treatments for COVID-19 emerged in early 2020 at the start of the pandemic, when people were desperate for a cure and safe and effective vaccines for the virus were not yet publicly available. While claims concerning the promising effects of both treatments emerged in the mainstream, the drugs remained experimental COVID treatments and had not yet received widespread acceptance among scientific and medical professionals. Much of the hype around these drugs as COVID "cures" emerged from preprints not yet subject to peer review and from scientific studies based on unreliable data, which were retracted due to quality issues (Mehra et al.). Public figures, influencers, and news media organisations played a key role in amplifying these narratives in the mainstream, thereby extending the audience reach of these claims. However, their transformation into conspiracy objects followed different amplification processes for each drug.

Hydroxychloroquine, the "Game Changer"

Hydroxychloroquine gained public attention on 17 March 2020 when the US tech entrepreneur Elon Musk shared a Google Doc with his 40 million followers on Twitter, proposing "maybe worth considering chloroquine for C19".
Musk's tweet was liked over 50,200 times and received more than 13,500 retweets. The tweet was followed by several other tweets that day in which Musk shared a series of graphs and a paper alluding to the "potential benefit" of hydroxychloroquine in in vitro and early clinical data. Although Musk is not a medical expert, he is a public figure with status and a large online following, which contributed to the hype around hydroxychloroquine as a potential treatment for COVID-19. Following Musk's comments, search interest in chloroquine soared and mainstream media outlets covered his apparent endorsement of the drug. On 19 March 2020, the Fox News programme Tucker Carlson Tonight cited a study declaring hydroxychloroquine to have a "100% cure rate against coronavirus" (Gautret et al.). Within hours another public figure, the then-US President Donald Trump, announced at a White House Coronavirus Task Force briefing that the FDA would fast-track approval of hydroxychloroquine, a drug used to treat malaria and arthritis, which he said had "tremendous promise based on the results and other tests". Despite the Chief Medical Advisor to the President, Dr Anthony Fauci, dismissing claims concerning the efficacy of hydroxychloroquine as a potential therapy for coronavirus as "anecdotal evidence", Trump continued to endorse hydroxychloroquine, describing the drug as a "game changer": HYDROXYCHLOROQUINE & AZITHROMYCIN, taken together, have a real chance to be one of the biggest game changers in the history of medicine. He said that the drugs should be put in use IMMEDIATELY. PEOPLE ARE DYING, MOVE FAST, and GOD BLESS EVERYONE! Trump's tweet was shared over 102,800 times and liked over 384,800 times. His statements correlated with a 2000% increase in prescriptions for the anti-malarial drugs hydroxychloroquine and chloroquine in the US between 15 and 21 March 2020, leaving many lupus patients unable to source the drug.
There were also reports of overdoses as individuals sought to self-medicate with the drug to treat the virus. Once Trump declared himself a proponent of hydroxychloroquine, scientific inquiry into the drug was eclipsed by an overtl…
In this paper, we argue that strategic information operations (e.g. disinformation, political propaganda, and other forms of online manipulation) are a critical concern for CSCW researchers, and that the CSCW community can provide vital insight into understanding how these operations function, by examining them as collaborative "work" within online crowds. First, we provide needed definitions and a framework for conceptualizing strategic information operations, highlighting related literatures and noting historical context. Next, we examine three case studies of online information operations using a sociotechnical lens that draws on CSCW theories and methods to account for the mutual shaping of technology, social structure, and human action. Through this lens, we contribute a more nuanced understanding of these operations (beyond "bots" and "trolls") and highlight a persistent challenge for researchers, platform designers, and policy makers: distinguishing between orchestrated, explicitly coordinated information operations and the emergent, organic behaviors of an online crowd.
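The orchestrated-versus-organic distinction that closes the abstract above is often operationalized with simple behavioral heuristics. One such heuristic, sketched here on entirely hypothetical data (the accounts, timestamps, and threshold values are invented for illustration, not drawn from the paper), flags identical text posted by several distinct accounts within a narrow time window:

```python
# Toy illustration (hypothetical data): identical messages posted by multiple
# distinct accounts within seconds of each other suggest orchestration rather
# than the looser timing of an organic crowd.
from collections import defaultdict

posts = [  # (account, timestamp in seconds, text), all hypothetical
    ("acct_1", 100, "Share this now! #op"),
    ("acct_2", 102, "Share this now! #op"),
    ("acct_3", 105, "Share this now! #op"),
    ("acct_9", 300, "Interesting thread about the hearings"),
]

MIN_ACCOUNTS = 3   # distinct accounts that must repeat the message
WINDOW = 60        # seconds within which the copies must appear

# Group postings by their exact text.
by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text].append((ts, account))

coordinated = set()
for text, items in by_text.items():
    items.sort()
    accounts = {a for _, a in items}
    time_span = items[-1][0] - items[0][0]
    if len(accounts) >= MIN_ACCOUNTS and time_span <= WINDOW:
        coordinated.add(text)
```

Production systems layer many such signals (shared URLs, account creation dates, posting cadence) precisely because, as the paper stresses, no single heuristic cleanly separates campaigns from crowds.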
The research framework produced by this synthesis systematically spans political communication from micro-level technical tools to macro-level governance narratives. The core trends are: computational methods (LLMs, multimodal analysis) are becoming deeply embedded in political text analysis; the research focus is shifting from single-purpose public opinion monitoring toward defensive research on computational propaganda and information manipulation; and national-image narratives increasingly emphasize cross-cultural adaptation and multidimensional perspectives. The report also highlights the key role of intelligent technologies in educational transformation and the building of social resilience, reflecting how political communication research in the computational era is moving toward greater predictive power, intervention capacity, and governance effectiveness.
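The text-mining methods surveyed in this review (topic modeling, stance extraction, LLM-based analysis) generally build on basic term weighting. A minimal, stdlib-only sketch on toy documents (the speeches and vocabulary below are invented for illustration) shows the TF-IDF idea that underlies much political text mining:

```python
# Toy sketch of TF-IDF: terms frequent in one document but rare across the
# corpus receive high scores, surfacing what distinguishes each speech.
import math
from collections import Counter

docs = {  # hypothetical one-line "speeches"
    "speech_a": "economy jobs growth economy trade",
    "speech_b": "security borders defense security",
    "speech_c": "economy healthcare education jobs",
}
tokens = {name: text.split() for name, text in docs.items()}

# Document frequency: in how many documents each term appears.
df = Counter()
for toks in tokens.values():
    df.update(set(toks))
n_docs = len(docs)

def tfidf(name):
    """Term frequency * inverse document frequency for one document."""
    tf = Counter(tokens[name])
    total = len(tokens[name])
    return {t: (c / total) * math.log(n_docs / df[t]) for t, c in tf.items()}

# The term most distinctive of speech_b.
top_term = max(tfidf("speech_b"), key=tfidf("speech_b").get)
```

Topic models such as LDA and modern embedding-based methods go far beyond this, but they share the same starting point: representing political texts as weighted term distributions before any higher-level narrative analysis.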