April 16, 2021
Journal Article

Machine Intelligence to Detect, Characterise, and Defend against Influence Operations in the Information Environment

Abstract

Social media has enabled a new era of manipulation in the information and cognitive domains. Deceptive content (misleading, falsified, and fabricated) is routinely created and spread in the modern social media environment with the intent to create confusion, widen political and social divides, and exploit the societal conflict those divides exacerbate in the real world (the physical domain). Such disinformation campaigns pose a threat to the integrity of economic, political, cultural, public health, and national security institutions around the world. In this work, we present an overview of our artificial intelligence (AI) capabilities to detect, describe, and defend against information operations, using Twitter as an example social platform, in order to understand the diffusion of misleading and falsified content and to better enable those charged with defending against such manipulation to counter it. We first present novel linguistically-informed deep learning (DL) models for misinformation and disinformation detection, along with an in-depth analysis of psycho-linguistic markers across broad deception categories. We then demonstrate how our models perform in multilingual and multimodal settings and categorize falsified and misleading content by intent to deceive. We also provide a large-scale analysis of user behavior and spread patterns in engagement with deceptive content and report novel findings on its immediate diffusion, characterizing vulnerable sub-populations and their demographics and explicitly measuring the speed and scale of deception spread to uncover who shares deceptive content, how quickly, how much, and how evenly. In addition, we measure audience reactions to misinformation and disinformation at scale, distinguishing the reactions of users identified as bots from those of humans. Finally, we take advantage of deep translation and generation models to create unique solutions for real-time defense against digital deception and discuss how causal inference can be applied to prescribe and intervene in strategic communications jointly across the information, cognitive, and physical domains.
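To make the idea of a linguistically-informed detection model concrete, the sketch below fuses a learned text representation with a vector of hand-crafted psycho-linguistic features (e.g., counts of hedges, pronouns, or punctuation) in a single deception classifier. This is a minimal illustration, not the architecture from the article; the class name, feature set, and dimensions are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the authors' model): a classifier that
# combines a GRU text encoding with psycho-linguistic features.
import torch
import torch.nn as nn

class LinguisticallyInformedClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64,
                 num_ling_features=12, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # The classifier head sees both the text encoding and the
        # psycho-linguistic feature vector.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + num_ling_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, token_ids, ling_features):
        # token_ids: (batch, seq_len) integer token indices
        # ling_features: (batch, num_ling_features) normalized feature values
        embedded = self.embedding(token_ids)
        _, hidden = self.encoder(embedded)        # hidden: (2, batch, hidden_dim)
        text_repr = torch.cat([hidden[0], hidden[1]], dim=-1)
        fused = torch.cat([text_repr, ling_features], dim=-1)
        return self.classifier(fused)             # logits over deception classes

# Example forward pass on random inputs.
model = LinguisticallyInformedClassifier()
logits = model(torch.randint(1, 20000, (4, 30)), torch.rand(4, 12))
print(logits.shape)  # torch.Size([4, 2])
```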


Citation

Volkova, S., M.F. Glenski, E.M. Ayton, E.G. Saldanha, J.A. Mendoza, D.L. Arendt, Z.H. Shaw, et al. 2021. Machine Intelligence to Detect, Characterise, and Defend against Influence Operations in the Information Environment. Journal of Information Warfare 20, no. 2: 42-66. PNNL-SA-154713.