Social media plays a valuable role in rapid news dissemination, but it also serves as a vehicle for propagating unverified information. For example, news shared on Facebook or Twitter may contain disinformation, propaganda, hoaxes, conspiracies, clickbait, or satire. This paper presents an in-depth analysis of the behavior of suspicious news classification models, including error analysis and prediction confidence. We consider five deep learning architectures that leverage combinations of text, linguistic, and image input signals from tweets. The behavior of these models is analyzed across four suspicious news prediction tasks. Our findings include that models leveraging only the text of tweets outperform those leveraging only the image (by 3-13% absolute in F-measure), and that models combining image and text signals with linguistic cues (e.g., biased and subjective language markers) can, but do not always, perform even better. Finally, our main contribution is a series of analyses in which we characterize text and image traits of our classes of suspicious news and analyze patterns of errors made by the various models to inform the design of future deceptive news prediction models.
Published: July 6, 2019
Revised: July 8, 2019
Volkova, S., E.M. Ayton, D.L. Arendt, Z. Huang, and B.J. Hutchinson. 2019. Explaining Multimodal Deceptive News Prediction Models. In Proceedings of the Thirteenth International AAAI Conference on Web and Social Media (ICWSM 2019), June 11-14, 2019, Munich, Germany, 659-662. Menlo Park, California: Association for the Advancement of Artificial Intelligence. PNNL-SA-135457.