Many important recent events, such as political elections and the coronavirus (COVID-19) outbreak, have been accompanied by the widespread diffusion of misinformation. How can AI help?

The tutorial will be held on November 19, 2020.

Description

The rise of social media has democratized content creation and has made it easy for anybody to share and spread information online. On the positive side, this has given rise to citizen journalism, enabling much faster dissemination of information than was possible with newspapers, radio, and TV. On the negative side, stripping traditional media of their gate-keeping role has left the public unprotected against the spread of disinformation, which can now travel at breaking-news speed over the same democratic channel. This situation has led to the proliferation of false information specifically created to affect individual people's beliefs, and ultimately to influence major events such as political elections; it has also marked the dawn of the Post-Truth Era, in which appeals to emotion have become more important than the truth. More recently, with the emergence of the COVID-19 pandemic, a new blend of medical and political misinformation and disinformation has given rise to the first global infodemic. Limiting the impact of these negative developments has become a major focus for journalists, social media companies, and regulatory authorities.

The tutorial offers an overview of the emerging and inter-connected research areas of fact-checking, misinformation, disinformation, “fake news”, propaganda, and media bias detection, with a focus on text and on computational approaches. It further explores the general fact-checking pipeline and its key elements, such as check-worthiness estimation, detecting previously fact-checked claims, stance detection, source reliability estimation, and detecting malicious users in social media. Finally, it covers some recent developments, such as the emergence of large-scale pre-trained language models and the challenges and opportunities they offer.
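
To make the pipeline concrete, here is a minimal Python sketch of how these elements could fit together. All function names and their toy logic are hypothetical placeholders invented for illustration; a real system would plug in trained models for check-worthiness estimation, evidence retrieval, and stance detection, as discussed in the sessions below.

```python
# A minimal, self-contained sketch of the fact-checking pipeline described
# above. Every function is a toy placeholder (hypothetical names and logic);
# a real system would plug in trained models and actual evidence retrieval.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    label: str      # "supported" or "not enough info" in this toy sketch
    evidence: list

def is_check_worthy(sentence: str) -> bool:
    # Placeholder for a check-worthiness classifier:
    # here we naively flag sentences that contain numbers.
    return any(any(ch.isdigit() for ch in tok) for tok in sentence.split())

def retrieve_evidence(claim: str) -> list:
    # Placeholder for retrieval against the Web, Wikipedia, or a knowledge base.
    return ["On 11 March 2020, the WHO characterized COVID-19 as a pandemic."]

def stance(evidence: str, claim: str) -> str:
    # Placeholder for a stance classifier (agree / disagree / discuss / unrelated):
    # here we crudely say "agree" when the two texts share a longer token.
    shared = set(evidence.lower().split()) & set(claim.lower().split())
    return "agree" if any(len(tok) > 4 for tok in shared) else "unrelated"

def fact_check(document: str) -> list:
    verdicts = []
    for sentence in document.split("."):
        sentence = sentence.strip()
        if sentence and is_check_worthy(sentence):
            evidence = retrieve_evidence(sentence)
            labels = [stance(e, sentence) for e in evidence]
            label = "supported" if "agree" in labels else "not enough info"
            verdicts.append(Verdict(sentence, label, evidence))
    return verdicts

if __name__ == "__main__":
    text = "The WHO declared COVID-19 a pandemic in March 2020. The weather is nice."
    for verdict in fact_check(text):
        print(verdict)
```

In this toy run, only the first sentence is flagged as check-worthy, the retrieved evidence agrees with it, and it is labeled "supported"; each placeholder corresponds to a pipeline stage covered in its own session of the tutorial.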

Prior knowledge of natural language processing, machine learning, and deep learning is needed to follow large parts of this tutorial.

Tutorial Outline

Here is a tentative yet detailed outline of the tutorial.

  1. Introduction
    1. What is “fake news”?
    2. “Fake news” as a weapon of mass deception
      • the impact of “fake news” on politics, finance, and health
      • Does it really work?
      • Can we win the war on “fake news”?
  2. Check-worthiness
    1. Task definition
    2. Datasets
    3. Approaches
      1. ClaimBuster
      2. ClaimRank: modeling the context, multi-source learning, multi-linguality
      3. CLEF shared tasks
  3. Fact-checking
    1. Task definitions
    2. Walk-through example: how humans verify a claim manually
    3. Datasets: Snopes, “Liar, Liar Pants on Fire”, FEVER
    4. Information sources: knowledge bases, Wikipedia, Web, social media
    5. Tasks and approaches
      1. fact-checking against knowledge bases
      2. fact-checking against Wikipedia
      3. fact-checking claims using the Web
      4. fact-checking rumors in social media
      5. fact-checking multi-modal claims, e.g., about images
      6. fact-checking the answers in community question answering forums
    6. Shared tasks at SemEval and FEVER
  4. Fake News Detection
    1. Task definitions and examples
    2. Datasets: FakeNewsNet, NELA-GT-2018, etc.
    3. The language of fake news
    4. Special case: clickbait
    5. Tasks and approaches
      1. neural methods for fake news detection
      2. multi-linguality
  5. Coffee Break [30 mins]
  6. Stance Detection
    1. Task definitions and examples
    2. Datasets
    3. Stance detection as a key element of fact-checking (see the code sketch after this outline)
    4. Information sources: text, social context, user profile
    5. Tasks and approaches
      1. neural methods for stance detection
      2. cross-language stance detection
    6. Shared tasks at SemEval and the Fake News Challenge
  7. Source Reliability and Media Bias Estimation
    1. Task definitions and examples
    2. Datasets: Media Bias/Fact Check, AllSides, OpenSources, etc.
    3. Source reliability as a key element of fact-checking
    4. Special case: hyper-partisanship
    5. Information sources: article text, Wikipedia, social media
    6. Tasks and approaches
      1. neural methods for source reliability estimation
      2. multi-modality
      3. multi-task learning
  8. Propaganda Detection
    1. Task definitions and examples
    2. Propaganda techniques and examples
    3. Datasets
    4. Tasks and approaches
  9. Malicious User Detection [10 mins]
    1. Typology of malicious users
    2. How can trolls be stopped?
    3. Datasets
    4. Tasks and approaches
      1. detection of opinion manipulation trolls
      2. understanding the role of political trolls
      3. bot detection
  10. Future Challenges [15 mins]
    1. Deep fakes: images, voice, video, text
    2. Text generation: GPT-2, GPT-3, GROVER
    3. Defending against fake news
    4. Fighting the COVID-19 Infodemic
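
As a hedged illustration of the stance detection step referenced above, the sketch below casts stance classification as textual entailment, using the zero-shot classification pipeline from the Hugging Face transformers library with an NLI model. The model choice, the example claim and evidence, and the three-way label set are illustrative assumptions, not the systems or datasets covered in the tutorial.

```python
# A minimal sketch: stance detection cast as textual entailment with an
# off-the-shelf zero-shot classifier (requires: pip install transformers torch).
# The model, labels, and example texts are illustrative assumptions only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = "The WHO declared COVID-19 a pandemic in March 2020."
evidence = ("On 11 March 2020, the World Health Organization "
            "characterized COVID-19 as a pandemic.")

# Each candidate label fills the {} slot, yielding hypotheses such as
# "This text agrees with the claim: ...", which the NLI model scores.
result = classifier(
    evidence,
    candidate_labels=["agrees with", "disagrees with", "discusses"],
    hypothesis_template=f"This text {{}} the claim: {claim}",
)
print(result["labels"][0], round(result["scores"][0], 3))
```

Casting stance as entailment works because the NLI model scores how well each filled-in hypothesis follows from the evidence; dedicated stance models trained on corpora such as the Fake News Challenge dataset typically use richer label schemes and perform better.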

Tutorial Speakers

Preslav Nakov

Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar

Dr. Preslav Nakov is a Principal Scientist at the Qatar Computing Research Institute (QCRI), HBKU. His research interests include computational linguistics, “fake news” detection, fact-checking, machine translation, question answering, sentiment analysis, lexical semantics, Web as a corpus, and biomedical text processing. He received his PhD degree from the University of California at Berkeley (supported by a Fulbright grant), and he was a Research Fellow at the National University of Singapore, an honorary lecturer at Sofia University, and a research staff member at the Bulgarian Academy of Sciences. At QCRI, he leads the Tanbih project, developed in collaboration with MIT, which aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading. Dr. Nakov is President of ACL SIGLEX, Secretary of ACL SIGSLAV, and a member of the EACL advisory board. He is a member of the editorial boards of TACL, CS&L, NLE, AI Communications, and Frontiers in AI, and he is also on the editorial board of the Language Science Press book series on Phraseology and Multiword Expressions. He co-authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and many research papers in top-tier conferences and journals. He received the Young Researcher Award at RANLP’2011, and he was the first to receive the Bulgarian President’s John Atanasoff award, named after the inventor of the first automatic electronic digital computer. Dr. Nakov’s research on “fake news” has been featured in over 100 news outlets, including Forbes, the Boston Globe, Al Jazeera, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget.

Giovanni Da San Martino

Department of Mathematics, University of Padova, Padova, Italy

Giovanni Da San Martino is a Senior Assistant Professor at the University of Padova, Italy. His research interests are at the intersection of machine learning and natural language processing. He has been doing research on these topics for more than ten years and has published more than 60 papers in top-tier conferences and journals. He has worked on several NLP tasks, including paraphrase recognition, stance detection, and community question answering. Currently, he is actively involved in research on disinformation and propaganda detection, for which he co-organises the CheckThat! lab at CLEF 2018–2020, the NLP4IF workshops on censorship, disinformation, and propaganda and their shared tasks, the 2019 Hack the News Datathon, and SemEval-2020 Task 11 on “Detection of Propaganda Techniques in News Articles”.

Contact

If you have any questions about the tutorial, feel free to send us an email.

 
 