Hateful Memes: Phase 2

Detecting hateful content presents a unique challenge in memes, where multiple data modalities need to be analyzed together. Facebook is calling on researchers around the world to help identify which memes contain hate speech.

$100,000 in prizes
Oct 2020
3,164 joined

Facebook AI

The Hateful Memes data set was compiled by Facebook AI. Facebook AI seeks to understand and develop systems with human-level intelligence by advancing the longer-term academic problems surrounding AI. Their research covers theory, algorithms, applications, software infrastructure, and hardware infrastructure across areas including computer vision, conversational AI, integrity, natural language processing, ranking and recommendations, systems research, theory, speech & audio, human & machine intelligence, and more.

There are many reasons why a hateful meme can be hard to spot, even for human experts. Because meme data is multimodal—each sample in the Hateful Memes data set consists of text and image information—one modality may appear non-hateful while the other is clearly hateful (unimodal hate). For example, non-hateful text may overlay a hateful image; if that image were swapped for a benign one, the meme would become non-hateful. The Facebook AI team that built the Hateful Memes challenge data set calls such label-flipping examples benign confounders. It is the presence of benign confounders that forces successful hateful meme classification systems to be multimodal: neither modality alone is enough to determine the label.
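To make the multimodal structure concrete, here is a minimal sketch of reading a single sample. The JSON-lines layout, field names (`id`, `img`, `label`, `text`), and the sample content below are assumptions for illustration only, not the authoritative data format:

```python
import json

# Hypothetical sample in an assumed JSON-lines layout: an image path,
# the text overlaid on that image, and a binary label.
sample_line = (
    '{"id": 1234, "img": "img/01234.png", "label": 0, '
    '"text": "look how peaceful it is out here"}'
)

sample = json.loads(sample_line)

# A classifier must consume both modalities jointly: with benign
# confounders in the data, neither field alone determines the label.
image_path = sample["img"]         # visual modality
caption = sample["text"]           # textual modality
is_hateful = sample["label"] == 1  # 1 = hateful, 0 = not hateful

print(image_path, caption, is_hateful)
```

A real pipeline would load the image at `image_path` and feed both the pixels and the caption into a joint (multimodal) model rather than scoring either modality separately.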

You can learn more here: