Hateful Memes: Phase 1

Detecting hateful content presents a unique challenge in memes, where multiple data modalities need to be analyzed together. Facebook is calling on researchers around the world to help identify which memes contain hate speech.

Development arena
Completed Apr 2021
3,929 joined

Problem description

Your goal is to predict whether a meme is hateful or non-hateful. This is a binary classification problem with multimodal input data consisting of the meme image itself (the image mode) and a string representing the text in the meme image (the text mode).

Given a meme id, meme image file, and a string representing the text in the meme image, your trained model should output the probability that the meme is hateful.
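For orientation, here is a minimal sketch of that interface, using a hypothetical function name and a constant placeholder score in place of a trained model:

def predict_hateful_proba(meme_id: int, image_path: str, text: str) -> float:
    # Placeholder only: a real model would fuse features from both
    # the image and the text modalities before producing a score.
    return 0.5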

Competition timeline and leaderboard


You are in the Phase 1 portion of this competition. Phase 1 is the research stage of the competition. In this phase, you can make 1 submission per day on the seen test set. The best scores for each team in Phase 2 will determine final prize rankings. If you're new to the challenge, this is a great place to get started. If you are ready, head on over to Hateful Memes: Phase 2 to compete for the cash prize.

During Phase 2 of the competition, you may make up to 3 submissions of your predictions for the new test_unseen set; your Phase 1 submissions are for the test_seen set. Your Phase 2 scores, not your Phase 1 scores, will determine the final leaderboard! We will send out an announcement to signed-up participants at the beginning of Phase 2.

Team joining deadline: No new teams will be allowed in Phase 2. If you would like to form a team, you must do so during Phase 1 prior to the start of the Phase 2 competition.

Phase 2 will run from 12:00 am UTC on October 1, 2020 to 11:59 pm UTC on October 31, 2020.


The competition proceeds in three stages:

  • Develop models using the train and dev sets (Phase 1): data exploration and model building. Participants can get access to the research dataset to explore the data and start developing their models.
  • Score against the public test set (Phase 1): feedback from the public test set. Submissions may be made to the public leaderboard, but these scores will not determine final rankings for prizes.
  • Final evaluation against the unseen test set (Phase 2): participants will then have the opportunity to make three submissions against a new, unseen test set. Performance against this dataset will be used to determine prizes.


The features in this data set


The features in this data set are the meme images themselves and string representations of the text in the image, so you do not need to apply your own OCR algorithm to extract the meme text! The meme images and text extractions live in separate files; below we'll show you how they can be matched to each other using the meme id.

Memes

The Hateful Memes data set consists of five files and one directory.

The directory is called img, and it contains all of the meme images to which you'll have access: train, dev, and test. The images are named <id>.png, where <id> is a unique five-digit number. For example, if you had the img directory in data/raw, then listing the first few examples would give

$ ls data/raw/img | head -n 5
01235.png
01236.png
01243.png
01245.png
01247.png

The five files are JSON Lines (.jsonl) files.

  • train.jsonl
  • dev_seen.jsonl
  • dev_unseen.jsonl
  • test_seen.jsonl
  • test_unseen.jsonl

The "seen" and "unseen" suffixes give you the separation between Phase 1 and Phase 2. The Phase 1 leaderboard uses the test_seen labels, and its submission format file can be found on the Phase 1 data download page. The Phase 2 leaderboard uses the test_unseen labels, and its submission can be found on the Phase 2 data download page.

Each line in a .jsonl file is valid JSON. For example, if train.jsonl is in data/raw, we can view the first few lines of JSON using

$ head -n 5 data/raw/train.jsonl 
{"id":42953,"img":"img\/42953.png","label":0,"text":"its their character not their color that matters"}
{"id":23058,"img":"img\/23058.png","label":0,"text":"don't be afraid to love again everyone is not like your ex"}
{"id":13894,"img":"img\/13894.png","label":0,"text":"putting bows on your pet"}
{"id":37408,"img":"img\/37408.png","label":0,"text":"i love everything and everybody! except for squirrels i hate squirrels"}
{"id":82403,"img":"img\/82403.png","label":0,"text":"everybody loves chocolate chip cookies, even hitler"}

For the most part, each line of JSON corresponds to one and only one meme in the img directory. A meme referenced in train.jsonl belongs to the training set, whereas memes in dev_seen.jsonl and dev_unseen.jsonl are intended to be used for model validation; there is some overlap between the two validation sets. As described above, the test_seen.jsonl memes will be used to generate your submissions to the Phase 1 competition leaderboard, whereas the test_unseen.jsonl memes will be used to generate your submissions to the Phase 2 competition leaderboard.

All .jsonl files provide the following fields:

  • id. The unique identifier linking the img directory and the .jsonl files, e.g., "id": 13894.
  • img. The meme filename, e.g., "img": "img/13894.png". Note that the filename includes the img directory described above, and that the filename stem is the id.
  • text. The raw text string embedded in the meme image, e.g., img/13894.png has "text": "putting bows on your pet".

Additionally, if the meme belongs to train.jsonl, dev_seen.jsonl, or dev_unseen.jsonl, it will contain an additional field giving the label:

  • label where 1 -> "hateful" and 0 -> "non-hateful"

Of course, you are not provided labels for the memes referenced in test_seen.jsonl and test_unseen.jsonl.

Tip. You can load these .jsonl files into a Pandas DataFrame using pd.read_json(filepath, lines=True). The lines=True argument tells pandas that this is a .jsonl data structure.
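For example, assuming train.jsonl is in data/raw as above, loading it and looking at the head might look like this:

import pandas as pd

# lines=True parses the file as JSON Lines: one JSON object per line.
train = pd.read_json("data/raw/train.jsonl", lines=True)
print(train.head())

which gives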

id img label text
0 42953 img/42953.png 0 its their character not their color that matters
1 23058 img/23058.png 0 don't be afraid to love again everyone is not like your ex
2 13894 img/13894.png 0 putting bows on your pet
3 37408 img/37408.png 0 i love everything and everybody! except for squirrels i hate squirrels
4 82403 img/82403.png 0 everybody loves chocolate chip cookies, even hitler

To load the data using Python's native json library, you can use a list comprehension like data = [json.loads(line) for line in open('train.jsonl').read().splitlines()].
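Building on that, here is a sketch of matching a text record back to its meme image by id; the data/raw root is as in the examples above, and the use of Pillow for image loading is an assumption, not a requirement:

import json
from PIL import Image  # Pillow, one of many ways to open the image

# Each record's "img" field already includes the img/ directory,
# so joining it with the data root yields the full image path.
with open("data/raw/train.jsonl") as f:
    records = [json.loads(line) for line in f]

first = records[0]
image = Image.open("data/raw/" + first["img"])
print(first["id"], image.size, first["text"])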

Performance metric


Model performance and leaderboard rankings will be determined using the AUC ROC: the Area Under the Curve of the Receiver Operating Characteristic. This metric measures how well your binary classifier discriminates between the classes as its decision threshold is varied. This means you'll need to submit probabilities for each prediction.

In Python, you can calculate AUC ROC using sklearn.metrics.roc_auc_score. We use the default macro averaging strategy. For more on the AUC ROC metric, check out this post.

In this competition, we also calculate the accuracy of your predictions, given by the ratio of correct predictions to the total number of predictions made. This means you'll need to submit a binary label for each of your predictions as well as the probabilities. As described above, rankings will be determined by your best submissions' AUC ROC; the accuracy scores of those submissions are provided for additional information only, since they are more easily interpretable.
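To make the two metrics concrete, here is a small sketch using sklearn.metrics with made-up labels and scores (the arrays are illustrative only, not real competition data):

from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical ground truth and predicted probabilities.
y_true = [0, 0, 1, 1, 0]
y_proba = [0.1, 0.4, 0.8, 0.35, 0.2]

# Binary labels derived here with a 0.5 threshold; you may choose any rule.
y_label = [1 if p >= 0.5 else 0 for p in y_proba]

print(roc_auc_score(y_true, y_proba))   # determines leaderboard ranking
print(accuracy_score(y_true, y_label))  # reported for interpretability only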

Phase 1 Submission format


Submissions are required to have three columns:

  • Meme identification number, id
  • Probability that the meme is hateful, proba (must be a float)
  • Binary label that the meme is hateful (1) or non-hateful (0), label (must be an int)

Your submission must have columns with exactly these names. Further, the id values in your Phase 1 submission must be exactly the id values in the Phase 1 submission format file, which you can download on the Phase 1 data download page.

The proba column will be used to score AUC ROC, which will determine your ranking on the leaderboard. The label column will be used to determine accuracy, which will be displayed on the leaderboard soon, but does not impact your ranking.

For example, if you predicted...
id proba label
0 16395 0.4 0
1 37405 0.4 0
2 94180 0.4 0
3 54321 0.4 0
4 97015 0.4 0

The .csv file that you submit would look like:

id,proba,label
16395,0.4,0
37405,0.4,0
94180,0.4,0
54321,0.4,0
97015,0.4,0
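One way to produce such a file with pandas (column names from the spec above; the values are the placeholder predictions from the example):

import pandas as pd

# Placeholder predictions; substitute your model's ids, probabilities,
# and labels. The id values must match the submission format file.
submission = pd.DataFrame({
    "id": [16395, 37405, 94180, 54321, 97015],
    "proba": [0.4, 0.4, 0.4, 0.4, 0.4],
    "label": [0, 0, 0, 0, 0],
})

# index=False keeps the DataFrame index out of the submitted csv.
submission.to_csv("submission.csv", index=False)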

Good luck!


Good luck and enjoy this problem! If you have any questions, you can always visit the user forum!


NO PURCHASE NECESSARY TO ENTER/WIN. A PURCHASE WILL NOT INCREASE YOUR CHANCES OF WINNING. The Competition consists of two (2) Phases, with winners determined based upon Submissions using the Phase II dataset. The start and end dates and times for each Phase will be set forth on this Competition Website. Open to legal residents of the Territory, 18+ & age of majority. "Territory" means any country, state, or province where the laws of the US or local law do not prohibit participating or receiving a prize in the Challenge and excludes any area or country designated by the United States Treasury's Office of Foreign Assets Control (e.g. Cuba, Sudan, Crimea, Iran, North Korea, Syria, Venezuela). Any Participant use of External Data must be pursuant to a valid license. Void outside the Territory and where prohibited by law. Participation subject to official Competition Rules. Prizes: $50,000 USD (1st), $25,000 (2nd), $10,000 USD (3rd), $8,000 USD (4th), $7,000 USD (5th). See Official Rules and Competition Website for submission requirements, evaluation metrics and full details. Sponsor: Facebook, Inc., 1 Hacker Way, Menlo Park, CA 94025 USA.