
Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection Datasets

This repository contains the datasets used in the paper Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection.

Project Page | Code

Abstract

Recent advances in Large Multimodal Models (LMMs) have shown promise in hateful meme detection, but face challenges like sub-optimal performance and limited out-of-domain generalization. This work proposes a robust adaptation framework for hateful meme detection that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Our approach achieves improved robustness under adversarial attacks compared to supervised fine-tuning (SFT) models and state-of-the-art performance on six meme classification datasets, outperforming larger agentic systems. Additionally, our method generates higher-quality rationales for explaining hateful content, enhancing model interpretability.

Dataset Preparation

The datasets consist of image data and corresponding annotation data.

Image data

Copy images into the ./data/image/dataset_name/All folder. For example: ./data/image/FB/All/12345.png, ./data/image/HarMeme/All, ./data/image/Propaganda/All, etc.
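The expected layout can be summarized with a small path helper (a sketch for illustration only; the dataset name "FB" and the image ID are examples, not part of the repository's code):

```python
from pathlib import Path

# Expected layout: ./data/image/<dataset_name>/All/<image_id>.<ext>
def image_path(root: str, dataset: str, image_id: str, ext: str = "png") -> Path:
    """Build the path where an image for `dataset` is expected to live."""
    return Path(root) / "image" / dataset / "All" / f"{image_id}.{ext}"

print(image_path("./data", "FB", "12345"))  # → data/image/FB/All/12345.png
```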

Annotation data

Copy the JSONL annotation file into the ./data/gt/dataset_name folder.

Sample Usage

To generate CLIP embeddings for the datasets prior to training, you can use the provided script as follows:

python3 src/utils/generate_CLIP_embedding_HF.py --dataset "FB"
python3 src/utils/generate_CLIP_embedding_HF.py --dataset "HarMeme"

Similarly, to generate ALIGN embeddings:

python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "FB"
python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "HarMeme"
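These embeddings later serve retrieval. As a rough sketch of how precomputed embeddings can be queried by cosine similarity (illustrative NumPy only, not the repository's actual retrieval code; the function name and shapes are assumptions):

```python
import numpy as np

def top_k_neighbors(query: np.ndarray, bank: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k rows of `bank` most cosine-similar to `query`.

    `bank` is an (N, D) matrix of precomputed embeddings (e.g. CLIP or ALIGN)
    and `query` a (D,) vector; both are L2-normalized before comparison.
    """
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                      # cosine similarity to every row
    return np.argsort(-sims)[:k]     # indices sorted by descending similarity
```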

Citation

If our work helped your research, please cite our papers:

@inproceedings{RGCL2024Mei,
    title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Lin, Weizhe  and
      Byrne, Bill  and
      Tomalin, Marcus",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.291",
    doi = "10.18653/v1/2024.acl-long.291",
    pages = "5333--5347"
}

@article{RAHMD2025Mei,
    title = "Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection",
    author = "Mei, Jingbiao  and
      Chen, Jinghong  and
      Yang, Guangyu  and
      Lin, Weizhe  and
      Byrne, Bill",
    year = "2025",
    month = may,
    url = "http://arxiv.org/abs/2502.13061",
    doi = "10.48550/arXiv.2502.13061",
    note = "arXiv:2502.13061 [cs]",
    publisher = "arXiv"
}