Executive Summary: This project investigates harmfulness in memes in order to detect, trace, and moderate illicit content. It addresses challenges such as abstract obscurity, multimodal amalgamation, contextual dependency, and cross-modal (dis-)association. The project has three objectives: characterizing memetic harmfulness and detecting the targeted category, detecting harmfully targeted entities, and contextualizing memes. A large-scale dataset will be collected, with memes analyzed for harmfulness cues and organized by domain, category, and topic. An effective multimodal fusion strategy will be designed that combines localized and global perspectives within a meme, and the learned multimodal representations will be used for multiclass and multi-label classification. For harmful-target detection, the dataset will be segregated into harmful and non-harmful entities during training and modeling. The final objective is to contextualize memes using a multimodal dataset with manually annotated natural-language rationales. A multi-task learning framework will be established to jointly optimize these related tasks. The overall aim is an automated, comprehensive multimodal meme analysis system that supports assistive moderation technology.
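To make the proposed pipeline concrete, the following is a minimal, hypothetical sketch (not the project's actual implementation) of the fusion-plus-multi-task idea: projected image and text features are fused into a shared representation, which feeds two task heads, a multiclass harmfulness head and a multi-label target-category head. All dimensions, weight names, and the simple late-fusion choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def late_fusion_forward(img_feats, txt_feats, params):
    """Project each modality into a common space, concatenate, and apply
    two task-specific heads on the shared fused representation
    (illustrative late fusion; the actual strategy may differ)."""
    z_img = img_feats @ params["W_img"]   # image projection
    z_txt = txt_feats @ params["W_txt"]   # text projection
    fused = np.maximum(
        np.concatenate([z_img, z_txt], axis=-1) @ params["W_fuse"], 0.0)  # ReLU
    harm_logits = fused @ params["W_harm"]      # multiclass harmfulness head
    target_logits = fused @ params["W_target"]  # multi-label target-category head
    return harm_logits, target_logits

# Hypothetical sizes: pooled image/text encoder outputs and a fused space.
img_dim, txt_dim, d = 512, 256, 128
params = {
    "W_img": rng.standard_normal((img_dim, d)) * 0.02,
    "W_txt": rng.standard_normal((txt_dim, d)) * 0.02,
    "W_fuse": rng.standard_normal((2 * d, d)) * 0.02,
    "W_harm": rng.standard_normal((d, 3)) * 0.02,    # e.g. 3 harmfulness classes
    "W_target": rng.standard_normal((d, 5)) * 0.02,  # e.g. 5 target categories
}
img = rng.standard_normal((4, img_dim))  # stand-in pooled image features
txt = rng.standard_normal((4, txt_dim))  # stand-in pooled text features
harm_logits, target_logits = late_fusion_forward(img, txt, params)
```

In a multi-task training setup, a cross-entropy loss on the harmfulness logits and a per-label binary loss on the target logits would be summed and backpropagated through the shared fusion layers, which is what lets the related tasks regularize one another.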