
This model is a multilingual sentence-level media bias classifier.

It is a version of mediabiasgroup/magpie-pt-xlm fine-tuned for media bias classification. The base model was pre-trained on the LBM (Large Bias Mixture) collection of 59 bias-related tasks and then fine-tuned on the mediabiasgroup/BABE dataset.
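A minimal usage sketch with the Transformers library is shown below. It assumes the checkpoint loads as a standard XLM-RoBERTa sequence classification model and reads the label names from the model config rather than hardcoding them; the example sentence is illustrative only.

```python
# Minimal usage sketch (assumption: the checkpoint works with the standard
# AutoModelForSequenceClassification interface; labels are taken from the config).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mediabiasgroup/magpie-babe-ft-xlm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "The corrupt politicians once again ignored the will of the people."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. the predicted bias label
```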


Citation

The training code is available at: https://github.com/Media-Bias-Group/magpie-multi-task
The paper is available at: https://aclanthology.org/2024.lrec-main.952/

If you use this model, please cite the following paper:

@inproceedings{horych-etal-2024-magpie,
    title = "{MAGPIE}: Multi-Task Analysis of Media-Bias Generalization with Pre-Trained Identification of Expressions",
    author = "Horych, Tom{\'a}{\v{s}}  and
      Wessel, Martin Paul  and
      Wahle, Jan Philip  and
      Ruas, Terry  and
      Wa{\ss}muth, Jerome  and
      Greiner-Petter, Andr{\'e}  and
      Aizawa, Akiko  and
      Gipp, Bela  and
      Spinde, Timo",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.952",
    pages = "10903--10920",
    abstract = "Media bias detection poses a complex, multifaceted problem traditionally tackled using single-task models and small in-domain datasets, consequently lacking generalizability. To address this, we introduce MAGPIE, a large-scale multi-task pre-training approach explicitly tailored for media bias detection. To enable large-scale pre-training, we construct Large Bias Mixture (LBM), a compilation of 59 bias-related tasks. MAGPIE outperforms previous approaches in media bias detection on the Bias Annotation By Experts (BABE) dataset, with a relative improvement of 3.3{\%} F1-score. Furthermore, using a RoBERTa encoder, we show that MAGPIE needs only 15{\%} of fine-tuning steps compared to single-task approaches. We provide insight into task learning interference and show that sentiment analysis and emotion detection help learning of all other tasks, and scaling the number of tasks leads to the best results. MAGPIE confirms that MTL is a promising approach for addressing media bias detection, enhancing the accuracy and efficiency of existing models. Furthermore, LBM is the first available resource collection focused on media bias MTL.",
}