maximuspowers committed d03fa54 (parent: 9b244bb)
Update README.md

README.md CHANGED
This model is a multimodal classifier that combines text and image inputs to detect potential bias in content. It uses a BERT-based text encoder and a ResNet-34 image encoder, which are fused for classification purposes. A contrastive learning approach was used during training, leveraging CLIP embeddings as guidance to align the text and image representations.
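The fusion-plus-contrastive setup described above can be sketched in PyTorch. This is a minimal illustration, not the released model: the stand-in linear projections below take the place of the real BERT pooled output (768-d) and ResNet-34 features (512-d) so the sketch runs without pretrained weights, and the InfoNCE-style loss is one common way to implement CLIP-guided text/image alignment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBiasClassifier(nn.Module):
    """Hypothetical sketch of a text+image bias classifier.

    text_proj stands in for a BERT text encoder head (768-d input);
    image_proj stands in for ResNet-34 features (512-d input).
    """

    def __init__(self, text_dim=768, image_dim=512, fused_dim=256, num_classes=2):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim * 2, num_classes)

    def forward(self, text_feats, image_feats):
        # Project each modality into a shared space, then fuse by concatenation.
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.image_proj(image_feats), dim=-1)
        fused = torch.cat([t, v], dim=-1)
        return self.classifier(fused), t, v

def contrastive_alignment_loss(t, v, temperature=0.07):
    # InfoNCE-style symmetric loss: matched text/image pairs along the
    # diagonal are pulled together, analogous to CLIP-guided alignment.
    logits = t @ v.T / temperature
    targets = torch.arange(t.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

model = FusionBiasClassifier()
text_feats = torch.randn(4, 768)   # e.g. BERT pooled outputs for a batch of 4
image_feats = torch.randn(4, 512)  # e.g. ResNet-34 features for the same batch
logits, t, v = model(text_feats, image_feats)
loss = contrastive_alignment_loss(t, v)
```

In practice the classification loss on `logits` would be combined with the alignment loss during training; the weighting between the two is a design choice not specified here.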
## Resources:
- This model is based on [FND-CLIP](https://arxiv.org/pdf/2205.14304), proposed by Zhou et al. (2022).
- It was trained on the [News Media Bias Plus dataset](https://huggingface.co/datasets/vector-institute/newsmediabias-plus); see the official [dataset docs](https://vectorinstitute.github.io/Newsmediabias-plus/).
- [Model training ipynb](https://github.com/VectorInstitute/news-media-bias-plus/blob/main/benchmarking/multi-modal-classifiers/baselines-and-notebooks/training-notebooks/slm/fnd-clip-bias-training.ipynb)
## Model Details