Commit 65f0d36 by shreyasmeher (parent: ddfcbd9): Update README.md

README.md (CHANGED):
---
title: ConfliBERT
emoji: 🏛️
colorFrom: red
colorTo: indigo
sdk: streamlit
pinned: true
---
## Model Name
ConfliBERT

## Developers
Yibo Hu, MohammadSaleh Hosseini, Erick Skorupa Parolin, Javier Osorio, Latifur Khan, Patrick Brandt, Vito D’Orazio

## Released
2022, at the NAACL 2022 conference

## Repository
[GitHub Repository](https://github.com/snowood1/ConfliBERT)

## Paper
[ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence](https://aclanthology.org/2022.naacl-main.395/)
## Model Description
ConfliBERT is a transformer model pretrained on a large corpus of texts related to political conflict and violence. It is based on the BERT architecture and is specialized for analyzing texts in this domain, using masked language modeling (MLM) and next sentence prediction (NSP) as its main pretraining objectives. It is designed to improve performance on tasks such as sentiment analysis, event extraction, and entity recognition for texts dealing with political subjects.
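As a toy illustration of the MLM objective (a sketch only, not ConfliBERT's actual pretraining code; the 15% masking rate is the standard BERT default and an assumption here):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking sketch: hide a random subset of tokens and keep
    the originals as prediction targets; the model learns to recover the
    hidden tokens from their context."""
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            targets.append(tok)    # loss is computed at masked positions
        else:
            masked.append(tok)
            targets.append(None)   # unmasked positions contribute no loss
    return masked, targets

sentence = "rebel forces attacked the village at dawn".split()
masked, targets = mask_tokens(sentence, mask_prob=0.3)
```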
## Model Variations
- **ConfliBERT-scr-uncased**: Pretrained from scratch with a custom uncased vocabulary.
- **ConfliBERT-scr-cased**: Pretrained from scratch with a custom cased vocabulary.
- **ConfliBERT-cont-uncased**: Continual pretraining from BERT's original uncased vocabulary.
- **ConfliBERT-cont-cased**: Continual pretraining from BERT's original cased vocabulary.
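The four variants can be selected programmatically. A minimal sketch, assuming the Hub IDs follow the pattern of `snowood1/ConfliBERT-scr-uncased` (the only ID stated in this README; the other three IDs are assumptions):

```python
# Variant name -> assumed Hugging Face Hub ID; only scr-uncased appears
# in this README, the rest follow the same naming pattern.
CONFLIBERT_VARIANTS = {
    "scr-uncased": "snowood1/ConfliBERT-scr-uncased",
    "scr-cased": "snowood1/ConfliBERT-scr-cased",
    "cont-uncased": "snowood1/ConfliBERT-cont-uncased",
    "cont-cased": "snowood1/ConfliBERT-cont-cased",
}

def pick_variant(cased: bool, from_scratch: bool) -> str:
    """Choose a checkpoint: cased vocabularies preserve capitalization
    (often useful for named entities); 'scr' models use a custom domain
    vocabulary, 'cont' models reuse BERT's original vocabulary."""
    key = f"{'scr' if from_scratch else 'cont'}-{'cased' if cased else 'uncased'}"
    return CONFLIBERT_VARIANTS[key]
```

For example, `pick_variant(cased=True, from_scratch=False)` selects the continually pretrained cased variant.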
## Intended Uses & Limitations
ConfliBERT is intended for use in tasks related to its training domain (political conflict and violence). It can be used for masked language modeling or next sentence prediction and is particularly useful when fine-tuned on downstream tasks such as classification or information extraction in political contexts.
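In practice you would load a ConfliBERT checkpoint with `AutoModelForSequenceClassification.from_pretrained(...)`; as an offline sketch of the fine-tuning interface, a tiny randomly initialized BERT classifier stands in for the real checkpoint (the config sizes and labels below are illustrative assumptions):

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Offline sketch: a tiny random BERT classifier standing in for a real
# ConfliBERT checkpoint, which you would instead load with
# AutoModelForSequenceClassification.from_pretrained(
#     "snowood1/ConfliBERT-scr-uncased", num_labels=2)
config = BertConfig(vocab_size=128, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64, num_labels=2)
model = BertForSequenceClassification(config)

# A toy batch: token ids and binary labels (1 = conflict event, illustrative)
input_ids = torch.randint(0, 128, (2, 12))
labels = torch.tensor([1, 0])

outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss      # cross-entropy to minimize during fine-tuning
logits = outputs.logits  # shape (batch, num_labels)
```

Passing `labels` makes the model return the classification loss directly, which is what a training loop backpropagates.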
## How to Use
ConfliBERT can be loaded and used directly with pipelines for masked language modeling or integrated into custom applications for more specific tasks:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("snowood1/ConfliBERT-scr-uncased", use_auth_token=True)
model = AutoModelForMaskedLM.from_pretrained("snowood1/ConfliBERT-scr-uncased", use_auth_token=True)

# Example of usage: predict the masked token
text = "The government of [MASK] was overthrown in a coup."
input_ids = tokenizer.encode(text, return_tensors='pt')
outputs = model(input_ids)

# Decode the top prediction at the [MASK] position
mask_index = (input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```
|