julian-schelb committed
Commit: b18e680 • Parent(s): 7df89e1
Update README.md

README.md
datasets:
- wikiann
---

# RoBERTa for Multilingual Named Entity Recognition

## Model description

## About RoBERTa

This model is a fine-tuned version of [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large). The original model was pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).

RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the masked language modeling (MLM) objective: taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
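
As a rough illustration of the MLM objective, the underlying pretrained encoder can fill in a masked token via the `fill-mask` pipeline (a minimal sketch against the base `xlm-roberta-large` checkpoint; the example sentence is arbitrary and not part of this card):

```python
from transformers import pipeline

# Illustrative sketch: query the pretrained XLM-RoBERTa encoder (not this fine-tuned NER model)
# with its masked-language-modeling head.
unmasker = pipeline("fill-mask", model="xlm-roberta-large")

# The model ranks candidate tokens for the <mask> position.
unmasker("The capital of France is <mask>.")
```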

This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
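
For instance, here is a minimal, hedged sketch of using the pretrained encoder as a frozen feature extractor, whose sentence embeddings could then feed an ordinary classifier (the checkpoint, sentence, and pooling choice are illustrative assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch: load the base multilingual encoder, not this fine-tuned NER model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
encoder = AutoModel.from_pretrained("xlm-roberta-large")

inputs = tokenizer("Ein kurzer Beispielsatz.", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# Mean-pool the token embeddings into a single sentence vector;
# this vector could serve as the input features of a standard classifier.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
```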

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains.

## Training data

## Usage

You can use this model with the AutoTokenizer and AutoModelForTokenClassification classes:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# ...

predicted_token_class_ids = logits.argmax(-1)

# Multiple token classes might account for the same word
predicted_tokens_classes = [model_tuned.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
```
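
For reference, a complete end-to-end sketch of the same flow looks like the following. The checkpoint id and example sentence are illustrative assumptions, not values taken from this card, so substitute the actual repository id when running it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed checkpoint id for illustration only; replace with this repository's model id.
model_name = "julian-schelb/roberta-ner-multilingual"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model_tuned = AutoModelForTokenClassification.from_pretrained(model_name)

# Arbitrary example sentence.
text = "George Washington went to Washington."

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model_tuned(**inputs).logits

predicted_token_class_ids = logits.argmax(-1)

# Multiple token classes might account for the same word
predicted_tokens_classes = [model_tuned.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
print(predicted_tokens_classes)
```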

### BibTeX entry and citation info

```bibtex
TBD
```