---
license: mit
tags:
- generated_from_trainer
- language-identification
- openvino
datasets:
- fleurs
metrics:
- accuracy
pipeline_tag: text-classification
base_model: facebook/xlm-v-base
model-index:
- name: xlm-v-base-language-id
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: fleurs
type: fleurs
config: all
split: validation
args: all
metrics:
- type: accuracy
value: 0.9930337861372344
name: Accuracy
---
# xlm-v-base-language-id
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the [google/fleurs](https://huggingface.co/datasets/google/fleurs) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0241
- Accuracy: 0.9930
# Usage
The simplest way to use the model is with a text classification pipeline:
```python
from transformers import pipeline
model_id = "juliensimon/xlm-v-base-language-id"
p = pipeline("text-classification", model=model_id)
p("Hello world")
# [{'label': 'English', 'score': 0.9802148342132568}]
```
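If you need more than the single best guess, the `top_k` argument of the text-classification pipeline (available in recent `transformers` releases) returns several candidate languages:

```python
from transformers import pipeline

model_id = "juliensimon/xlm-v-base-language-id"
p = pipeline("text-classification", model=model_id)

# Return the three highest-scoring languages instead of only the best one.
p("Bonjour tout le monde", top_k=3)
# A list of {'label': ..., 'score': ...} dicts, sorted by descending score.
```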
The model is also compatible with [Optimum Intel](https://github.com/huggingface/optimum-intel).
For example, you can optimize it with Intel OpenVINO and typically get a 2x inference speedup (or more) on Intel hardware.
```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
model_id = "juliensimon/xlm-v-base-language-id"
ov_model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
p = pipeline("text-classification", model=ov_model, tokenizer=tokenizer)
p("Hello world")
# [{'label': 'English', 'score': 0.9802149534225464}]
```
An OpenVINO version of the model is available in the repository.
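If you prefer to run the conversion yourself, here is a minimal sketch (assuming a recent Optimum Intel release; older releases use `from_transformers=True` instead of `export=True`):

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "juliensimon/xlm-v-base-language-id"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
ov_model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the converted model and tokenizer for later reuse.
ov_model.save_pretrained("./xlm-v-base-language-id-ov")
tokenizer.save_pretrained("./xlm-v-base-language-id-ov")
```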
## Intended uses & limitations
The model can accurately detect 102 languages. You can find the list on the [dataset](https://huggingface.co/datasets/google/fleurs) page.
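You can also read the label list straight from the model configuration, where it is stored in the standard `id2label` mapping:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("juliensimon/xlm-v-base-language-id")
print(len(config.id2label))                  # 102 languages
print(sorted(config.id2label.values())[:5])  # first few labels, alphabetically
```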
## Training and evaluation data
The model has been trained and evaluated on the complete google/fleurs training and validation sets.
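For reference, here is a sketch of how the data can be inspected. FLEURS is an audio dataset, so text classification relies on its transcriptions; streaming avoids downloading the full audio corpus:

```python
from datasets import load_dataset

# Stream the validation split of the "all" configuration (102 languages).
fleurs = load_dataset("google/fleurs", "all", split="validation", streaming=True)

sample = next(iter(fleurs))
print(sample["transcription"], sample["lang_id"])  # text and language label index
```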
## Training procedure
The training script is included in the repository. The model has been trained on a p3dn.24xlarge instance on AWS (8 NVIDIA V100 GPUs).
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
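A rough mapping of these values onto `transformers.TrainingArguments` is shown below. This is a sketch only: the training script in the repository is authoritative, and how the batch size splits across the 8 GPUs depends on the launch mode. The Adam betas and epsilon match the `TrainingArguments` defaults, so they are omitted.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-v-base-language-id",
    learning_rate=3e-5,
    per_device_train_batch_size=128,  # reported batch size; effective total is 512
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,  # native AMP mixed-precision training
)
```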
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6368 | 1.0 | 531 | 0.4593 | 0.9689 |
| 0.059 | 2.0 | 1062 | 0.0412 | 0.9899 |
| 0.0311 | 3.0 | 1593 | 0.0275 | 0.9918 |
| 0.0255 | 4.0 | 2124 | 0.0243 | 0.9928 |
| 0.017 | 5.0 | 2655 | 0.0241 | 0.9930 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2