Tags: Translation · Transformers · PyTorch · nllb-moe · feature-extraction

Add CO2 emissions to model card

#10 opened by m-ric (HF staff)
Files changed (1):
  1. README.md +6 -1
README.md CHANGED
@@ -209,6 +209,11 @@ metrics:
   - spbleu
   - chrf++
 inference: false
+
+co2_eq_emissions:
+  emissions: 104_310_000
+  source: "No Language Left Behind: Scaling Human-Centered Machine Translation"
+  hardware_used: "NVIDIA A100"
 ---
 
 # NLLB-MoE
@@ -216,7 +221,7 @@ inference: false
 This is the model card of NLLB-MoE variant.
 
 - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
-- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022
+- Paper or other resource for more information: [NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation](https://huggingface.co/papers/2207.04672)
 - License: CC-BY-NC
 - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
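The diff above adds a `co2_eq_emissions` block to the README's YAML front matter. As a minimal sketch of what that block encodes, the fields can be pulled back out with only the Python standard library — the `parse_co2_block` helper below is hypothetical, written just for illustration; in practice the Hub's own tooling (e.g. `huggingface_hub.ModelCard`) parses model-card metadata:

```python
# Minimal sketch (hypothetical helper): extract the co2_eq_emissions
# fields added by this PR from a README's YAML front matter, using only
# the standard library. Real tooling would use a full YAML parser.

FRONT_MATTER = """\
---
metrics:
- spbleu
- chrf++
inference: false

co2_eq_emissions:
  emissions: 104_310_000
  source: "No Language Left Behind: Scaling Human-Centered Machine Translation"
  hardware_used: "NVIDIA A100"
---
"""

def parse_co2_block(text: str) -> dict:
    """Return the co2_eq_emissions mapping from the front matter.

    YAML 1.1 allows underscores as digit separators, so 104_310_000
    denotes 104,310,000 -- about 104.3 tonnes if the value is in grams
    of CO2-eq, the unit the Hub metadata convention uses.
    """
    fields = {}
    in_block = False
    for line in text.splitlines():
        if line.startswith("co2_eq_emissions:"):
            in_block = True
            continue
        if in_block:
            if not line.startswith("  "):  # block ends when indentation drops
                break
            key, _, value = line.strip().partition(": ")
            value = value.strip('"')
            if value.replace("_", "").isdigit():
                value = int(value.replace("_", ""))  # YAML-style 104_310_000
            fields[key] = value
    return fields

co2 = parse_co2_block(FRONT_MATTER)
print(co2["emissions"])       # 104310000
print(co2["hardware_used"])   # NVIDIA A100
```

Note that `emissions` is kept as a plain integer rather than a quoted string, so metadata consumers can aggregate or compare it numerically across model cards.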