sbmaruf committed
Commit d036995
Parent: 2911198

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -26,7 +26,7 @@ The model is trained on around ~11B tokens (64 size batch, 512 tokens, 350k step
  >>> model = FlaxT5ForConditionalGeneration.from_pretrained("flax-community/bengali-t5-base", config=config)
  ```
 
- The model is trained on a `de-noising` objective, following the script [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py). Currently, this model does not have any generation capability. If you want this model to have generation capability, fine-tune it on the `prefix-LM` objective described in the [paper](https://arxiv.org/abs/1910.10683).
+ The model is trained on a `de-noising` objective, following the scripts [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run_t5_mlm_flax.py) and [here](https://huggingface.co/flax-community/bengali-t5-base/blob/main/run.sh). Currently, this model does not have any generation capability. If you want this model to have generation capability, fine-tune it on the `prefix-LM` objective described in the [paper](https://arxiv.org/abs/1910.10683).
  Please note that we have not fine-tuned the model on any downstream task. If you fine-tune it on a downstream task, please let us know about it. Shoot us an email (sbmaruf at gmail dot com).
 
  ## Proposal
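
Note on the changed line: the `de-noising` objective referenced in the diff is T5's span-corruption pre-training, where spans of the input are replaced by sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...) and the decoder reconstructs the dropped text. The sketch below is illustrative and not part of the commit: the Bengali example sentence and masked span are made up, and the explicit `config` and tokenizer loading is an assumption, since the diff excerpt references `config` without showing how it was built.

```python
>>> # Minimal sketch (assumed, not from the commit): load the checkpoint and
>>> # build one span-corruption ("de-noising") training pair by hand.
>>> from transformers import T5Config, T5Tokenizer, FlaxT5ForConditionalGeneration
>>> config = T5Config.from_pretrained("flax-community/bengali-t5-base")
>>> tokenizer = T5Tokenizer.from_pretrained("flax-community/bengali-t5-base")
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("flax-community/bengali-t5-base", config=config)
>>> # The sentinel <extra_id_0> marks the corrupted span in the encoder input;
>>> # the target repeats each sentinel followed by the text it replaced.
>>> inputs = tokenizer("বাংলা আমার <extra_id_0>", return_tensors="np")
>>> labels = tokenizer("<extra_id_0> মাতৃভাষা <extra_id_1>", return_tensors="np").input_ids
>>> # Teacher-forced forward pass (real training shifts the labels right to
>>> # build the decoder inputs; omitted here for brevity).
>>> logits = model(**inputs, decoder_input_ids=labels).logits
```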