# bengali-t5-large

**bengali-t5-large** is a T5 model pre-trained from scratch on the Bengali portion of the mC4 dataset (the corpus used to pre-train mT5), using the `T5-large` configuration. It was built during the [Flax/JAX Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) with TPU usage sponsored by Google.

The model was trained on roughly 11B tokens (batch size 64, sequence length 512, 350k steps; 64 × 512 × 350,000 ≈ 11.5B tokens).

## Proposal

- [Project Proposal](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-bengali/7121)

## Participants

- [Ibraheem Muhammad Moosa](https://huggingface.co/ibraheemmoosa)
- [Tasnim Mohiuddin](https://huggingface.co/tasnim)
- [M Saiful Bari](https://huggingface.co/sbmaruf)

## Useful links

- [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6)
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md)
- [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
- [Model Repository](https://huggingface.co/flax-community/bengali-t5-large)
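
## Using the model

A minimal loading sketch with 🤗 Transformers is shown below. The repo id `flax-community/bengali-t5-large` is an assumption based on the model name; adjust it if the checkpoint lives elsewhere. Since the checkpoint was trained with Flax/JAX, the Flax model class is used here.

```python
# Minimal sketch: load the tokenizer and the Flax T5 checkpoint.
# Assumes the checkpoint is published at "flax-community/bengali-t5-large"
# on the Hugging Face Hub (repo id is an assumption, not confirmed above).
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

repo_id = "flax-community/bengali-t5-large"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = FlaxT5ForConditionalGeneration.from_pretrained(repo_id)

# The model only went through masked-language-modelling pre-training
# (see the example scripts linked above), so it should be fine-tuned on
# a downstream task before serious use; this call just exercises the API.
inputs = tokenizer("বাংলা ভাষা", return_tensors="np")  # any Bengali text
outputs = model.generate(inputs["input_ids"], max_length=32)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```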