---
language: de
license: apache-2.0
datasets:
- uonlp/CulturaX
---

# mistral7b-de-tokenizer-swap-pure-bf16-v2-anneal-ablation

Mistral-7B-v0.1 adapted to German as part of our study on efficient language adaptation: "Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough".

Code: https://github.com/konstantinjdobler/tight-budget-llm-adaptation

Paper: https://openreview.net/forum?id=VYfJaHeVod

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2-anneal-ablation")
model = AutoModelForCausalLM.from_pretrained("konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2-anneal-ablation")

# Use model and tokenizer as usual
```

## Details

The model is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and was adapted to German. The original tokenizer was replaced by a language-specific German tokenizer with a vocabulary of 32768 tokens, and the new embeddings were initialized with [FOCUS](https://github.com/konstantinjdobler/focus). The model was then trained on 8 billion German tokens from [uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX) in pure bfloat16 precision (no mixed precision). However, for the final annealing phase of the learning rate schedule, the model was trained with bfloat16 mixed precision instead (the annealing ablation referenced in the model name). More details and hyperparameters can be found [in the paper](https://openreview.net/forum?id=VYfJaHeVod). A minimal sketch for loading and running the model in bfloat16 is shown at the end of this card.

## Disclaimer

The web-scale dataset used for pretraining and tokenizer training ([uonlp/CulturaX](https://huggingface.co/uonlp/CulturaX)) might contain personal and sensitive information. This needs to be assessed carefully before any real-world deployment of the models.

## Citation

Please cite as follows:

```bibtex
@inproceedings{dobler2024language,
    title={Language Adaptation on a Tight Academic Compute Budget: Tokenizer Swapping Works and Pure bfloat16 Is Enough},
    author={Konstantin Dobler and Gerard de Melo},
    booktitle={2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)},
    year={2024},
    url={https://openreview.net/forum?id=VYfJaHeVod}
}
```

## Acknowledgements

The project on which this model is based was funded by the Federal Ministry of Education and Research under the funding code "KI-Servicezentrum Berlin-Brandenburg" 01IS22092. Responsibility for the content of this publication remains with the author.
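Since the adaptation was done in pure bfloat16, the model can also be loaded in bfloat16 for inference. The following is a minimal sketch assuming the standard `transformers` generation API; the German prompt and generation settings are only illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "konstantindobler/mistral7b-de-tokenizer-swap-pure-bf16-v2-anneal-ablation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the weights in bfloat16 to match the precision used during adaptation.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative German prompt; greedy decoding is an arbitrary choice.
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```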