---
license: mit
datasets:
- allenai/MADLAD-400
language:
- en
- ko
- el
- ru
- bg
base_model:
- mistralai/Mistral-7B-v0.1
---
VocADT is a vocabulary adaptation method that uses adapter modules trained to learn an optimal linear combination of the existing embeddings while keeping the model's weights fixed.
VocADT offers a flexible and scalable solution without requiring external resources or language constraints.
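
Concretely, the adapter can be thought of as a learned matrix that builds each new-vocabulary embedding out of the original ones. Below is a minimal sketch of this idea (toy sizes and variable names are ours, not the released training code):

```python
import torch

# Toy sizes for illustration; the released models map Mistral's 32k vocabulary
# to a new 50k vocabulary with a much larger hidden size.
old_vocab, new_vocab, d_model = 6, 10, 4

E_old = torch.randn(old_vocab, d_model)                    # original embeddings (kept frozen)
A = torch.nn.Parameter(torch.randn(new_vocab, old_vocab))  # trainable adapter weights

# Each new embedding is a linear combination of the existing embeddings.
E_new = A @ E_old
print(E_new.shape)  # torch.Size([10, 4])

# "Merging" (see below) amounts to materializing E_new and installing it as the
# model's new input/output embedding table, with all other weights unchanged.
```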

## New Vocabulary Adapted Models
Only the input/output embeddings are replaced, while all other original weights of the base model remain fixed.
These are the merged versions: after training the adapters, we merge the original embeddings with the adapter to generate the new embeddings (a quick check of the merged checkpoints follows the table).
| Name | Adapted Model | Base Model | New Vocab Size | Focused Languages |
|---|---|---|---|---|
| VocADT-Latin | [h-j-han/Mistral-7B-VocADT-50k-Latin](https://huggingface.co/h-j-han/Mistral-7B-VocADT-50k-Latin) | [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50k | Swahili (sw), Indonesian (id), Estonian (et), Haitian Creole (ht), English (en) |
| VocADT-Mixed | [h-j-han/Mistral-7B-VocADT-50k-Mixed](https://huggingface.co/h-j-han/Mistral-7B-VocADT-50k-Mixed) | [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50k | Korean (ko), Greek (el), Russian (ru), Bulgarian (bg), English (en) |
| VocADT-Cyrillic | [h-j-han/Mistral-7B-VocADT-50k-Cyrillic](https://huggingface.co/h-j-han/Mistral-7B-VocADT-50k-Cyrillic) | [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50k | Russian (ru), Bulgarian (bg), Ukrainian (uk), Kazakh (kk), English (en) |
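
Since the adapters are already merged, the checkpoints load as ordinary `transformers` models, and the new vocabulary size is visible directly on the embedding table (a quick sanity check; the expected size is the 50k from the table above):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("h-j-han/Mistral-7B-VocADT-50k-Mixed")
# The merged embedding table should have roughly 50k rows,
# vs. 32000 for the base Mistral-7B.
print(model.get_input_embeddings().weight.shape)
```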

## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# model_name = "mistralai/Mistral-7B-v0.1"  # Base model
model_name = "h-j-han/Mistral-7B-VocADT-50k-Mixed"  # Vocabulary-adapted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "\nEnglish: Hello \nKorean: 안녕하세요 \nEnglish: Thank you\nKorean: 고맙습니다\nEnglish: "
line = "I lived in Korea for seven years"
suffix = "\nKorean:"
prompt = prefix + line + suffix

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Base model output: "한국에 7년 살"  # An incomplete Korean phrase: 10 tokens for the base model
# VocADT output: "저는 한국에 7년 동안 살았습니다."  # A complete sentence within the same 10-token budget
```
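
The difference in the outputs above comes from tokenizer efficiency: the adapted 50k vocabulary spends fewer tokens per Korean word, so more content fits in the same `max_new_tokens` budget. A quick way to compare (our snippet, not from the repo):

```python
from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
vocadt_tok = AutoTokenizer.from_pretrained("h-j-han/Mistral-7B-VocADT-50k-Mixed")

text = "저는 한국에 7년 동안 살았습니다."
# The adapted tokenizer should need noticeably fewer tokens for Korean text.
print(len(base_tok(text)["input_ids"]), len(vocadt_tok(text)["input_ids"]))
```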

## Reference
The code is available in our GitHub repo: https://github.com/h-j-han/VocADT
Please find more details in our paper:
```
@misc{han2024vocadt,
      title={Adapters for Altering LLM Vocabularies: What Languages Benefit the Most?},
      author={HyoJung Han and Akiko Eriguchi and Haoran Xu and Hieu Hoang and Marine Carpuat and Huda Khayrallah},
      year={2024},
      eprint={2410.09644},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.09644},
}
```