Quantization made by Richard Erkhov.
Leia-Swallow-13b - GGUF
- Model creator: https://huggingface.co/leia-llm/
- Original model: https://huggingface.co/leia-llm/Leia-Swallow-13b/
Name | Quant method | Size |
---|---|---|
Leia-Swallow-13b.Q2_K.gguf | Q2_K | 4.58GB |
Leia-Swallow-13b.IQ3_XS.gguf | IQ3_XS | 5.06GB |
Leia-Swallow-13b.IQ3_S.gguf | IQ3_S | 5.34GB |
Leia-Swallow-13b.Q3_K_S.gguf | Q3_K_S | 5.34GB |
Leia-Swallow-13b.IQ3_M.gguf | IQ3_M | 5.64GB |
Leia-Swallow-13b.Q3_K.gguf | Q3_K | 5.97GB |
Leia-Swallow-13b.Q3_K_M.gguf | Q3_K_M | 5.97GB |
Leia-Swallow-13b.Q3_K_L.gguf | Q3_K_L | 6.52GB |
Leia-Swallow-13b.IQ4_XS.gguf | IQ4_XS | 6.61GB |
Leia-Swallow-13b.Q4_0.gguf | Q4_0 | 6.93GB |
Leia-Swallow-13b.IQ4_NL.gguf | IQ4_NL | 6.98GB |
Leia-Swallow-13b.Q4_K_S.gguf | Q4_K_S | 6.99GB |
Leia-Swallow-13b.Q4_K.gguf | Q4_K | 7.4GB |
Leia-Swallow-13b.Q4_K_M.gguf | Q4_K_M | 7.4GB |
Leia-Swallow-13b.Q4_1.gguf | Q4_1 | 7.69GB |
Leia-Swallow-13b.Q5_0.gguf | Q5_0 | 8.44GB |
Leia-Swallow-13b.Q5_K_S.gguf | Q5_K_S | 8.44GB |
Leia-Swallow-13b.Q5_K.gguf | Q5_K | 8.68GB |
Leia-Swallow-13b.Q5_K_M.gguf | Q5_K_M | 8.68GB |
Leia-Swallow-13b.Q5_1.gguf | Q5_1 | 9.19GB |
Leia-Swallow-13b.Q6_K.gguf | Q6_K | 10.03GB |
Leia-Swallow-13b.Q8_0.gguf | Q8_0 | 12.99GB |
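Any llama.cpp-compatible runtime can load the GGUF files listed above. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the repository id and chosen filename are assumptions based on this page and should be adjusted to the quant you actually want.

```python
# Minimal sketch: download one quant from this page and run it locally.
# The repo_id below is an assumption; swap in the actual repository id and
# pick a filename from the table above (Q4_K_M shown here, ~7.4 GB).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/leia-llm_-_Leia-Swallow-13b-gguf",  # assumed repo id
    filename="Leia-Swallow-13b.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("日本で一番高い山は何ですか?", max_tokens=64)
print(out["choices"][0]["text"])
```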
Original model description:
license: apache-2.0
language:
- ja
Leia-Swallow-13B
LEIA is a training technique for autoregressive LLMs that effectively improves their performance in languages other than English by enhancing cross-lingual knowledge transfer from English to a target language. This model is constructed by applying LEIA to Swallow, a Japanese-English bilingual LLM based on LLaMA 2. The model achieves enhanced performance on four out of six Japanese question answering benchmarks and equivalent performance on the remaining two, as reported below.
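As a rough illustration of what entity-based data augmentation means here (a toy sketch inferred from the paper title below, not the authors' exact procedure, markers, or entity dictionary), target-language training text can be augmented by inserting the English name of each entity mention next to it, so the model can align knowledge across languages:

```python
# Toy illustration of entity-based data augmentation (hypothetical markers and
# entity dictionary; not the exact LEIA recipe). Each Japanese entity mention
# is followed by its English name wrapped in marker tokens.
ja_to_en = {"富士山": "Mount Fuji", "東京": "Tokyo"}  # hypothetical entity dictionary

def augment(text: str, entity_map: dict[str, str]) -> str:
    """Append the English entity name, wrapped in markers, after each mention."""
    for ja_name, en_name in entity_map.items():
        text = text.replace(ja_name, f"{ja_name}<en>{en_name}</en>")
    return text

print(augment("富士山は東京から見えることがある。", ja_to_en))
# -> 富士山<en>Mount Fuji</en>は東京<en>Tokyo</en>から見えることがある。
```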
Please refer to our paper or blog post (in Japanese) for further technical details.
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation (arxiv.org)
- LEIA: 言語間転移学習でLLMを賢くする新しい方法 ("LEIA: A New Method for Making LLMs Smarter via Cross-Lingual Transfer Learning") (zenn.dev)
Empirical Results
The model is evaluated on the following six question answering benchmarks:
- X-CODAH
- X-CSQA
- JCommonsenseQA
- NIILC
- JEMHopQA
- JAQKET v2
Model | X-CODAH | X-CSQA | JCommonsenseQA | NIILC | JEMHopQA | JAQKET v2 |
---|---|---|---|---|---|---|
Swallow | 43.3 | 41.8 | 89.3 | 64.1 | 50.6 | 88.9 |
LEIA | 44.0 | 41.9 | 89.3 | 65.8 | 50.6 | 89.6 |
For further details of these experiments, please refer to our paper.
Contributors
- Ikuya Yamada (Studio Ousia, RIKEN)
- Ryokan Ri (LY Corporation, SB Intuitions)