---
license: apache-2.0
datasets:
- liswei/Taiwan-Text-Excellence-2B
- liswei/PromptPair-TW
- yentinglin/TaiwanChat
base_model: apple/OpenELM
language:
- zh
---

<center>
    <img src="https://huggingface.co/liswei/Taiwan-ELM/resolve/main/Taiwan%20ELM%20Logo.jpeg" alt="Efficient LLM for Taiwan">
</center>

> Efficient LLM for Taiwan with open weights/datasets/checkpoints and affordable sizes (270M/1.1B)

# Taiwan ELM

Taiwan ELM is a family of Efficient LLMs for Taiwan based on [apple/OpenELM](https://huggingface.co/apple/OpenELM).
The project aims to provide an efficient model for researchers without access to large-scale computing resources.

The model is trained using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) on 2B Traditional Chinese tokens and 500K instruction samples.
We will extend training to larger datasets and additional base models if there is sufficient demand.

## What is being released?

We release both pre-trained **base models and instruction-tuned variants** with 270M and 1.1B parameters.
Along with the models, the **datasets used to train them** are also released.

In an effort to improve transparency, training **checkpoints (including RNG/optimizer state) and logs** are also released on the model pages.

List of released models:
* [Taiwan-ELM-270M](https://huggingface.co/liswei/Taiwan-ELM-270M)
* [Taiwan-ELM-1_1B](https://huggingface.co/liswei/Taiwan-ELM-1_1B)
* [Taiwan-ELM-270M-Instruct](https://huggingface.co/liswei/Taiwan-ELM-270M-Instruct)
* [Taiwan-ELM-1_1B-Instruct](https://huggingface.co/liswei/Taiwan-ELM-1_1B-Instruct)

List of released datasets:
* [liswei/Taiwan-Text-Excellence-2B](https://huggingface.co/datasets/liswei/Taiwan-Text-Excellence-2B)
* [liswei/PromptPair-TW](https://huggingface.co/datasets/liswei/PromptPair-TW)
* [liswei/wikinews-zhtw-dedup](https://huggingface.co/datasets/liswei/wikinews-zhtw-dedup)
* [liswei/wikipedia-zhtw-dedup](https://huggingface.co/datasets/liswei/wikipedia-zhtw-dedup)
* [liswei/coct-en-zhtw-dedup](https://huggingface.co/datasets/liswei/coct-en-zhtw-dedup)

Some datasets that were not used to train Taiwan ELM are also released:
* [liswei/common-crawl-zhtw](https://huggingface.co/datasets/liswei/common-crawl-zhtw)
* [liswei/c4-zhtw](https://huggingface.co/datasets/liswei/c4-zhtw)
* [liswei/rm-static-zhTW](https://huggingface.co/datasets/liswei/rm-static-zhTW)

## Usage Examples

For instruction-tuned models, we adapt the [LLaMA2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) template:
```jinja2
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
```
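For example, a single-turn prompt can be assembled by filling in the two placeholders. The helper below is a minimal sketch; the system prompt is only an illustrative placeholder, and the leading `<s>` (BOS) is left for the tokenizer to add:
```python
# Minimal sketch of filling in the LLaMA2-style template above.
# The system prompt is an illustrative placeholder, not an official default.
def build_prompt(user_message: str, system_prompt: str) -> str:
    # The leading <s> (BOS) token is added by the tokenizer, so it is omitted here.
    return (
        "[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_prompt("請用繁體中文介紹台灣。", "你是一個樂於助人的助理。")
```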

The model can be loaded via `AutoModelForCausalLM` or `text-generation-inference` with `trust_remote_code=True`:
```python
from transformers import AutoModelForCausalLM

taiwan_elm_270m = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M", trust_remote_code=True)
```
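
A minimal end-to-end sketch is shown below. It assumes the tokenizer is bundled with the model repository; the generation settings are illustrative, not official recommendations:
```python
# Minimal usage sketch: load an instruct model, format a prompt with the
# LLaMA2-style template shown earlier, and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liswei/Taiwan-ELM-270M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Prompt following the template above; the BOS token is added by the tokenizer.
prompt = "[INST] <<SYS>>\n你是一個樂於助人的助理。\n<</SYS>>\n\n請用繁體中文介紹台灣。 [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```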

Additional generation methods and speculative generation are also supported; please refer to [OpenELM#usage](https://huggingface.co/apple/OpenELM#usage) for details.
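
As one illustration, speculative decoding can be approximated with `transformers` assisted generation, e.g. using the 270M model as a draft model for the 1.1B model. This pairing and the settings below are assumptions for demonstration, not an official recommendation:
```python
# Sketch of assisted (speculative) generation: the small model drafts tokens
# that the larger target model verifies. Model pairing is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("liswei/Taiwan-ELM-1_1B-Instruct", trust_remote_code=True)
target = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-1_1B-Instruct", trust_remote_code=True)
draft = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M-Instruct", trust_remote_code=True)

inputs = tokenizer("[INST] 台灣最高的山是哪一座？ [/INST]", return_tensors="pt")
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```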