Update README.md
@@ -9,7 +9,7 @@ language:
 base_model: beomi/Yi-Ko-6B
 ---
 
-# Yi-Ko-6B-Instruct-v1.
+# Yi-Ko-6B-Instruct-v1.1
 
 ## Model Details
 
@@ -20,18 +20,7 @@ base_model: beomi/Yi-Ko-6B
 1. [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
 2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
 3. [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA)
-4. AIHub data
-
-## Benchmark Results
-
-### AI-Harness Evaluation
-https://github.com/Beomi/ko-lm-evaluation-harness
-
-| Model | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
-| --- | --- | --- | --- | --- | --- | --- |
-| | *Zero-shot* ||||||
-| Yi-Ko-6B-Instruct-v1.1 | | | | | | |
-| Yi-Ko-6B | 0.7070 | 0.7696 | 0.5009 | 0.4044 | 0.3828 | 0.5145 |
+4. AIHub data (utilized)
 
 ## Instruction Format
 ```python
@@ -47,9 +36,9 @@ https://github.com/Beomi/ko-lm-evaluation-harness
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.
+tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.1")
 model = AutoModelForCausalLM.from_pretrained(
-    "wkshin89/Yi-Ko-6B-Instruct-v1.
+    "wkshin89/Yi-Ko-6B-Instruct-v1.1",
     device_map="auto",
     torch_dtype=torch.bfloat16,
 )