update README.md
README.md CHANGED

@@ -21,6 +21,10 @@ The quantization was carried out in a custom branch of [autoawq](https://github.
 
 It worked using [vllm](https://github.com/vllm-project/vllm). It may not work with other frameworks as they have not been tested.
 
+Below is the README.md for the original model.
+
+---
+
 # KULLM3
 Introducing KULLM3, a model with advanced instruction-following and fluent chat abilities.
 It has shown remarkable performance in instruction-following, speficially by closely following gpt-3.5-turbo.
@@ -95,7 +99,7 @@ _ = model.generate(inputs, streamer=streamer, max_new_tokens=1024)
 
 ### Results
 
-<img src="kullm3_instruction_evaluation.png" width=100%>
+<img src="https://huggingface.co/nlpai-lab/KULLM3/resolve/main/kullm3_instruction_evaluation.png" width=100%>
 
 
 ## Citation