---
license: other
---

# Model Card for dragon-yi-answer-tool

**dragon-yi-answer-tool** is a quantized version of DRAGON Yi 6B, with Q4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.

[**dragon-yi-6b**](https://huggingface.co/llmware/dragon-yi-6b-v0) is a fact-based question-answering model, optimized for complex business documents.

To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/dragon-yi-answer-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
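If you plan to point a different GGUF runtime at the download, you will need the path of the quantized file itself. The exact `.gguf` filename is not spelled out in this card, so the short sketch below (which assumes the same `local_dir` as above) simply lists the download folder to find it.

```python
import os

# the repository is expected to contain a quantized .gguf file; the exact
# filename is not stated in this card, so list the download folder to find it
local_dir = "/path/on/your/machine/"
gguf_files = [f for f in os.listdir(local_dir) if f.endswith(".gguf")]
print(gguf_files)
```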
Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# "query" is your question string and "text_sample" is the source passage
# that the model should answer from
model = ModelCatalog().load_model("dragon-yi-answer-tool")
response = model.inference(query, add_context=text_sample)
```
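As one example of the "favorite GGUF inference engine" route, a minimal llama-cpp-python sketch is shown below. The `.gguf` filename and the `<human>`/`<bot>` prompt wrapper are illustrative assumptions, not details confirmed by this card - check the repository file listing and config.json before relying on them.

```python
from llama_cpp import Llama

# placeholder passage and question - substitute your own document text
text_sample = ("The services agreement has a term of 24 months and may be "
               "terminated with 30 days written notice.")
query = "What is the notice period for termination?"

# the .gguf filename below is an assumption - use the file actually present
# in the downloaded folder
llm = Llama(model_path="/path/on/your/machine/dragon-yi-answer-tool.gguf",
            n_ctx=2048)

# the <human>/<bot> wrapper is an assumption - confirm the prompt wrapping
# details in config.json before relying on it
prompt = "<human>: " + text_sample + "\n" + query + "\n<bot>:"
output = llm(prompt, max_tokens=200, stop=["<human>:"])
print(output["choices"][0]["text"])
```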
Note: please review [**config.json**](https://huggingface.co/llmware/dragon-yi-answer-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.
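Once the repository has been pulled locally, the same config.json can also be inspected programmatically. In the sketch below, the `prompt_wrapper` key is only an assumption about the file's schema; printing the full key list shows which fields it actually exposes.

```python
import json

# config.json is included in the snapshot downloaded above
with open("/path/on/your/machine/config.json", "r") as f:
    cfg = json.load(f)

# "prompt_wrapper" is an assumed key name - print all keys to see the schema
print(cfg.get("prompt_wrapper"))
print(list(cfg.keys()))
```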
### Model Description

- **Developed by:** llmware
- **Model type:** GGUF
- **Language(s) (NLP):** English
- **License:** Yi Community License
- **Quantized from model:** [llmware/dragon-yi-6b-v0](https://huggingface.co/llmware/dragon-yi-6b-v0/)


## Model Card Contact

Darren Oberst & llmware team