doberst committed
Commit 3a970eb
1 Parent(s): 73a587f

Update README.md

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -1,28 +1,28 @@
 ---
-license: other
+license: apache-2.0
+inference: false
 ---
 
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**dragon-yi-answer-tool** is a quantized version of DRAGON Yi 6B, with 4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.
-
-[**dragon-yi-6b**](https://huggingface.co/llmware/dragon-yi-6b-v0) is a fact-based question-answering model, optimized for complex business documents.
+**dragon-qwen-7b-gguf** is a quantized version of a fact-based question answering model, optimized for complex business documents, fine-tuned on top of Qwen 7B base, and then packaged with 4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.
+
 
 To pull the model via API:
 
 from huggingface_hub import snapshot_download
-snapshot_download("llmware/dragon-yi-answer-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
+snapshot_download("llmware/dragon-qwen-7b-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
 
 
 Load in your favorite GGUF inference engine, or try with llmware as follows:
 
 from llmware.models import ModelCatalog
-model = ModelCatalog().load_model("dragon-yi-answer-tool")
+model = ModelCatalog().load_model("dragon-qwen-7b-gguf")
 response = model.inference(query, add_context=text_sample)
 
-Note: please review [**config.json**](https://huggingface.co/llmware/dragon-yi-answer-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
+Note: please review [**config.json**](https://huggingface.co/llmware/dragon-qwen-7b-gguf/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
 
 
 ### Model Description
@@ -32,8 +32,8 @@ Note: please review [**config.json**](https://huggingface.co/llmware/dragon-yi-a
 - **Developed by:** llmware
 - **Model type:** GGUF
 - **Language(s) (NLP):** English
-- **License:** Yi Community License
-- **Quantized from model:** [llmware/dragon-yi](https://huggingface.co/llmware/dragon-yi-6b-v0/)
+- **License:** Apache 2.0
+- **Quantized from model:** llmware/dragon-qwen
 
 
 ## Model Card Contact
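
For reference, the two snippets in the updated card combine into a single runnable sketch. This is a minimal example under stated assumptions, not part of the card itself: it assumes `llmware` and `huggingface_hub` are installed (`pip install llmware huggingface_hub`), and the download path, `query`, and `text_sample` values below are illustrative placeholders.

```python
# Minimal end-to-end sketch combining the two snippets from the updated card.
# Assumptions (not in the card): llmware and huggingface_hub are installed;
# the path, query, and text_sample below are illustrative placeholders.
from huggingface_hub import snapshot_download
from llmware.models import ModelCatalog

# Optionally pull the GGUF artifacts to a local directory first
snapshot_download(
    "llmware/dragon-qwen-7b-gguf",
    local_dir="/path/on/your/machine/",  # placeholder path
    local_dir_use_symlinks=False,
)

# Load the model through the llmware ModelCatalog
model = ModelCatalog().load_model("dragon-qwen-7b-gguf")

# Placeholder passage and question, to show the call pattern
text_sample = (
    "The master services agreement was signed on March 15, 2023 and "
    "has an initial term of 24 months."
)
query = "What is the initial term of the agreement?"

# Fact-based question answering over the supplied context
response = model.inference(query, add_context=text_sample)
print(response)
```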