bprice9 committed
Commit e420aae
1 Parent(s): 3819ab8

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -24,7 +24,7 @@ The original model performance on biomedical benchmarks is 85.87%.
  - **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- - **Intended Use Cases:** Palmyra-Med-70B is intended for non-commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
+ - **Intended Use Cases:** Palmyra-Medical-70B-FP8 is intended for non-commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
  - **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
  - **License(s):** [writer-open-model-license](https://writer.com/legal/open-model-license/)
 
@@ -47,7 +47,7 @@ This model can be deployed using the [vLLM](https://docs.vllm.ai/en/latest/) lib
  from vllm import LLM, SamplingParams
  from transformers import AutoTokenizer
 
- model_id = "bprice9/Palmyra-Med-70B-FP8"
+ model_id = "bprice9/Palmyra-Medical-70B-FP8"
  number_gpus = 2
 
  sampling_params = SamplingParams(temperature=0.0, top_p=0.9, max_tokens=512, stop_token_ids=[128001, 128009])
@@ -157,7 +157,7 @@ oneshot(
  </td>
  <td style="width: 20%;"><strong>Palmyra-Med-70B (Original FP16)</strong>
  </td>
- <td style="width: 20%;"><strong>Palmyra-Med-70B-FP8 (This Model)</strong>
+ <td style="width: 20%;"><strong>Palmyra-Medical-70B-FP8 (This Model)</strong>
  </td>
  </tr>
  <tr>
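
For context, below is a minimal sketch of how the renamed checkpoint would be served with vLLM, stitched together from the lines visible in the deployment hunk above. The chat-template call, the example prompt, and passing `number_gpus` as `tensor_parallel_size` are illustrative assumptions; only the imports, `model_id`, `number_gpus`, and `sampling_params` lines come from the diff itself.

```python
# Sketch only: reconstructs the README's vLLM usage around the renamed model id.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "bprice9/Palmyra-Medical-70B-FP8"  # repository id after this commit
number_gpus = 2

# Greedy decoding with the stop token ids shown in the README snippet.
sampling_params = SamplingParams(
    temperature=0.0, top_p=0.9, max_tokens=512, stop_token_ids=[128001, 128009]
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed usage: format a single-turn chat prompt with the model's chat template.
messages = [{"role": "user", "content": "What are common causes of iron deficiency anemia?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Assumed mapping: shard the FP8 weights across the available GPUs via tensor parallelism.
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

The sampling parameters are taken verbatim from the diff; everything after them is a plausible continuation of the README's example, not a definitive excerpt from it.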