Update README.md (#2)
by MaziyarPanahi - opened

README.md CHANGED
@@ -35,6 +35,11 @@ zephyr-7b-beta-Mistral-7B-Instruct-v0.2 is a merge of the following models:
 * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
 * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
 
+## Repositories available
+
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.2-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-Mistral-7B-Instruct-v0.2-GGUF)
+
 ## 🧩 Configuration
 
 ```yaml
@@ -56,7 +61,6 @@ parameters:
 dtype: bfloat16
 ```
 
-
 ## 💻 Usage
 
 
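The body of the restored "## 💻 Usage" section is not shown in this diff. As a minimal illustration of how one might prompt this merge, here is a sketch of the `[INST]` chat format used by the Mistral-7B-Instruct-v0.2 parent; the helper name `format_prompt` is ours, and it is an assumption that the merged model keeps this template rather than Zephyr's `<|user|>`-style one:

```python
def format_prompt(messages):
    """Render alternating user/assistant turns in the Mistral [INST] format.

    `messages` is a list of {"role": ..., "content": ...} dicts ending
    with a user turn. NOTE: whether the merge follows this template is
    an assumption; check the tokenizer's chat template before relying on it.
    """
    out = "<s>"  # BOS token opens the conversation
    for msg in messages:
        if msg["role"] == "user":
            out += f"[INST] {msg['content']} [/INST]"
        else:
            # assistant turns are closed with an end-of-sequence token
            out += f" {msg['content']}</s>"
    return out

prompt = format_prompt([{"role": "user", "content": "What is a model merge?"}])
# -> "<s>[INST] What is a model merge? [/INST]"
```

In practice, `tokenizer.apply_chat_template` from `transformers` renders whatever template the repository actually ships, which is the safer choice over hand-building strings.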