Devarui379 committed
Commit: ed302c8
1 Parent(s): b860692
Update README.md
README.md CHANGED
@@ -4,11 +4,11 @@ library_name: transformers
 tags:
 - unsloth
 - llama-cpp
--
+- text-generation-inference
 ---
 
 # Devarui379/Reflection-Llama-3.1-8B-Q8_0-GGUF
-This model was converted to GGUF format
+This model was converted to GGUF format
 Refer to the [original model card](https://huggingface.co/terrycraddock/Reflection-Llama-3.1-8B) for more details on the model.
 
 ## Use with llama.cpp
@@ -49,4 +49,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Devarui379/Reflection-Llama-3.1-8B-Q8_0-GGUF --hf-file reflection-llama-3.1-8b-q8_0.gguf -c 2048
-```
+```
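
For reference, the `./llama-server` invocation in the hunk above starts llama.cpp's built-in HTTP server, and `-c 2048` sets the context window to 2048 tokens. The server exposes an OpenAI-compatible API on 127.0.0.1:8080 by default. A minimal sketch of querying it once it is running; the prompt text is an illustration, and the host/port assume you did not override them with `--host`/`--port`:

```
# Query the running llama-server via its OpenAI-compatible chat endpoint.
# Assumes the default bind address and port (127.0.0.1:8080).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'
```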