Update README.md
README.md CHANGED
@@ -41,7 +41,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b)
 
-## Prompt template:
+## Prompt template:
 
 ```
 ### System:
@@ -50,7 +50,7 @@ You are an AI assistant that follows instruction extremely well. Help as much as
 ### User:
 prompt
 
-### Response
+### Response:
 ```
 or
 ```
@@ -60,10 +60,10 @@ You are an AI assistant that follows instruction extremely well. Help as much as
 ### User:
 prompt
 
-### Input
+### Input:
 input
 
-### Response
+### Response:
 ```
 
 <!-- compatibility_ggml start -->
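The template fixed above (every `###` header now ends with a colon) can be assembled programmatically when prompting the model. A minimal sketch, assuming a hypothetical helper name `build_prompt` — the header strings themselves come from the README diff:

```python
def build_prompt(system, user, input_text=None):
    """Assemble an orca_mini-style prompt from the README's template.

    Sections are separated by blank lines; the optional ### Input:
    section is emitted only when input_text is provided, matching the
    two template variants shown in the README.
    """
    parts = [f"### System:\n{system}\n"]
    parts.append(f"### User:\n{user}\n")
    if input_text is not None:
        parts.append(f"### Input:\n{input_text}\n")
    # Trailing "### Response:" header cues the model to generate its answer.
    parts.append("### Response:\n")
    return "\n".join(parts)
```

For example, `build_prompt("You are an AI assistant that follows instruction extremely well. Help as much as you can.", "prompt")` yields the first template variant; passing a third argument yields the variant with the `### Input:` section.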