I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m gpt4-alpaca-lora-30B.GGML.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```

Change `-t 18` to the number of physical CPU cores you have.

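`-t` should match your physical core count, not the hyperthread count. One way to count physical cores on Linux, a sketch assuming `lscpu` from util-linux is available:

```
# Count unique (core, socket) pairs, i.e. physical cores, ignoring hyperthreads
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l
```

Pass the number it prints as the `-t` value.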
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

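For example, the command above becomes the following in chat mode (same model file and sampling flags assumed):

```
# -i keeps the session interactive; -ins applies the instruction prompt format
./main -t 18 -m gpt4-alpaca-lora-30B.GGML.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```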
## How to run in `text-generation-webui`
Put the desired .bin file in a model directory with `ggml` (case sensitive) in its name.

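A minimal sketch of that setup, assuming oobabooga's default `models/` layout; the directory name is illustrative, it only needs to contain `ggml`:

```
# The directory name must contain "ggml" (case sensitive) for webui to load it via llama.cpp
mkdir -p text-generation-webui/models/gpt4-alpaca-lora-30B-ggml
cp gpt4-alpaca-lora-30B.GGML.q4_2.bin text-generation-webui/models/gpt4-alpaca-lora-30B-ggml/
```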
Further instructions are here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

# Original GPT4 Alpaca Lora model card

This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system.