---
license: other
---

# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model.

These weights have been quantized to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Other Koala repos

These other versions are also available:
* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)

## Quantization method

This GPTQ model was quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) with the following command:
```
python3 llama.py /content/koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-13B-4bit-128g.pt
```
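
If you want to sanity-check the quantized checkpoint directly, GPTQ-for-LLaMa ships an inference script. A minimal sketch, assuming the `llama_inference.py` script and flags from that repo's README (script names vary between branches, so check your checkout):
```
# Generate a few tokens straight from the 4-bit checkpoint.
# Koala expects the "BEGINNING OF CONVERSATION:" prompt format shown below.
python3 llama_inference.py /content/koala-13B-HF \
  --wbits 4 --groupsize 128 \
  --load /content/koala-13B-4bit-128g.pt \
  --text "BEGINNING OF CONVERSATION: USER: Hello, who are you? GPT:"
```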

## How to run with text-generation-webui

The model files provided will not load as-is with [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui); they require the latest version of the GPTQ-for-LLaMa code.

Here are the commands I used to clone GPTQ-for-LLaMa, clone text-generation-webui, and link GPTQ into the UI:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
# Use an absolute path so the symlink resolves from inside repositories/
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```
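
Depending on which GPTQ-for-LLaMa branch you are using, you may also need to compile its CUDA kernel before 4-bit models will load. A sketch, assuming the `setup_cuda.py` build script from the repo's CUDA branch:
```
# Build and install the quant-cuda extension.
# Requires a CUDA toolkit compatible with your PyTorch build.
cd GPTQ-for-LLaMa
python setup_cuda.py install
cd ..
```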

Then place this model's files in `text-generation-webui/models/koala-13B-4bit-128g` and run text-generation-webui as follows:
```
cd text-generation-webui
python server.py --model koala-13B-4bit-128g --wbits 4 --groupsize 128 --model_type Llama
```
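
If you don't have the model files locally yet, one way to fetch them is to clone this repo with Git LFS straight into the models directory. A sketch, assuming this model is published as `TheBloke/koala-13B-GPTQ-4bit-128g` (substitute the actual repo ID):
```
git lfs install
# Repo ID below is an assumption - adjust to the real Hugging Face repo for this model
git clone https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g text-generation-webui/models/koala-13B-4bit-128g
```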

## Coming soon

Tomorrow I will upload a `safetensors` file as well.
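
If you would rather not wait, you can convert the `.pt` checkpoint yourself. A minimal sketch, assuming the file holds a plain tensor state dict (which is what GPTQ-for-LLaMa saves via `torch.save`):
```
python3 - <<'EOF'
# Convert the GPTQ checkpoint from torch.save format to safetensors.
import torch
from safetensors.torch import save_file

sd = torch.load("koala-13B-4bit-128g.pt", map_location="cpu")
save_file(sd, "koala-13B-4bit-128g.safetensors")
EOF
```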

## How to merge Koala delta weights

The Koala delta weights were originally merged using the following commands, producing [koala-13B-HF](https://huggingface.co/TheBloke/koala-13B-HF):
```
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/TheBloke/llama-13b

mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2

cd ../EasyLM

# 1. Convert the PyTorch Llama weights to EasyLM's streaming checkpoint format
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_torch_to_easylm \
  --checkpoint_dir=/content/llama-13b \
  --output_file=/content/llama-13b-LM \
  --streaming=True

# 2. Apply the Koala diff to the base checkpoint to recover the full weights
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.scripts.diff_checkpoint --recover_diff=True \
  --load_base_checkpoint='params::/content/llama-13b-LM' \
  --load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \
  --output_file=/content/koala_13b.diff.weights \
  --streaming=True

# 3. Convert the recovered EasyLM checkpoint back to Hugging Face format
PYTHONPATH="${PWD}:$PYTHONPATH" python \
  -m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \
  --output_dir=/content/koala-13B-HF \
  --load_checkpoint='params::/content/koala_13b.diff.weights' \
  --tokenizer_path=/content/llama-13b/tokenizer.model
```
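
As a quick sanity check that the merged checkpoint is a valid Hugging Face model, you can load its config and tokenizer. A minimal sketch using the `transformers` auto classes (loading the full 13B weights needs roughly 26 GB of RAM, so this only touches the metadata):
```
python3 - <<'EOF'
# Verify the merged repo loads; expect model_type "llama", 40 layers, vocab 32000.
from transformers import AutoConfig, AutoTokenizer

cfg = AutoConfig.from_pretrained("/content/koala-13B-HF")
tok = AutoTokenizer.from_pretrained("/content/koala-13B-HF")
print(cfg.model_type, cfg.num_hidden_layers, tok.vocab_size)
EOF
```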

Check out the following links to learn more about the Berkeley Koala model:
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)
* [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)

## License

The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.