davzoku committed
Commit d1cc2b9
1 Parent(s): 6864fa8

Update README.md

Files changed (1): README.md (+83 -1)
README.md:

---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
library_name: peft
tags:
- llama-2
---

# CRIA v1.3

💡 [Article](https://walterteng.com/cria) |
💻 [Github](https://github.com/davzoku/cria) |
📔 Colab [1](https://colab.research.google.com/drive/1rYTs3qWJerrYwihf1j0f00cnzzcpAfYe), [2](https://colab.research.google.com/drive/1Wjs2I1VHjs6zT_GE42iEXsLtYh6VqiJU)

## What is CRIA?

> krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco.

<p align="center">
<img src="https://raw.githubusercontent.com/davzoku/cria/main/assets/icon-512x512.png" width="300" height="300" alt="Cria Logo"> <br>
<i>or what ChatGPT suggests, <b>"Crafting a Rapid prototype of an Intelligent llm App using open source resources"</b>.</i>
</p>

The initial objective of the CRIA project is to develop a comprehensive end-to-end chatbot system, starting from the instruction-tuning of a large language model and extending to its deployment on the web using frameworks such as Next.js.
Specifically, we have fine-tuned the `llama-2-7b-chat-hf` model with QLoRA (4-bit precision) using the [mlabonne/CodeLlama-2-20k](https://huggingface.co/datasets/mlabonne/CodeLlama-2-20k) dataset. This fine-tuned model serves as the backbone for the [CRIA chat](https://chat.walterteng.com) platform.

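The full training notebooks are linked in the Colab badges above. As a rough illustration only, a QLoRA run of this kind with `peft` and `trl`'s `SFTTrainer` could look like the sketch below; the hyperparameters, the `NousResearch/Llama-2-7b-chat-hf` base checkpoint, and the assumption that the dataset exposes a single `text` column are illustrative, not the exact values used for CRIA v1.3.

```python
# Minimal QLoRA fine-tuning sketch (illustrative only; hyperparameters and the
# base checkpoint are assumptions, not the values used for CRIA v1.3).
# pip install transformers peft trl datasets bitsandbytes accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"  # any Llama-2-7b-chat checkpoint
dataset = load_dataset("mlabonne/CodeLlama-2-20k", split="train")  # assumed "text" column

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit so the LoRA adapters are trained QLoRA-style
# (the actual run used the nf4 config listed under Training below).
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
trainer.model.save_pretrained("cria-llama2-7b-peft")  # adapter weights only
```
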
## 📦 Model Release

CRIA v1.3 comes with several variants; a sketch of loading the PEFT adapter is shown after the list.

- [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3): Merged Model
- [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): Quantized Merged Model
- [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter

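To use the PEFT adapter directly, you can attach it to a Llama-2-7b-chat base checkpoint with `peft`. A minimal sketch (the `NousResearch/Llama-2-7b-chat-hf` repo name is an assumption; any equivalent base model works):

```python
# Minimal sketch: load a base chat model and attach the CRIA PEFT adapter.
# pip install transformers peft accelerate
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "NousResearch/Llama-2-7b-chat-hf"  # assumption: any Llama-2-7b-chat checkpoint
adapter = "davzoku/cria-llama2-7b-v1.3_peft"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)

# Optionally fold the adapter into the base weights, giving a standalone model
# equivalent to the merged release above.
model = model.merge_and_unload()
```
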
## 🔧 Training

The model was trained in a Google Colab notebook with a T4 GPU and high RAM.

### Training procedure

The following `bitsandbytes` quantization config was used during training (see the sketch after the list):

- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

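Expressed in code, these fields map onto `transformers.BitsAndBytesConfig` roughly as follows. Only the fields listed above are shown; `load_in_4bit=True` is an assumption implied by the 4-bit QLoRA setup.

```python
# Sketch of the 4-bit quantization config above; load_in_4bit=True is assumed.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # assumption: 4-bit loading for QLoRA
    bnb_4bit_quant_type="nf4",             # from the list above
    bnb_4bit_use_double_quant=False,       # from the list above
    bnb_4bit_compute_dtype=torch.float16,  # from the list above
)
# Pass it via AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```
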
### Framework versions

- PEFT 0.4.0

## 💻 Usage

```python
# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "davzoku/cria-llama2-7b-v1.3"
prompt = "What is a cria?"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## References

We'd like to thank:

- [mlabonne](https://huggingface.co/mlabonne) for his article and resources on the implementation of instruction tuning.
- [TheBloke](https://huggingface.co/TheBloke) for his script for LLM quantization.