manojkumarvohra committed
Commit 0572177
Parent(s): 42c1ca7
Update README.md

README.md CHANGED
@@ -1,12 +1,11 @@
+---
+library_name: peft
+---
 # LLAMA2 7B Guanaco Pico Adapter
-
 This is an 8-bit quantized adapter over the llama2-7b-chat-hf checkpoint.
 To use the merged version of this model, refer to manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned [https://huggingface.co/manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned]
 This adapter is meant only for learning purposes and is not recommended for any business use.
 
----
-library_name: peft
----
 ## Training procedure
 
 
@@ -35,4 +34,4 @@ The following `bitsandbytes` quantization config was used during training:
 
 - PEFT 0.4.0
 
-- PEFT 0.4.0
+- PEFT 0.4.0
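For reference, below is a minimal sketch of how an adapter like this could be loaded for inference. It assumes the base model is meta-llama/Llama-2-7b-chat-hf and uses a placeholder adapter repo id, since the adapter's own repo id is not stated in this diff; the merged checkpoint named in the README can be loaded directly instead, as shown in the commented alternative.

```python
# Minimal sketch (not from the README): load the base model in 8-bit with
# bitsandbytes and attach the PEFT adapter on top of it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_ID = "meta-llama/Llama-2-7b-chat-hf"  # llama2-7b-chat-hf checkpoint named in the README
ADAPTER_ID = "<this-adapter-repo-id>"      # placeholder: the adapter repo id is not given in this diff

# 8-bit loading via bitsandbytes, matching the 8-bit quantization the README describes
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    quantization_config=quant_config,
    device_map="auto",
)

# Attach the adapter weights to the quantized base model
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

# Alternatively, load the merged checkpoint referenced in the README directly:
# model = AutoModelForCausalLM.from_pretrained(
#     "manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned",
#     quantization_config=quant_config,
#     device_map="auto",
# )

prompt = "What is a LoRA adapter?"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The exact bitsandbytes quantization config referred to in the training-procedure section lives in the unchanged part of the README and is not reproduced here; the sketch above only mirrors the 8-bit loading the description mentions.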