dhmeltzer committed
Commit
4a0222c
1 Parent(s): 4648fc0

Upload model

Files changed (2)
  1. README.md +14 -60
  2. adapter_model.safetensors +3 -0
README.md CHANGED
@@ -1,67 +1,21 @@
  ---
- base_model: meta-llama/Llama-2-13b-hf
- tags:
- - generated_from_trainer
- model-index:
- - name: Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora
-
- This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.3173
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
  ## Training procedure

- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 128
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 1
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.246 | 0.1 | 82 | 1.3429 |
- | 1.7867 | 0.2 | 164 | 1.3370 |
- | 1.2111 | 0.3 | 246 | 1.3305 |
- | 1.419 | 0.4 | 328 | 1.3258 |
- | 1.8005 | 0.51 | 410 | 1.3248 |
- | 1.1999 | 0.61 | 492 | 1.3216 |
- | 1.4048 | 0.71 | 574 | 1.3197 |
- | 1.5675 | 0.81 | 656 | 1.3193 |
- | 1.2459 | 0.91 | 738 | 1.3173 |
-
  ### Framework versions

- - Transformers 4.34.0.dev0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.5
- - Tokenizers 0.13.3
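The hyperparameter bullets in the removed card translate fairly directly into `transformers.TrainingArguments`. The sketch below is an illustration only, not the script that produced this run (which is not part of the commit): the values come from the list above, while `output_dir` is a placeholder. The stated total_train_batch_size of 128 is consistent with 16 per-device samples × 8 accumulation steps on a single device.

```python
# Hypothetical reconstruction of the removed hyperparameter list as
# TrainingArguments; the actual training script is not in this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,  # 16 * 8 = 128 effective batch on one device
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    adam_beta1=0.9,                 # Adam settings exactly as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```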
 
  ---
+ library_name: peft
  ---
  ## Training procedure

+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: True
+ - bnb_4bit_compute_dtype: bfloat16
  ### Framework versions

+
+ - PEFT 0.6.0.dev0
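The quantization list added to the card describes a standard 4-bit NF4 QLoRA setup. A minimal sketch of the same settings as a `transformers.BitsAndBytesConfig`, assuming the `meta-llama/Llama-2-13b-hf` base model named in the old card; `device_map="auto"` is an assumption, not something the card states:

```python
# Sketch of the bitsandbytes config listed above; each field mirrors a
# bullet in the card ("quant_method: bitsandbytes" is implied by using
# BitsAndBytesConfig itself).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",       # NormalFloat4 weights
    bnb_4bit_use_double_quant=True,  # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,          # int8 outlier threshold (unused in 4-bit mode)
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",  # assumed placement, not stated in the card
)
```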
 
 
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1075b7dbf3adcd5a2a4c94639749fb3a3fe3f860ffb19b9c3e2e7f5c4b3932b
+ size 1001465824
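The added `adapter_model.safetensors` is stored as a Git LFS pointer, so the commit records only the hash and size (about 1.0 GB of adapter weights); the tensors themselves live in LFS storage. A hedged sketch of applying the uploaded adapter on top of the quantized base model from the previous sketch; the repo id below is inferred from the committer and model name and should be verified before use:

```python
# Load the LoRA adapter on top of the 4-bit base model. The repo id is
# an inference from this commit's metadata, not confirmed by the card.
from peft import PeftModel

model = PeftModel.from_pretrained(
    base_model,  # quantized Llama-2-13b from the previous sketch
    "dhmeltzer/Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora",  # assumed repo id
)
model.eval()
```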