martinaianaro99 committed on
Commit
3125bc0
1 Parent(s): ff338e8

Model save

README.md CHANGED
@@ -1,57 +1,57 @@
  ---
  base_model: llava-hf/llava-1.5-7b-hf
- library_name: peft
- license: llama2
  tags:
  - trl
  - sft
- - generated_from_trainer
- model-index:
- - name: llava-1.5-7b
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # llava-1.5-7b
-
- This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1.4e-05
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 1
- - mixed_precision_training: Native AMP

- ### Training results

- ### Framework versions

- - PEFT 0.13.2
- - Transformers 4.46.1
- - Pytorch 2.5.0+cu121
- - Datasets 3.0.2
- - Tokenizers 0.20.1
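
For reference, the hyperparameters removed above map almost one-to-one onto `transformers.TrainingArguments`. A minimal sketch, where `output_dir` is hypothetical and `fp16=True` is an assumption standing in for "Native AMP" (the card records neither):

```python
from transformers import TrainingArguments

# Sketch of the removed training configuration. output_dir is hypothetical,
# and fp16=True is an assumption for "Native AMP" (bf16 is equally plausible).
# betas=(0.9, 0.999) and eps=1e-08 are the adamw_torch defaults.
training_args = TrainingArguments(
    output_dir="llava-1.5-7b",       # hypothetical
    learning_rate=1.4e-05,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```
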
  ---
  base_model: llava-hf/llava-1.5-7b-hf
+ library_name: transformers
+ model_name: llava-1.5-7b
  tags:
+ - generated_from_trainer
  - trl
  - sft
+ licence: license
  ---

+ # Model Card for llava-1.5-7b

+ This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="martinaianaro99/llava-1.5-7b", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```

+ ## Training procedure

+ This model was trained with SFT.

+ ### Framework versions

+ - TRL: 0.12.1
+ - Transformers: 4.46.2
+ - Pytorch: 2.5.1+cu121
+ - Datasets: 3.1.0
+ - Tokenizers: 0.20.3
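
The new card says only that the model was trained with SFT using TRL. A minimal skeleton of that loop under the versions listed above could look as follows; the model and dataset are placeholders, since the card records neither, and fine-tuning the actual llava-hf/llava-1.5-7b-hf checkpoint would additionally need image-text data and a multimodal data collator, as in TRL's vision-SFT examples:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the card does not record the training data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",                      # placeholder text-only model
    args=SFTConfig(output_dir="llava-1.5-7b-sft"),  # hypothetical output directory
    train_dataset=dataset,
)
trainer.train()
```
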
+ ## Citations

+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
runs/Nov18_11-32-23_c183c767e4c7/events.out.tfevents.1731929569.c183c767e4c7.1539.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1c07bfaead9a6606e28c1f7c61d6be28c91e8b546351243edc920cdf0b472d40
- size 15468
+ oid sha256:bc98175fda3cf699ca6cc415480f44f2d7e81cbc9ff6af0324eac16bfc50dad9
+ size 15822
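
The remaining change is the Git LFS pointer for the TensorBoard event file: a pointer stores only the object's sha256 and byte size, both of which can be recomputed from the checked-out file. A quick sanity check, assuming the file has been fetched locally with `git lfs pull`:

```python
import hashlib
from pathlib import Path

# Recompute the LFS pointer fields (oid and size) for the updated event file.
path = Path("runs/Nov18_11-32-23_c183c767e4c7/events.out.tfevents.1731929569.c183c767e4c7.1539.0")
data = path.read_bytes()
print("oid sha256:" + hashlib.sha256(data).hexdigest())  # should print bc98175f...
print("size", len(data))                                 # should print 15822
```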