erfanzar committed on
Commit
19b65d2
•
1 Parent(s): a891423

Update README.md

Files changed (1): README.md +6 -4
README.md CHANGED
@@ -20,13 +20,15 @@ pipeline_tag: text-generation
 
 this model uses Task classification and the conversation is between USER and Answer or AI
 
-# This model is a finetuned version of Kolla with LGeM data With Respect to them and changes some data and optimizers
+# NOTE ⚠️
 
-# Model includes pre-trained Weights so its GNU v3.0 licensed as same as Original LLaMA Model
+
+This model is a finetuned version of Kolla with LGeM data With Respect to them and changes some data and optimizers
+The model includes pre-trained Weights so it is GNU v3.0 licensed as same as Original Llama Model
 
 # Using Model in Huggingface Transformers
 
-## EG
+## Examples 🚀
 
 ```text
 CONVERSATION: USER: how can I start to work out more \n
@@ -115,7 +117,7 @@ if __name__ == "__main__":
 
 ### LGeM 🚀
 
-- what is LGeM, LGeM is a CausalLM Model that is trained on self instruct data (Alpaca data) and for initilization of the first train of main model (weight are available) I used pre weights from Alpaca LoRA (open source)
+- what is LGeM, LGeM is a CausalLM Model that is trained on self instruct data (Alpaca data) and for initialization of the first train of the main model (weights are available) I used pre weights from Alpaca LoRA (open source)
 
 - it's Decoder Only
 - built-in Pytorch
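
The `CONVERSATION: USER: … \n` line in the README's example block shows the prompt template this commit documents. A minimal sketch of assembling that prompt in Python — the helper name is hypothetical, and treating the trailing `\n` as a literal two-character marker (rather than a newline) is an assumption based on how it is written in the example text:

```python
def build_prompt(user_message: str) -> str:
    """Format a user message in the CONVERSATION/USER template from the README.

    The template, including the trailing literal "\\n" marker, is an
    assumption based on the example shown in the diff above.
    """
    return f"CONVERSATION: USER: {user_message} \\n"


if __name__ == "__main__":
    print(build_prompt("how can I start to work out more"))
```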