Text Generation
Transformers
PyTorch
English
llama
Eval Results
text-generation-inference
Inference Endpoints
Pankaj Mathur committed on
Commit 7511d16
1 Parent(s): b02da34

Update README.md

Files changed (1)
  1. README.md +17 -18
README.md CHANGED
@@ -5,16 +5,16 @@ language:
 library_name: adapter-transformers
 ---
 # Wizardlm Alpaca Dolly Orca Open_LLaMa_13b
- An Open_LLaMA-13B model trained on custom explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying [Orca Research Paper](https://arxiv.org/abs/2306.02707) dataset construction approaches.


 # Dataset

- We trained [OpenLLaMa-3B model](https://github.com/openlm-research/open_llama) on custom explain tuned [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html) (~52K) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707).

- We leverage all of the 15 system instructions provided in [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate custom Alpaca dataset, in contrast to vanilla instruction tuning approaches used by original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).

- This helps student model aka [alpaca_orca_open_llama_3b](psmathur/alpaca_orca_open_llama_3b) to learn ***thought*** process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).

 Please see the example usage below showing how the **System** prompt is added before each *instruction*.

@@ -22,7 +22,7 @@ Please see below example usage how the **System** prompt is added before each *i

 The training configurations are provided in the table below.

- The training takes on 4x A600(50G) GPUs and lasts for around 20 Hours for cost of $66 using [Lambda Labs](https://lambdalabs.com)

 We used DeepSpeed with ZeRO-3 for parallel GPU training, writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).

@@ -32,7 +32,7 @@ Here are some of params used during training:
 |:-------------:|:-------------:|
 |*batch_size*|16|
 |*train_micro_batch_size_per_gpu*|2|
- |*gradient_accumulation_steps*|2|
 |*Learning rate*|2e-5|
 |*Max length*|1024|
 |*Epochs*|3|
@@ -41,14 +41,14 @@ Here are some of params used during training:

 # Example Usage

- Below shows an example on how to use [alpaca_orca_open_llama_3b](psmathur/alpaca_orca_open_llama_3b)

 ```python
 import torch
 from transformers import LlamaForCausalLM, LlamaTokenizer

- # change model_path between 3b,7b or 13b
- model_path = 'psmathur/alpaca_orca_open_llama_3b'
 tokenizer = LlamaTokenizer.from_pretrained(model_path)
 model = LlamaForCausalLM.from_pretrained(
     model_path, torch_dtype=torch.float16, device_map='auto',
@@ -94,24 +94,23 @@ generate_text(system, instruction, input)
 **P.S. I am #opentowork and #collaboration; if you can help, please reach out to me at [email protected]**

 Next Goals:
- 1) Try more data, Dolly V2, WizardLM, & Others (we are open for suggestions)
- 2) Try bigger OpenLLaMA models 7B and 13B
- 3) Try better GPU for training, couldn't get 8xA100 (40GB), I guess they are in hot demand now.
- 4) Provide more options for Text generation UI. (may be https://github.com/oobabooga/text-generation-webui)
- 6) Provide 4bit GGML/GPTQ quantized model (may be [TheBloke](https://huggingface.co/TheBloke) can help here)


 Reference:
- If you found [alpaca_orca_open_llama_3b](psmathur/alpaca_orca_open_llama_3b) useful in your research or applications, please kindly cite using the following BibTeX:

 ```
- @misc{alpaca_orca_open_llama_3b,
   author = {Pankaj Mathur},
- title = {alpaca_orca_open_llama_3b: A custom explain tuned Alpaca Model Based On OpenLLaMA},
   year = {2023},
   publisher = {GitHub, HuggingFace},
   journal = {GitHub repository, HuggingFace repository},
- howpublished = {\url{https://github.com/pankajarm/alpaca_orca_open_llama_3b}, \url{https://https://huggingface.co/psmathur/alpaca_orca_open_llama_3b}},
 }
 ```
 
 library_name: adapter-transformers
 ---
 # Wizardlm Alpaca Dolly Orca Open_LLaMa_13b
+ An OpenLLaMA-13B model trained on custom explain-tuned datasets, created using the instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches of the [Orca Research Paper](https://arxiv.org/abs/2306.02707).


 # Dataset

+ We trained the [OpenLLaMA-13B model](https://github.com/openlm-research/open_llama) on custom explain-tuned datasets built from [WizardLM (~70K)](https://github.com/nlpxucan/WizardLM), [Alpaca (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 (~15K)](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

+ We leverage all 15 system instructions provided in the [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate these custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.

+ This helps the student model, [wizardlm_alpaca_dolly_orca_open_llama_13b](https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b), learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).
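As a rough illustration of how such an explain-tuned example can be put together, the sketch below prepends one system message to a plain (instruction, input) record. The system messages and the prompt layout shown here are placeholder assumptions for illustration only, not the exact ones used to build this model's training data.

```python
# Illustrative sketch only: the system messages and prompt layout below are
# assumptions, not the exact ones used to build this model's training data.
ORCA_STYLE_SYSTEM_MESSAGES = [
    "You are an AI assistant. Provide a detailed answer so the user does not need to search elsewhere to understand it.",
    "You are an AI assistant that helps people find information. Think step by step and justify your answer.",
    # ... the card uses all 15 system instructions from the Orca paper.
]

def build_explain_tuned_prompt(system: str, instruction: str, input_text: str = "") -> str:
    """Prepend a System message to a vanilla (instruction, input) record."""
    prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    return prompt + "### Response:\n"

# Example: wrap a single Alpaca/WizardLM/Dolly-style record.
record = {"instruction": "Explain photosynthesis to a 10 year old.", "input": ""}
print(build_explain_tuned_prompt(ORCA_STYLE_SYSTEM_MESSAGES[0], record["instruction"], record["input"]))
```

The teacher model's detailed answer then fills the response slot, and that ***thought*** process is what the student model learns to imitate.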

 Please see the example usage below showing how the **System** prompt is added before each *instruction*.


 The training configurations are provided in the table below.

+ Training ran on 8x A100 (80G) GPUs for around 15 hours, at a cost of $180 using [Lambda Labs](https://lambdalabs.com).

 We used DeepSpeed with ZeRO-3 for parallel GPU training, writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).

 |:-------------:|:-------------:|
 |*batch_size*|16|
 |*train_micro_batch_size_per_gpu*|2|
+ |*gradient_accumulation_steps*|1|
 |*Learning rate*|2e-5|
 |*Max length*|1024|
 |*Epochs*|3|
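For anyone trying to reproduce a similar setup, here is a minimal sketch of a DeepSpeed ZeRO-3 configuration consistent with the parameters above; it is an assumption-based example, not the exact config used for this training run.

```python
# Hypothetical DeepSpeed ZeRO-3 config mirroring the parameters listed above;
# a sketch, not the exact configuration used to train this model.
ds_config = {
    "train_batch_size": 16,               # batch_size
    "train_micro_batch_size_per_gpu": 2,  # per-GPU micro batch
    "gradient_accumulation_steps": 1,     # 2 micro-batch x 1 step x 8 GPUs = 16
    "bf16": {"enabled": True},            # assumed mixed precision on A100s
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5},           # Learning rate from the table
    },
}
# A dict like this is typically passed via deepspeed.initialize(model=model, config=ds_config, ...)
```

The numbers line up with the table: 2 (micro batch per GPU) x 1 (gradient accumulation step) x 8 GPUs = 16, the global batch size.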
 

 # Example Usage

+ Below is an example of how to use this model:

 ```python
 import torch
 from transformers import LlamaForCausalLM, LlamaTokenizer

+ # Hugging Face model_path
+ model_path = 'psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b'
 tokenizer = LlamaTokenizer.from_pretrained(model_path)
 model = LlamaForCausalLM.from_pretrained(
     model_path, torch_dtype=torch.float16, device_map='auto',
 
 **P.S. I am #opentowork and #collaboration; if you can help, please reach out to me at [email protected]**

 Next Goals:
+ 1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
+ 2) Try smaller OpenLLaMA models, 7B and 3B
+ 3) Provide more options for the text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
+ 4) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)


 Reference:
+ If you found wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research or applications, please cite it using the following BibTeX:

 ```
+ @misc{wizardlm_alpaca_dolly_orca_open_llama_13b,
   author = {Pankaj Mathur},
+   title = {wizardlm_alpaca_dolly_orca_open_llama_13b: An explain tuned OpenLLaMA-13b model on custom wizardlm, alpaca, & dolly datasets},
   year = {2023},
   publisher = {GitHub, HuggingFace},
   journal = {GitHub repository, HuggingFace repository},
+   howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_13b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b}},
 }
 ```