---
license: mit
language:
- en
library_name: adapter-transformers
---
# alpaca_orca_open_llama: An OpenLLaMA-3B model trained on a custom Alpaca dataset using Orca Research Paper approaches


# Dataset

We train the OpenLLaMA-3B model on a custom explain-tuned Alpaca dataset (~52K samples) created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate this custom Alpaca dataset, in contrast to the vanilla instruction-tuning approach used by the original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).

This helps the student model, [alpaca_orca_open_llama_3b](psmathur/alpaca_orca_open_llama_3b), learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).
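
As a rough illustration of what such an explain-tuned record looks like (this is not the actual data-generation code, and the `response` shown here is hypothetical, not drawn from the dataset), one of the 15 Orca system instructions is paired with an Alpaca instruction and the teacher's step-by-step answer:

```python
# Hypothetical sketch of a single explain-tuned record; in practice the
# response comes from the teacher model (gpt-3.5-turbo-0301).
record = {
    "system": (
        "You are an AI assistant. User will you give you a task. "
        "Your goal is to complete the task as faithfully as you can. "
        "While performing the task think step-by-step and justify your steps."
    ),
    "instruction": "Use the given data to calculate the median.",
    "input": "[7, 3, 8, 2, 10]",
    # The teacher explains its reasoning instead of only giving the answer.
    "response": "First sort the numbers: [2, 3, 7, 8, 10]. "
                "The middle value of the five numbers is 7, so the median is 7.",
}
```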

Please note how the **System** prompt is added before each *instruction* in the example usage below.

# Training

The training configurations are provided in the table below.

Training ran on 4x A600 (50G) GPUs and took around 20 hours, at a cost of $66, using [Lambda Labs](https://lambdalabs.com).

We used DeepSpeed with ZeRO-3 for parallel multi-GPU training; a sketch of a matching configuration follows the table below.

|Parameter|Value|
|:-------------:|:-------------:|
|*batch size*|16|
|*train_micro_batch_size_per_gpu*|2|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Epochs*|3|
|*Max length*|1024|
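
As a minimal sketch (assumed for illustration; the exact configuration file used for this run is not published), a DeepSpeed ZeRO-3 setup consistent with the table above could look like the following. Note that 2 (micro batch per GPU) × 2 (gradient accumulation) × 4 GPUs gives the effective batch size of 16 listed above.

```python
# Assumed illustration only, not the released training config.
import deepspeed  # pip install deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 2,   # per-GPU micro batch (see table)
    "gradient_accumulation_steps": 2,      # see table
    "zero_optimization": {"stage": 3},     # ZeRO-3 partitions params, grads, and optimizer states
    "fp16": {"enabled": True},             # assumption: mixed-precision training
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}

# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model,                         # the OpenLLaMA-3B model being fine-tuned
#     model_parameters=model.parameters(),
#     config=ds_config,
# )
```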



# Example Usage

The example below shows how to use alpaca_orca_open_llama_3b:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# model path on the Hugging Face Hub (currently only the 3B model is available)
model_path = 'psmathur/alpaca_orca_open_llama_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)
# check more details here https://github.com/openlm-research/open_llama
tokenizer.bos_token_id, tokenizer.eos_token_id = 1,2

# same prompt as provided by Orca Research Paper
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'Use the given data to calculate the median.'
input = '[7, 3, 8, 2, 10]'

prompt_input = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
#prompt_no_input = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Response:\n"

tokens = tokenizer.encode(prompt_input)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
length = tokens.shape[1]  # prompt length in tokens, used to strip the prompt from the output

instance = {'input_ids': tokens,'top_k': 50, 'top_p': 1.0, 'generate_len': 1024}
# instance = {'input_ids': tokens,'top_k': 50, 'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024}

with torch.no_grad():
    rest = model.generate(
            input_ids=tokens, 
            max_length=length+instance['generate_len'], 
            use_cache=True, 
            do_sample=True, 
            top_p=instance['top_p'], 
            top_k=instance['top_k'],
            # temperature=instance['temperature']
        )
        
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
print(f'[!] Response: {string}')

```

Next Goals:
1) Try more data, e.g. Dolly V2, WizardLM, and others (we are open to suggestions).
2) Try bigger OpenLLaMA models (7B and 13B).
3) Try better GPUs for training; we couldn't get 8x A100 (40GB), which seem to be in hot demand right now.
4) Provide more options for a text-generation UI (maybe https://github.com/oobabooga/text-generation-webui).
5) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here).