---
license: mit
language:
  - en
library_name: adapter-transformers
---

# alpaca_orca_open_llama: An OpenLLaMA-3B model trained on a custom Alpaca dataset using Orca Research Paper approaches

## Dataset

We trained the OpenLLaMA-3B model on a custom explain-tuned Alpaca dataset (~52K samples) created using approaches from the Orca Research Paper.

We leveraged all 15 of the system instructions provided in the Orca Research Paper to generate this custom Alpaca dataset, in contrast to the vanilla instruction-tuning approach used in the original Alpaca research paper.

This helps the student model, alpaca_orca_open_llama_3b, learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).

Please note how the system prompt is prepended to each instruction in the example usage below.
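To make the contrast concrete, here is a minimal sketch of how an explain-tuned sample is assembled compared to a plain instruction-only prompt (illustrative only; the actual dataset-generation code is not published here, and the variable names are hypothetical):

```python
# Illustrative sketch only: prepending one of the 15 Orca system instructions
# to an Alpaca-style instruction/input pair (same template as in Example Usage below).
system = ("You are an AI assistant. User will you give you a task. Your goal is to "
          "complete the task as faithfully as you can. While performing the task "
          "think step-by-step and justify your steps.")
instruction = "Use the given data to calculate the median."
input_text = "[7, 3, 8, 2, 10]"

# Explain-tuned prompt: the system instruction asks the teacher model
# (gpt-3.5-turbo-0301) for step-by-step reasoning, which the student learns to imitate.
explain_tuned_prompt = (
    f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
    f"### Input:\n{input_text}\n\n### Response:\n"
)

# Vanilla instruction-tuning prompt (no system instruction), for comparison.
vanilla_prompt = f"### User:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
```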

## Training

The training configurations are provided in the table below.

Training ran on 4x A600 (50G) GPUs and took around 20 hours, at a cost of about $66 using Lambda Labs.

We used DeepSpeed with ZeRO-3 for multi-GPU parallel training.

| Parameter | Value |
|---|---|
| batch_size | 16 |
| train_micro_batch_size_per_gpu | 2 |
| gradient_accumulation_steps | 2 |
| Learning rate | 2e-5 |
| Epochs | 3 |
| Max length | 1024 |
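
For reference, a minimal DeepSpeed ZeRO-3 configuration consistent with the table above might look like the following (a sketch only; the exact config used for training is not published here, and the fp16 and optimizer settings are assumptions):

```python
# Sketch of a DeepSpeed ZeRO-3 config consistent with the table above.
# fp16 and the optimizer block are assumptions, not confirmed training settings.
ds_config = {
    "train_batch_size": 16,                   # 2 micro-batch x 2 grad-accum x 4 GPUs
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 2,
    "zero_optimization": {"stage": 3},        # ZeRO-3: shard optimizer states, gradients, and parameters
    "fp16": {"enabled": True},                # assumption: mixed-precision training
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5},               # learning rate from the table above
    },
}
```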

## Example Usage

Below is an example of how to use alpaca_orca_open_llama_3b.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# change model_path between 3b, 7b or 13b
model_path = 'psmathur/alpaca_orca_open_llama_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)
# check more details here https://github.com/openlm-research/open_llama
tokenizer.bos_token_id, tokenizer.eos_token_id = 1, 2

# same system prompt as provided by the Orca Research Paper
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'Use the given data to calculate the median.'
input = '[7, 3, 8, 2, 10]'

prompt_input = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
# prompt_no_input = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

tokens = tokenizer.encode(prompt_input)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
length = tokens.shape[1]  # prompt length, used to strip the prompt from the output

instance = {'input_ids': tokens, 'top_k': 50, 'top_p': 1.0, 'generate_len': 1024}
# instance = {'input_ids': tokens, 'top_k': 50, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024}

with torch.no_grad():
    rest = model.generate(
        input_ids=tokens,
        max_length=length + instance['generate_len'],
        use_cache=True,
        do_sample=True,
        top_p=instance['top_p'],
        top_k=instance['top_k'],
        # temperature=instance['temperature'],
    )

output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
print(f'[!] Response: {string}')
```
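
If your instruction has no separate input, the commented-out `prompt_no_input` template can be used in place of `prompt_input`. Similarly, uncommenting the `temperature` entries in the `instance` dict and in the `generate()` call adds temperature sampling on top of top-k/top-p.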

## Next Goals

  1. Try more data, such as Dolly V2, WizardLM, and others (we are open to suggestions).
  2. Try bigger OpenLLaMA models: 7B and 13B.
  3. Try better GPUs for training; we couldn't get 8x A100 (40GB), which seem to be in high demand right now.
  4. Provide more options for text-generation UIs (maybe https://github.com/oobabooga/text-generation-webui).
  5. Provide 4-bit GGML/GPTQ quantized models (maybe TheBloke can help here).