
Llama 2 7B Instruction Generator

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This model builds on the 7B pretrained variant in the Hugging Face Transformers format.

philschmid/llama-2-7b-instruction-generator is a fine-tuned version of Llama 2 7B that generates an instruction for a given input. The model was fine-tuned using the Alpaca prompt format and a modified version of the Dolly dataset. Below you can find an example.

### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM. 

### Input:
Dear [boss name],

I'm writing to request next week, August 1st through August 4th,
off as paid time off.

I have some personal matters to attend to that week that require 
me to be out of the office. I wanted to give you as much advance 
notice as possible so you can plan accordingly while I am away.

Please let me know if you need any additional information from me 
or have any concerns with me taking next week off. I appreciate you 
considering this request.

Thank you, [Your name]

### Response:
Write an email to my boss that I need next week 08/01 - 08/04 off.

Everything after ### Response: will be generated by the model.

The idea behind the model is to synthetically generate instruction data from unsupervised text, such as emails, in order to personalize LLMs.
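
As an illustration, wrapping a raw document in the template above could look like the minimal sketch below. The helper name build_prompt is an assumption for this example, not part of the released code.

# Minimal sketch (assumption: the helper name `build_prompt` is not part of this repo).
# Wraps a raw document in the Alpaca-style template shown above; the model then
# completes everything after "### Response:".
def build_prompt(document: str) -> str:
    return f"""### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.

### Input:
{document}

### Response:
"""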

Model Date

July 25, 2023

How to use the model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# load base LLM model and tokenizer (load_in_4bit requires the bitsandbytes package)
model = AutoModelForCausalLM.from_pretrained(
    "philschmid/llama-2-7b-instruction-generator",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    load_in_4bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("philschmid/llama-2-7b-instruction-generator")

prompt = f"""### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM. 

### Input:
Dear [boss name],

I'm writing to request next week, August 1st through August 4th,
off as paid time off.

I have some personal matters to attend to that week that require 
me to be out of the office. I wanted to give you as much advance 
notice as possible so you can plan accordingly while I am away.

Please let me know if you need any additional information from me 
or have any concerns with me taking next week off. I appreciate you 
considering this request.

Thank you, [Your name]

### Response:
"""

# tokenize the prompt and generate on the GPU
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9, temperature=0.9)

# decode and strip the prompt so only the generated instruction remains
print(f"Generated instruction:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")

Evaluated example

Prompt:
Plastic is made from oil, natural gas and even plant oils during refining of these oils into other products like gasoline.  Ethane and propane are created when treated with heat during a refinery process called cracking.  This turns the Ethane and propane into ethylene and propylene which are used with other chemical ingredients to create polymers that are the base of what plastic is made out of.

Generated instruction:
Given this paragraph, where does plastic come from?

Ground truth:
How is plastic made?

Future plans include experimenting with bigger model sizes and fine-tuning from the chat models.
