---
license: llama2
library_name: peft
tags:
- llama-2
datasets:
- ehartford/dolphin
inference: false
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-13b-hf
---

# llama-2-13b-dolphin 🦙🐬

This instruction-tuned model was built via parameter-efficient QLoRA finetuning of [llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the first 100k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single A6000 (48 GB) for roughly 18 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

### Benchmark metrics

| Metric               | Value |
|----------------------|-------|
| MMLU (5-shot)        | 55.76 |
| ARC (25-shot)        | 59.98 |
| HellaSwag (10-shot)  | 82.43 |
| TruthfulQA (0-shot)  | 42.59 |
| Avg.                 | 60.19 |

We used EleutherAI's [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as Hugging Face's [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
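For reference, below is a minimal sketch of how scores like these can be produced with the harness's Python API. Treat it as an illustration under stated assumptions, not the exact command behind the table: the leaderboard pins a specific harness revision, and task names, few-shot counts, and model-loading arguments can differ across harness versions.

```python
# Illustrative only: the harness version, task names, and model_args here are assumptions;
# see the Open LLM Leaderboard "About" page for the exact pinned configuration.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",  # Hugging Face causal-LM backend
    model_args="pretrained=dfurman/llama-2-13b-dolphin-peft",  # a PEFT adapter may need extra args depending on the harness version
    tasks=["arc_challenge", "hellaswag"],  # e.g., ARC and HellaSwag from the table above
    num_fewshot=25,  # applies to all tasks in one call; run tasks separately to match the per-task shot counts above
    batch_size=2,
)
print(results["results"])
```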
### Helpful Links

* Model license: Llama 2 Community License Agreement
* Basic usage: [notebook](assets/basic_inference_llama_2_13b_dolphin.ipynb)
* Finetuning script: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-13b-dolphin-peft.py)
* Loss curves: [plot](https://huggingface.co/dfurman/llama-2-13b-dolphin-peft#finetuning-description)
* Runtime stats: [table](https://huggingface.co/dfurman/llama-2-13b-dolphin-peft#runtime-tests)

### Example prompts and responses

Example 1:

**User**:
> You are a helpful assistant. Write me a numbered list of things to do in New York City.\n

**llama-2-13b-dolphin-peft**:
> 1. Visit the iconic Statue of Liberty and Ellis Island.
> 2. Take a stroll through Central Park and enjoy its many attractions.
> 3. Explore the world-renowned museums, such as the Metropolitan Museum of Art and the Museum of Modern Art.
> 4. Experience the vibrant energy of Times Square and take in the bright lights and billboards.
> 5. Visit the 9/11 Memorial and Museum to pay tribute to those who lost their lives in the attacks.
> 6. Enjoy a Broadway show or a concert at one of the many theaters and venues in the city.
> 7. Take a ride on the Staten Island Ferry for a free view of the Statue of Liberty and the New York City skyline.
> 8. Shop at the famous Fifth Avenue stores and explore the high-end boutiques.
> 9. Indulge in a variety of cuisines at one of the many restaurants in the city.
> 10. Visit the Empire State Building and enjoy the panoramic views of the city from the observation deck.

Example 2:

**User**:
> You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n

**llama-2-13b-dolphin-peft**:
> Subject: Friday Dinner Party Invitation
>
> Dear Friends,
>
> I hope this email finds you well. I'm excited to invite you all to a dinner party on Friday, March 10th, at 7:00 PM. The address is 123 Main Street, Anytown, USA.
>
> Please RSVP by Wednesday, March 8th, so I can plan accordingly. I look forward to seeing you all and sharing a delicious meal together!
>
> Best,
> Your Friendly Assistant
>
> P.S. If you have any dietary restrictions or allergies, please let me know in your RSVP. Thank you!

## Model Description

The architecture is a modification of a standard decoder-only transformer. Relative to that baseline, the llama-2-13b models differ in the following ways:

* They use the [SwiGLU activation function](https://arxiv.org/abs/2002.05202) in their feed-forward blocks (a minimal sketch follows the table below)
* They use [rotary positional embeddings](https://arxiv.org/abs/2104.09864) (RoPE) rather than absolute positional embeddings

| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 13B   |
| tokens          | 2.0T  |
| vocab size      | 32000 |
| sequence length | 4096  |
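For illustration, here is a minimal PyTorch sketch of a SwiGLU feed-forward block of the kind used in the Llama-2 architecture. The class name, layer names, and dimensions are assumptions chosen for readability; this is not an excerpt of the actual model code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLUFeedForward(nn.Module):
    """Minimal SwiGLU MLP sketch: down(SiLU(gate(x)) * up(x)).

    Names and sizes are illustrative, not taken from the llama-2-13b weights.
    """

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: a SiLU-gated linear unit in place of a plain ReLU/GELU MLP
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


# Example: one token embedding through the block, with roughly 13B-scale dimensions
block = SwiGLUFeedForward(hidden_size=5120, intermediate_size=13824)
out = block(torch.randn(1, 5120))
```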
## Finetuning Description

This model was trained on a single A6000 (48 GB) for about 18 hours using the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_13_00_log_loss_curves_llama-2-13b-dolphin.png)

The above loss curve was generated from the run's private wandb.ai log.

## Pretraining Data

For more details on the pretraining process, see [Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf). The data was tokenized using the [Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) tokenizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

## How to Use

Basic usage: [notebook](assets/basic_inference_llama_2_13b_dolphin.ipynb)

Install and import the package dependencies:

```python
!pip install -q -U huggingface_hub peft transformers torch accelerate
```

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
```

Sign into a HF account with access to Llama-2:

```python
from huggingface_hub import notebook_login

notebook_login()
```

Basic model loading:

```python
peft_model_id = "dfurman/llama-2-13b-dolphin-peft"
config = PeftConfig.from_pretrained(peft_model_id)

tokenizer = AutoTokenizer.from_pretrained(
    config.base_model_name_or_path,
    use_auth_token=True,
)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    use_auth_token=True,
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```
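The imports above include `BitsAndBytesConfig`, which is only needed if you want to load the base model in 4-bit to cut VRAM usage (QLoRA-style). Below is a hedged sketch of that variant, reusing `config` and `peft_model_id` from the block above; the quantization settings shown are common defaults, not necessarily what the linked notebook uses, and `bitsandbytes` must also be installed (e.g., `pip install -q -U bitsandbytes`).

```python
# Optional: load the base model in 4-bit to reduce VRAM usage.
# These quantization settings are illustrative defaults, not the notebook's exact config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    use_auth_token=True,
)

# The LoRA adapter is attached in the same way as above
model_4bit = PeftModel.from_pretrained(model_4bit, peft_model_id)
```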
Once loaded, the model and tokenizer can be used with the following code:

```python
def llama_generate(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    prompt: str,
    max_new_tokens: int = 128,
    temperature: float = 0.92,
) -> str:
    """
    Generate a response from the model for a given prompt.

    Uses Hugging Face GenerationConfig defaults:
    https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig

    Args:
        model (transformers.AutoModelForCausalLM): Llama-2 model for text generation
        tokenizer (transformers.AutoTokenizer): Tokenizer for the model
        prompt (str): Prompt for text generation
        max_new_tokens (int, optional): Max new tokens to generate after the prompt. Defaults to 128.
        temperature (float, optional): The value used to modulate the next token probabilities. Defaults to 0.92.
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Tokenize the prompt and move it to the model's device
    inputs = tokenizer(
        [prompt],
        return_tensors="pt",
        return_token_type_ids=False,
    ).to(device)

    # When running Torch modules in lower precision, it is best practice to use
    # the torch.autocast context manager.
    with torch.autocast("cuda", dtype=torch.bfloat16):
        response = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            return_dict_in_generate=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Decode the generated tokens and strip the prompt from the output
    decoded_output = tokenizer.decode(
        response["sequences"][0],
        skip_special_tokens=True,
    )
    return decoded_output[len(prompt):]
```

We can now generate text! For example:

```python
prompt = "### Human: Write me a numbered list of things to do in New York City.### Assistant: "

response = llama_generate(
    model,
    tokenizer,
    prompt,
    max_new_tokens=250,
    temperature=0.92,
)

print(response)
```

### Runtime tests

| Runtime / 50 tokens (sec) | GPU                 | Attention | torch dtype | VRAM (GB) |
|:-------------------------:|:-------------------:|:---------:|:-----------:|:---------:|
| 2.93                      | 1x A100 (40 GB SXM) | torch     | bfloat16    | 25        |
| 3.24                      | 1x A6000 (48 GB)    | torch     | bfloat16    | 25        |

The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-13b-dolphin-peft.ipynb).

## Acknowledgements

This model was finetuned by Daniel Furman on July 22, 2023, and is intended primarily for research purposes.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Meta citation for llama-2 blog

```
@online{Meta2023Introducing,
    author  = {Meta AI},
    title   = {Meta and Microsoft Introduce the Next Generation of Llama},
    year    = {2023},
    url     = {https://about.fb.com/news/2023/07/llama-2/},
    note    = {Accessed: 2023-07-24},
    urldate = {2023-07-24}
}
```

---

### Framework versions

- PEFT 0.5.0.dev0