---
license: llama2
base_model: meta-llama/Llama-2-13b-chat-hf
inference: false
language:
- en
model_creator: Meta Llama 2
model_link: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
model_name: Llama 2 13B Chat
model_type: llama
pipeline_tag: text-generation
quantized_by: FriendliAI
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
arxiv: 2307.09288
---
# Llama 2 13B Chat - FP8

- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 13B Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)

## Description

This repo contains the Llama 2 13B Chat model quantized to FP8 by FriendliAI, significantly enhancing its inference efficiency while maintaining high accuracy. Note that FP8 is only supported by the NVIDIA Ada, Hopper, and Blackwell GPU architectures. Check out [FriendliAI documentation](https://docs.friendli.ai/) for more details.

## License

Refer to the license of the original model card.

## Compatibility

This model is compatible with **[Friendli Container](https://friendli.ai/products/container/)**.

## Prerequisites

- Before you begin, make sure you have signed up for [Friendli Suite](https://suite.friendli.ai/). **You can use Friendli Containers free of charge for four weeks.**
- Prepare a Personal Access Token following [this guide](#preparing-personal-access-token).
- Prepare a Friendli Container Secret following [this guide](#preparing-container-secret).
- Install the Hugging Face CLI with `pip install -U "huggingface_hub[cli]"`.

### Preparing Personal Access Token

A PAT (Personal Access Token) is the user credential for logging into our container registry.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **[User Settings > Tokens](https://suite.friendli.ai/user-settings/tokens)** and click **'Create new token'**.
3. Save your created token value.

### Preparing Container Secret

A container secret is a credential used to launch Friendli Container images. You should pass the container secret as an environment variable when running the container image.

1. Sign in to [Friendli Suite](https://suite.friendli.ai/).
2. Go to **Container > Container Secrets** and click **'Create secret'**.
3. Save your created secret value.

### Pulling Friendli Container Image

1. Log in to the Docker client using the personal access token created as outlined in [this guide](#preparing-personal-access-token).

   ```sh
   export FRIENDLI_PAT="YOUR PAT"
   docker login registry.friendli.ai -u $YOUR_EMAIL -p $FRIENDLI_PAT
   ```

2. Pull the image.

   ```sh
   docker pull registry.friendli.ai/trial
   ```

## Running Friendli Container

Once you have prepared the Friendli Container image, you can launch it to create a serving endpoint.

```sh
export MODEL_DIR=$PWD/FriendliAI--Llama-2-13b-chat-hf-fp8
export FRIENDLI_CONTAINER_SECRET="YOUR CONTAINER SECRET"
export FRIENDLI_CONTAINER_IMAGE="registry.friendli.ai/trial"
export GPU_ENUMERATION='"device=0"'

huggingface-cli download FriendliAI/Llama-2-13b-chat-hf-fp8 \
  --local-dir $MODEL_DIR \
  --local-dir-use-symlinks False

docker run \
  --gpus $GPU_ENUMERATION --network=host --ipc=host \
  -v $MODEL_DIR:/model \
  -e FRIENDLI_CONTAINER_SECRET=$FRIENDLI_CONTAINER_SECRET \
  $FRIENDLI_CONTAINER_IMAGE /bin/bash -c \
  "/root/launcher \
    --web-server-port 6000 \
    --ckpt-path /model \
    --ckpt-type hf_safetensors"
```

---

# Original model card: Meta Llama 2's Llama 2 13B Chat

# **Llama 2**

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

## Model Details

*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
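As a quick sanity check on the figures in the table above, the 2.0T-token budget and the 4M-token global batch size together imply the approximate number of optimizer steps taken during pretraining. This is a back-of-the-envelope sketch; the step count itself is not stated in the card.

```sh
# Implied pretraining step count:
# total tokens / global batch size = 2.0T / 4M tokens per step.
TOTAL_TOKENS=2000000000000   # 2.0T pretraining tokens
BATCH_TOKENS=4000000         # 4M-token global batch size
echo $(( TOTAL_TOKENS / BATCH_TOKENS ))   # prints 500000
```

That is, each model variant saw on the order of 500k gradient updates over its pretraining data.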
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<
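For reference, the chat formatting mentioned above can be sketched as follows. This is a minimal shell sketch: the tag layout follows Meta's published llama reference implementation, and `SYSTEM_MSG` / `USER_MSG` are made-up example values (note that the `<s>` BOS token is normally added by the tokenizer, not written into the string).

```sh
# Llama-2-Chat single-turn prompt layout (tags per Meta's reference code).
# SYSTEM_MSG and USER_MSG are placeholder example values.
SYSTEM_MSG="You are a helpful assistant."
USER_MSG="Explain FP8 quantization in one sentence."

PROMPT="[INST] <<SYS>>
${SYSTEM_MSG}
<</SYS>>

${USER_MSG} [/INST]"

printf '%s\n' "$PROMPT"
```

Serving stacks that apply the model's chat template construct this string for you; the sketch only illustrates what the template produces for one system-plus-user turn.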