---
library_name: pytorch
license: llama3
pipeline_tag: text-generation
tags:
- llm
- generative_ai
- quantized
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/llama_v3_2_3b_chat_quantized/web-assets/model_demo.png)

# Llama-v3.2-3B-Chat: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks

Llama 3 is a family of LLMs. The "Chat" suffix indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights, 16-bit activations), with a few layers quantized to w8a16 (8-bit weights, 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-Quantized's latency.

This is based on the implementation of Llama-v3.2-3B-Chat found
[here](https://github.com/meta-llama/llama3/tree/main). More details on model performance
across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_2_3b_chat_quantized).

### Model Details

- **Model Type:** Text generation
- **Model Stats:**
  - Input sequence length for Prompt Processor: 128
  - Context length: 4096
  - Number of parameters: 3B
  - Model size: 2.4 GB
  - Precision: w4a16 + w8a16 (a few layers)
  - Number of key-value heads: 8
  - Model-1 (Prompt Processor): Llama-PromptProcessor-Quantized
  - Prompt processor input: 128 tokens + position embeddings + attention mask + KV cache inputs
  - Prompt processor output: 128 output tokens + KV cache outputs
  - Model-2 (Token Generator): Llama-TokenGenerator-Quantized
  - Token generator input: 1 input token + position embeddings + attention mask + KV cache inputs
  - Token generator output: 1 output token + KV cache outputs
  - Use: Initiate the conversation with the prompt processor, then use the token generator for subsequent iterations (see the sketch after the table below).
  - Minimum QNN SDK version required: 2.27.7
  - Supported languages: English.
  - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
  - Response Rate: Rate of response generation after the first response token.

| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|
| Llama-v3.2-3B-Chat | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 23.4718 | 0.088195 - 2.82225 |

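The "Use" note above describes a two-stage decode: one prompt-processor pass to populate the KV cache, then one token-generator pass per output token. Below is a minimal sketch of that control flow; the `prompt_processor` and `token_generator` callables are hypothetical stand-ins for the two compiled models, not part of this repository.

```python
from typing import Callable, List, Tuple

# Opaque KV-cache handle; the loop never inspects it (hypothetical type).
KVCache = object

def generate(
    prompt_tokens: List[int],
    prompt_processor: Callable[[List[int]], Tuple[List[int], KVCache]],
    token_generator: Callable[[int, KVCache], Tuple[int, KVCache]],
    max_new_tokens: int,
    eos_id: int,
) -> List[int]:
    """Two-stage decode: one prompt-processor pass, then per-token generation."""
    # Stage 1: the prompt processor consumes the prompt (up to 128 tokens per
    # pass) and returns output tokens plus a populated KV cache. Its latency
    # dominates Time To First Token.
    out_tokens, kv_cache = prompt_processor(prompt_tokens)
    generated = [out_tokens[-1]]
    # Stage 2: the token generator consumes 1 token + the KV cache per step.
    # The throughput of this loop is the Response Rate reported above.
    while len(generated) < max_new_tokens:
        next_token, kv_cache = token_generator(generated[-1], kv_cache)
        if next_token == eos_id:
            break
        generated.append(next_token)
    return generated
```
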
## Deploying Llama 3.2 on-device

Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.

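Before starting the tutorial, the model assets can be fetched locally with `huggingface_hub`. A minimal sketch; the `repo_id` below is an assumption based on this card's name, so substitute the actual repository id.

```python
# Minimal sketch: download this repository's assets for the deployment tutorial.
# The repo_id is an assumption based on this model card's name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="qualcomm/Llama-v3.2-3B-Chat")
print(f"Model assets downloaded to: {local_dir}")
```
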
## License
* The license for the original implementation of Llama-v3.2-3B-Chat can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).

## References
* [LLaMA: Open and Efficient Foundation Language Models](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/)
* [Source Model Implementation](https://github.com/meta-llama/llama3/tree/main)

## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).

## Usage and Limitations

This model may not be used for or in connection with any of the following applications:

- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation.