Create README.md
---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
library: llama.cpp
library_link: https://github.com/ggerganov/llama.cpp
base_model:
- microsoft/phi-2
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
- gguf
---

# Phi-2 Model Card

## Model Summary

Phi-2 is a Transformer-based model with **2.7 billion** parameters. It was trained on a variety of high-quality sources, including the datasets used for [Phi-1.5](https://huggingface.co/microsoft/phi-1.5) and additional filtered web content for improved safety and educational value. This model provides a compact option for exploring safety in language models, such as toxicity reduction and controllability, making it a strong choice for research into responsible AI usage.

**Primary Specializations:**

- **QA Format**: Handles concise question-answering and analogy prompts.
- **Code Format**: Demonstrates strong performance in Python code generation.
- **Chat Format**: Limited support for multi-turn dialogues and chat-based tasks.

## Model Information

- **Architecture**: Transformer
- **Parameter Count**: 2.7B
- **Training Data**: Filtered text datasets including websites and synthetic educational resources.
- **Intended Use**: QA and Python code generation. Not recommended for broad general-purpose NLP tasks without further evaluation.

## Quantized Model Files

The following quantized files are provided for use with `llama.cpp` (a sketch of how such files are produced follows the list):

1. **f16 (16-bit float precision)**:
   - `gguf` format for a lower memory footprint while retaining most of the original accuracy.
   - Suitable for systems with limited GPU memory.
   - **Recommended Use**: Code generation and QA tasks requiring higher accuracy.

2. **q8_0 (8-bit integer precision)**:
   - Quantized for reduced storage requirements and optimized for lightweight inference.
   - Recommended for CPU-based inference or setups with memory constraints.
   - **Recommended Use**: Chat-style completions and small-scale experiments.

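As a rough sketch of how such files are typically produced, the commands below convert the original Hugging Face checkpoint to GGUF at f16 and then derive the q8_0 file with `llama.cpp`'s quantization tool. Script and binary names vary across `llama.cpp` versions (older trees ship `convert-hf-to-gguf.py` and `quantize`), so treat the exact paths and names here as assumptions rather than a definitive recipe:

```bash
# Convert the original Hugging Face checkpoint to GGUF at f16 precision.
# The script name differs between llama.cpp versions; adjust to match your checkout.
python convert_hf_to_gguf.py /path/to/microsoft/phi-2 \
  --outfile ggml-model-f16.gguf --outtype f16

# Derive the 8-bit q8_0 file from the f16 file.
./build/bin/llama-quantize ggml-model-f16.gguf ggml-model-q8_0.gguf q8_0
```
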
### Example: `llama-cli`

For interactive usage, start a chat session with:

```bash
./build/bin/llama-cli -m /mnt/scsm/models/microsoft/phi-2/ggml-model-f16.gguf --color --seed 1337 --ctx-size 2048 --n-predict -1 --threads 8 --repeat-penalty 1.25 --n-gpu-layers 32 --conversation --prompt "My name is Phi. I am a supportive and helpful assistant." 2> /dev/null
<|im_start|>system
My name is Phi. I am a supportive and helpful assistant.<|im_end|>

> Hello! My name is Austin. What is your name?
My name is Phi. Nice to meet you, Austin.<|im_end|>

>
```

### Example: `llama-server`

```bash
./build/bin/llama-server -m /mnt/scsm/models/microsoft/phi-2/ggml-model-f16.gguf --port 8080 --threads $(nproc) --n-gpu-layers 32 --ctx-size 2048
```

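Once the server is running, completions can be requested over its HTTP API. The snippet below is a minimal sketch against the server's default `/completion` endpoint on the port chosen above; endpoint paths and field names can differ between `llama.cpp` releases, so consult the server documentation for your build:

```bash
# Minimal completion request against llama-server (started on port 8080 above).
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Instruct: Explain what a GGUF file is.\nOutput:", "n_predict": 128}'
```
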
## Performance and Intended Use

The server, paired with a client, is typically a more user-friendly option than the example CLI application, e.g.:

```sh
python -m llama_cpp_client.client --stop "</s>,<|endoftext|>,<|im_end|>" --session-name "test-phi-2-00" --completions
```

### Example: QA Completion

```markdown
**user**
Instruct: Explain why mathematics is often called the "language of the universe."
Output:

**completion**
Instruct: Explain why mathematics is often called the "language of the universe."
Output: Mathematics is often referred to as the "language of the universe" because it provides a way to describe and understand the natural world. Just as different languages have their own grammar and vocabulary, mathematics has its own set of rules and symbols. These rules allow us to communicate complex ideas and relationships in a precise and universal manner. Mathematics provides a framework for describing and analyzing patterns, structures, and processes that exist in the universe. It allows us to make predictions, solve problems, and uncover the underlying principles that govern the world around us. From the movement of celestial bodies to the behavior of subatomic particles, mathematics provides a common language through which we can explore and understand the mysteries of the universe.
```

### Example: Code Completion

```markdown
**user**
def generate_fibonacci(n: int) -> list[int]:
    """

**completion**
def generate_fibonacci(n: int) -> list:
    """Generate the Fibonacci sequence up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result
```

## Known Limitations

- **Attention Overflow in FP16**: Phi-2 may experience attention overflow issues in FP16 precision. If you encounter this, consider enabling or disabling autocast in the attention mechanism (`PhiAttention.forward()`).
- **Niche Specialization**: The model has been trained for QA, chat, and Python code formats. It might not perform well on unrelated tasks or creative writing.

## Safety and Responsible Use

Phi-2 is provided under the MIT license for research and educational purposes. Users should be mindful of its limitations and evaluate outputs carefully before using them in real-world applications. The model was designed to reduce harmful completions, but it may still produce biased or undesirable results in some scenarios.

For more details, please refer to the official [license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).