Update README.md #14
by MaziyarPanahi - opened
README.md
CHANGED
@@ -120,21 +120,6 @@ model-index:
 
 This model is a fine-tuned version of the powerful `Qwen/Qwen2-72B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
 
-## Model Details
-
-- **Base Model**: Qwen/Qwen2-72B-Instruct
-- **Training**: Fine-tuned on a diverse dataset to enhance performance
-- **Size**: 72 billion parameters
-- **Language**: Multilingual (primary focus on English and Chinese)
-
-## Key Features
-
-- Improved performance across all benchmarks
-- Enhanced reasoning and analytical capabilities
-- Better handling of complex, multi-turn conversations
-- Expanded knowledge base for more accurate and up-to-date information
-- Increased creativity for open-ended tasks
-
 ## Use Cases
 
 This model is suitable for a wide range of applications, including but not limited to:
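Since the trimmed card keeps only the short description and the Use Cases section, a minimal usage sketch may help readers of this model card. This is a hedged example, not part of the PR: the repository id below is a placeholder (the diff does not name the final model repo), and the calls assume the standard Hugging Face `transformers` chat-template workflow for Qwen2-Instruct-style fine-tunes.

```python
# Minimal sketch of loading a Qwen2-72B-Instruct fine-tune with Hugging Face transformers.
# "MaziyarPanahi/<model-name>" is a placeholder; substitute the actual repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/<model-name>"  # hypothetical placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 72B parameters: multi-GPU or quantization is typically required
    device_map="auto",
)

# Qwen2-Instruct models ship a chat template; build the prompt with apply_chat_template.
messages = [{"role": "user", "content": "Summarize the key ideas behind transformer attention."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```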