AlanRobotics committed
Commit bdd3ce2
1 Parent(s): 1a1ed0d

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -8,9 +8,9 @@ pipeline_tag: text-generation
 
 # Cotype-Nano🤖
 
-MTSAIR/Cotype-Nano – это легковесный ИИ на основе meta-llama/Llama-3.2-1B, разработанный для выполнения задач с минимальными ресурсами. Он оптимизирован для быстрого и эффективного взаимодействия с пользователями, обеспечивая высокую производительность даже в условиях ограниченных ресурсов.
+MTSAIR/Cotype-Nano – это легковесный ИИ, разработанный для выполнения задач с минимальными ресурсами. Он оптимизирован для быстрого и эффективного взаимодействия с пользователями, обеспечивая высокую производительность даже в условиях ограниченных ресурсов.
 
-Cotype Nano is a lightweight AI based on meta-llama/Llama-3.2-1B, designed to perform tasks with minimal resources. It is optimized for fast and efficient interaction with users, providing high performance even under resource-constrained conditions.
+Cotype Nano is a lightweight AI designed to perform tasks with minimal resources. It is optimized for fast and efficient interaction with users, providing high performance even under resource-constrained conditions.
 
 ### Inference with vLLM
 ```sh
@@ -103,11 +103,11 @@ print(res[0]['generated_text'][-1]['content'])
 
 The model was trained in two stages. In the first stage, MLP layers were trained on mathematics and code. In the second stage, the entire model was trained on internal and open synthetic instructional datasets.
 
-### ru-llm-arena: **21.3** (local measurement)
+### ru-llm-arena: **29.4** (local measurement)
 
 | **Model** | **Score** | **95% CI** | **Avg Tokens** |
 | ------------------------------------------- | --------- | --------------- | -------------- |
-| **MTSAIR/Cotype-Nano** | **29.4** | **+1.7 / -1.6** | **616** |
+| **MTSAIR/Cotype-Nano** | **29.4** | **+1.7 / -1.6** | **588** |
 | storm-7b | 20.62 | +2.0 / -1.6 | 419.32 |
 | neural-chat-7b-v3-3 | 19.04 | +2.0 / -1.7 | 927.21 |
 | Vikhrmodels-Vikhr-Llama-3.2-1B-instruct | 19.04 | +1.3 / -1.6 | 958.63 |
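The second hunk's context line `print(res[0]['generated_text'][-1]['content'])` comes from the README's `transformers` example, which is truncated in this diff. As a sketch, this is the result shape that indexing assumes for a chat-style `text-generation` pipeline; the sample messages below are fabricated for illustration, not model output:

```python
# Shape of a transformers chat-pipeline result that the README's
# res[0]['generated_text'][-1]['content'] indexing assumes.
# The message contents here are illustrative placeholders.
res = [
    {
        "generated_text": [
            {"role": "user", "content": "Hello!"},
            {"role": "assistant", "content": "Hi, how can I help?"},
        ]
    }
]

# res[0]          -> first returned sequence
# ['generated_text'] -> the chat history including the new reply
# [-1]            -> the last turn, i.e. the assistant's answer
reply = res[0]["generated_text"][-1]["content"]
print(reply)  # -> Hi, how can I help?
```

In the actual README example, `res` would come from calling a `transformers` pipeline on a list of chat messages; only the indexing step is reproduced here.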
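The arena table reports asymmetric 95% confidence intervals. As a rough reading that simply compares interval endpoints, the gap between Cotype-Nano and the next model exceeds both intervals:

```python
# Scores and 95% CI half-widths copied from the ru-llm-arena table above.
cotype_score, cotype_minus = 29.4, 1.6   # MTSAIR/Cotype-Nano: 29.4 (+1.7 / -1.6)
storm_score, storm_plus = 20.62, 2.0     # storm-7b: 20.62 (+2.0 / -1.6)

# Lower bound of Cotype-Nano vs. upper bound of storm-7b.
cotype_low = cotype_score - cotype_minus   # ≈ 27.8
storm_high = storm_score + storm_plus      # ≈ 22.62

# The intervals do not overlap, so the reported lead is larger
# than the stated measurement uncertainty.
print(cotype_low > storm_high)  # -> True
```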