DeepMount00 committed
Commit 10af8a0 • 1 Parent(s): fd77bd3
Update README.md
README.md CHANGED

@@ -1,6 +1,8 @@
 ---
 license: apache-2.0
 library_name: transformers
+language:
+- it
 ---
 
 # Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller 🚀
@@ -33,5 +35,4 @@ Sure, here are the results in a markdown table:
 
 ### Conclusion
 
-Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability.
-
+Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability.
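The change above adds a `language` key to the model card's YAML front matter so the Hub can index the model as Italian. As a minimal sketch of what that metadata looks like after the commit, the snippet below parses the front matter and reads the `language` list. The hand-rolled parser here is a simplified illustration for this exact shape of front matter, not the parser the Hub actually uses.

```python
# Sketch: extract YAML front matter from a model card and confirm it
# declares a language tag, mirroring the metadata added in this commit.
# Simplified stdlib-only parsing (assumption: flat keys plus "- item" lists).

README = """---
license: apache-2.0
library_name: transformers
language:
- it
---

# Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller
"""

def parse_front_matter(text: str) -> dict:
    """Parse the block between the leading '---' fences into a dict."""
    lines = text.splitlines()
    assert lines[0] == "---", "front matter must start with ---"
    end = lines[1:].index("---") + 1  # index of the closing fence
    meta = {}
    key = None
    for line in lines[1:end]:
        if line.startswith("- ") and key is not None:
            # List item belonging to the most recent key (e.g. language: - it)
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:  # scalar value on the same line
                meta[key] = value
    return meta

meta = parse_front_matter(README)
print(meta["language"])  # ['it']
```

After the commit, `meta["language"]` resolves to `['it']`, which is what makes the model discoverable under Italian-language filters on the Hub.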