SaisExperiments committed
Commit: cb87483
1 Parent(s): ccc33ca

Fix context length in model card

I'm like 99% sure gemma 2 2b is 8k :3
README.md CHANGED
@@ -33,7 +33,7 @@ Our appreciation for the sponsors of Dolphin 2.9.4:
 
 This model is based on Google Gemma2 2b, and is governed by the Gemma license.
 
-The base model has
+The base model has 8K context, and our finetuning used 8192 sequence length.
 
 `ollama run CognitiveComputations/dolphin-gemma2:2b`
 
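For reference, a minimal sketch of pinning the context window to the 8K limit noted in the diff when running the model through Ollama. The `num_ctx` setting and the interactive `/set` step are illustrative assumptions on my part, not part of this commit; only the `ollama run` command comes from the model card itself.

```sh
# Pull and start the model (command taken from the model card above)
ollama run CognitiveComputations/dolphin-gemma2:2b

# Inside the interactive session, raise the context window to the model's 8K limit
# (Ollama defaults to a smaller num_ctx; 8192 matches the finetuning sequence length)
>>> /set parameter num_ctx 8192
```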