jonabur committed
Commit 30bcf69
1 Parent(s): e5f4c86

update readme

Files changed (2)
  1. README.md +45 -33
  2. viking.png +0 -0
README.md CHANGED
@@ -14,21 +14,29 @@ language:
  - is
  ---
 
- <div align="center">
- <img src="./viking.png" width="200px">
- </div>
 
- # Viking 33B Model Card
 
  _**NOTE:** This is a **research checkpoint** of a model for which **training has not been completed.** It is being provided in its current state for research and testing purposes. **Care should be taken when using the outputs of the model.** Once pretraining has completed we intend to release additional instruction-tuned and chat-tuned varieties._
 
- Viking 33B is a 33B parameter decoder-only transformer pretrained on Finnish, English, Swedish, Danish, Norwegian, Icelandic and code. It is being trained on 2 trillion tokens (700 billion as of this release). Viking 33B is a fully open source model and is made available under the Apache 2.0 License.
 
- Viking was created in a collaboration between [SiloGen](https://www.silo.ai/silogen) from [Silo AI](https://www.silo.ai/), the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.
 
  This project is part of an ongoing effort to create open source large language models for non-English and especially low-resource languages like Finnish. The model is fluent in Finnish, English, and the Scandinavian languages, and is capable of basic translation between them. It is also able to understand and generate code.
 
- Viking 33B is part of the second set of models in our model family. Work is also underway on our next models, which will support even more languages and grouped query attention.
 
  ## Model Overview
  _**NOTE:** In addition to being an early research release, Viking is a base model which needs further fine tuning for most use cases._
@@ -44,32 +52,6 @@ Viking is a generative pretrained transformer using a LLaMA-like GPT architectur
  | vocab_size | 131072 |
  | sequence_length | 4096 |
 
- ## Viking Research Checkpoints
-
- Checkpoints are available as branches in the repository. Checkpoints will be released roughly every 100B tokens. The main branch will always point to the latest checkpoint. The following checkpoints are available:
-
- * [100B](https://huggingface.co/LumiOpen/Viking-33B/tree/100B)
- * [200B](https://huggingface.co/LumiOpen/Viking-33B/tree/200B)
- * [300B](https://huggingface.co/LumiOpen/Viking-33B/tree/300B)
- * [400B](https://huggingface.co/LumiOpen/Viking-33B/tree/400B)
- * [500B](https://huggingface.co/LumiOpen/Viking-33B/tree/500B)
- * [600B](https://huggingface.co/LumiOpen/Viking-33B/tree/600B)
- * [700B](https://huggingface.co/LumiOpen/Viking-33B/tree/700B)
- * [800B](https://huggingface.co/LumiOpen/Viking-33B/tree/800B)
- * [900B](https://huggingface.co/LumiOpen/Viking-33B/tree/900B)
- * [1000B](https://huggingface.co/LumiOpen/Viking-33B/tree/1000B)
-
- The transformers library allows you to load a checkpoint from a branch as follows:
-
- ```python
- branch = "200B"
- model = transformers.AutoModelForCausalLM.from_pretrained(
-     "LumiOpen/Viking-33B",
-     torch_dtype=torch.bfloat16,
-     revision=branch,
- )
- ```
-
  ## Training
 
  Viking 33B was trained on the LUMI supercomputer, using 1024 AMD MI250X GPUs. Each MI250X GPU has two Graphics Complex Dies (GCDs), giving a world size of 2048 during training. Training used activation checkpointing, a micro batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP=4, PP=4, DP=128.
@@ -101,6 +83,36 @@ Full details will be published soon.
 
  Full evaluation results will be published with the final model.
 
  ## Ethical Considerations and Limitations
 
  _Viking 33B is a release of a partially trained model, and special care should be taken when using any output._
 
  - is
  ---
 
+ # Viking 33B
 
  _**NOTE:** This is a **research checkpoint** of a model for which **training has not been completed.** It is being provided in its current state for research and testing purposes. **Care should be taken when using the outputs of the model.** Once pretraining has completed we intend to release additional instruction-tuned and chat-tuned varieties._
 
+ Viking 33B is a 33B parameter decoder-only transformer pretrained on Finnish,
+ English, Swedish, Danish, Norwegian, Icelandic and code. It is being trained
+ on 2 trillion tokens (1300B as of this release). Viking 33B is a fully open source model and is made available under the Apache 2.0 License.
 
+ Viking was created in a collaboration between the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, [SiloGen](https://www.silo.ai/silogen) from [Silo AI](https://www.silo.ai/), and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.
 
  This project is part of an ongoing effort to create open source large language models for non-English and especially low-resource languages like Finnish. The model is fluent in Finnish, English, and the Scandinavian languages, and is capable of basic translation between them. It is also able to understand and generate code.
 
+ ## Model Family
+
+ Viking is the second set of models released by LumiOpen and is available in three parameter counts:
+
+ * [Viking 7B](https://huggingface.co/LumiOpen/Viking-7B)
+ * [Viking 13B](https://huggingface.co/LumiOpen/Viking-13B)
+ * [Viking 33B](https://huggingface.co/LumiOpen/Viking-33B)
 
  ## Model Overview
  _**NOTE:** In addition to being an early research release, Viking is a base model which needs further fine tuning for most use cases._
 
  | vocab_size | 131072 |
  | sequence_length | 4096 |
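
For reference, these two values can be checked against the published checkpoint itself; the sketch below assumes the repository exposes a standard Hugging Face, LLaMA-style config, so the attribute names are an assumption rather than something stated in the card:

```python
# Inspect the published config to confirm the hyperparameters listed above
# (assumes a standard Hugging Face LLaMA-style config in the repository).
import transformers

config = transformers.AutoConfig.from_pretrained("LumiOpen/Viking-33B")
print(config.vocab_size)                # expected: 131072
print(config.max_position_embeddings)   # expected: 4096 (sequence_length)
```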
 
  ## Training
 
  Viking 33B was trained on the LUMI supercomputer, using 1024 AMD MI250X GPUs. Each MI250X GPU has two Graphics Complex Dies (GCDs), giving a world size of 2048 during training. Training used activation checkpointing, a micro batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP=4, PP=4, DP=128.
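
As a quick sanity check on those figures, the usual data/tensor/pipeline-parallel arithmetic (a sketch using standard formulas, not quoted from the model card) recovers the stated world size and the implied global batch size:

```python
# Worked check of the parallelism figures above (standard formulas; not from the card).
gpus = 1024            # AMD MI250X GPUs
gcds_per_gpu = 2       # each MI250X exposes two GCDs
tp, pp, dp = 4, 4, 128
micro_batch_size = 1
grad_accum_steps = 16
sequence_length = 4096

world_size = gpus * gcds_per_gpu
assert world_size == tp * pp * dp == 2048

# Global batch implied by micro batch size, gradient accumulation and data parallelism.
global_batch_sequences = micro_batch_size * grad_accum_steps * dp   # 2048 sequences/step
tokens_per_step = global_batch_sequences * sequence_length          # 8,388,608 tokens/step
print(world_size, global_batch_sequences, tokens_per_step)
```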
 
  Full evaluation results will be published with the final model.
 
+ ## Training Checkpoints
+
+ Training checkpoints are available as branches in the repository. Checkpoints will be released roughly every 100B tokens. The main branch will always point to the latest checkpoint. The following checkpoints are available:
+
+ * [100B](https://huggingface.co/LumiOpen/Viking-33B/tree/100B)
+ * [200B](https://huggingface.co/LumiOpen/Viking-33B/tree/200B)
+ * [300B](https://huggingface.co/LumiOpen/Viking-33B/tree/300B)
+ * [400B](https://huggingface.co/LumiOpen/Viking-33B/tree/400B)
+ * [500B](https://huggingface.co/LumiOpen/Viking-33B/tree/500B)
+ * [600B](https://huggingface.co/LumiOpen/Viking-33B/tree/600B)
+ * [700B](https://huggingface.co/LumiOpen/Viking-33B/tree/700B)
+ * [800B](https://huggingface.co/LumiOpen/Viking-33B/tree/800B)
+ * [900B](https://huggingface.co/LumiOpen/Viking-33B/tree/900B)
+ * [1000B](https://huggingface.co/LumiOpen/Viking-33B/tree/1000B)
+ * [1100B](https://huggingface.co/LumiOpen/Viking-33B/tree/1100B)
+ * [1200B](https://huggingface.co/LumiOpen/Viking-33B/tree/1200B)
+ * [1300B](https://huggingface.co/LumiOpen/Viking-33B/tree/1300B)
+
+ The transformers library allows you to load a checkpoint from a branch as follows:
+
+ ```python
+ import torch
+ import transformers
+
+ # Load the model weights from a specific training-checkpoint branch.
+ branch = "200B"
+ model = transformers.AutoModelForCausalLM.from_pretrained(
+     "LumiOpen/Viking-33B",
+     torch_dtype=torch.bfloat16,
+     revision=branch,
+ )
+ ```
+
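
Building on the snippet above, here is a minimal generation sketch for the loaded base model; the prompt and generation settings are illustrative and not part of the model card:

```python
# Minimal generation sketch for the base model (illustrative prompt and settings).
import torch
import transformers

model_name = "LumiOpen/Viking-33B"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
)

# Viking is a base (non-instruct) model, so plain text continuation works best.
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that a 33B-parameter model in bfloat16 needs roughly 66 GB for the weights alone, so multi-GPU or offloaded loading is typically required.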
  ## Ethical Considerations and Limitations
 
  _Viking 33B is a release of a partially trained model, and special care should be taken when using any output._
viking.png DELETED
Binary file (187 kB)