pszemraj committed c881248 (parent: e906b1c)

Update README.md

---
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
---

# tFINE-850m-24x24-1024ctx

A T5 model pretrained with nanoT5:

- ~850m parameters: 24 encoder layers, 24 decoder layers
- SentencePiece tokenizer with a 48k vocab and byte-pair fallback
- handles whitespace and related characters correctly (unlike the standard T5 tokenizer)
- 1024-token context length during pretraining
- `relative_attention_num_buckets` increased from the standard 32 to 48 for context-length upscaling
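
The last point matters because T5's relative attention maps each query–key distance to one of a fixed number of learned buckets: short distances get a bucket each, and longer distances share logarithmically spaced buckets, so extra buckets buy finer resolution at long range. A minimal scalar sketch of this bucketing scheme (mirroring, as an illustration, the tensorized logic in the `transformers` T5 implementation):

```python
import math

def relative_position_bucket(relative_position: int, bidirectional: bool = True,
                             num_buckets: int = 32, max_distance: int = 128) -> int:
    """Scalar sketch of T5-style relative position bucketing.

    relative_position is (key position - query position). Half the buckets
    cover each direction; within a half, small distances get one bucket each
    and larger distances share logarithmically spaced buckets up to max_distance.
    """
    bucket = 0
    if bidirectional:
        num_buckets //= 2
        if relative_position > 0:
            bucket += num_buckets  # second half of buckets for "future" keys
        relative_position = abs(relative_position)
    else:
        relative_position = -min(relative_position, 0)

    max_exact = num_buckets // 2
    if relative_position < max_exact:
        bucket += relative_position  # exact: one bucket per distance
    else:
        # logarithmic spacing between max_exact and max_distance
        log_ratio = math.log(relative_position / max_exact) / math.log(max_distance / max_exact)
        bucket += min(max_exact + int(log_ratio * (num_buckets - max_exact)),
                      num_buckets - 1)
    return bucket

# With 32 buckets, distances 16 and 20 collapse into the same bucket;
# with 48 buckets they stay distinguishable.
print(relative_position_bucket(-16), relative_position_bucket(-20))        # 10 10
print(relative_position_bucket(-16, num_buckets=48),
      relative_position_bucket(-20, num_buckets=48))                       # 13 14
```

With the default 32 buckets, distances 16 and 20 land in the same bucket, while 48 buckets keep them apart: the model can tell more long-range distances apart, which is the point of raising the value when scaling context length.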

## Experiment logs

Training consisted of two phases:

- TODO
- TODO