Update README.md
README.md CHANGED

@@ -10,7 +10,7 @@ Pretrained BigBird Model for Korean (**kobigbird-bert-base**)
 
 BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences.
 
-BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT.
+BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT.
 
 Model is warm started from Korean BERT’s checkpoint.
 
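For context, here is a minimal sketch of how a checkpoint like this is typically loaded with Hugging Face transformers and used with BigBird's block sparse attention at the 4096-token limit described above. The model id monologg/kobigbird-bert-base and the attention_type override are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch, assuming the checkpoint is published on the Hugging Face Hub
# as "monologg/kobigbird-bert-base" (hypothetical id for illustration).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monologg/kobigbird-bert-base")

# BigBird models accept an attention_type config field; "block_sparse" selects
# the sparse attention pattern (vs. "original_full" for BERT-style attention).
model = AutoModel.from_pretrained(
    "monologg/kobigbird-bert-base",
    attention_type="block_sparse",
)

# Encode a (potentially long) Korean document up to the 4096-token limit.
text = "한국어 장문 입력 예시입니다."  # example Korean input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```

Note that for very short inputs, the transformers implementation of BigBird falls back to full attention internally; block sparse attention pays off as sequence length grows toward 4096.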