bluenguyen committed
Commit • ffb8dea
1 Parent(s): ce983cc

Create README.md

README.md ADDED
@@ -0,0 +1,17 @@
---
language:
- vi
---

## Introduction

This model was initialized from [vinai/bartpho-word-base](https://huggingface.co/vinai/bartpho-word-base) and converted to [Allenai's Longformer Encoder-Decoder (LED)](https://github.com/allenai/longformer#longformer), following [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf).

To be able to process up to 16K tokens, *bartpho-word-base*'s position embedding matrix was simply copied 16 times.
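
The conversion script is not included in this repository, but the position-embedding extension can be sketched roughly as follows. This is only an illustrative approximation, assuming the standard MBart/BART attribute layout used by *bartpho-word-base*; the rest of the LED conversion (replacing the encoder self-attention with Longformer attention) is not shown.

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Illustrative sketch only: tile the learned position embeddings of
# vinai/bartpho-word-base 16 times so they cover roughly 16K positions.
model = AutoModelForSeq2SeqLM.from_pretrained("vinai/bartpho-word-base")

pos_emb = model.model.encoder.embed_positions        # MBart-style learned position embeddings
with torch.no_grad():
    extended = pos_emb.weight.repeat(16, 1)          # "copied 16 times" along the position axis
    pos_emb.weight = torch.nn.Parameter(extended)
    pos_emb.num_embeddings = extended.size(0)

model.config.max_position_embeddings = extended.size(0)
```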

This model is especially interesting for long-range summarization and question answering.
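
For orientation, a minimal usage sketch for long-document summarization is shown below. The repository id is a placeholder, and this base checkpoint needs to be fine-tuned before it produces meaningful summaries; depending on the converted architecture, a `global_attention_mask` (e.g. global attention on the first token) may also have to be passed.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder id: replace with this repository's id on the Hugging Face Hub.
model_name = "<this-model-id>"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

long_document = "..."  # a long Vietnamese document, up to ~16K tokens

inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```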

## Fine-tuning for a downstream task

[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how the LED model can be fine-tuned effectively on a downstream task.
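
The notebook above is the reference for the full procedure. As a rough, hypothetical outline only (the dataset name, column names, and hyperparameters below are placeholders, not values used by the author), fine-tuning for summarization with Hugging Face Transformers typically looks like this:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

model_name = "<this-model-id>"  # placeholder: id of this model on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder dataset with "document" and "summary" columns.
dataset = load_dataset("<your-summarization-dataset>")

def preprocess(batch):
    model_inputs = tokenizer(batch["document"], max_length=16384, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="led-bartpho-summarization",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=3,
    fp16=True,  # assumes a GPU
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized.get("validation"),
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```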