---
source_datasets: tau/scrolls
---
# qmsum-cleaned
## prefixes
It's worth noting that each "document" in `input` is prefixed with a question/prompt describing what the model is supposed to do. **You may want to handle this explicitly in some way, or account for these prefixes when training models on this dataset.**
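One simple way to handle the prefix explicitly is to detect and strip a known prompt from the front of `input`. This is only a sketch: the prompt list below is taken from the frequency table in this README, and the assumption that whitespace separates prompt from transcript should be checked against the actual data.

```python
# Sketch: strip a known question/prompt from the front of an `input`
# document. The prompt list and the whitespace-separator assumption
# are illustrative; verify against your copy of the dataset.

# Longer prompts first, so "Summarize the meeting." wins over
# "Summarize the meeting".
KNOWN_PROMPTS = (
    "Summarize the whole meeting.",
    "Summarize the meeting.",
    "Summarize the meeting",
)

def strip_prompt(document: str) -> tuple[str, str]:
    """Return (prompt, transcript); prompt is "" if nothing matched."""
    for prompt in KNOWN_PROMPTS:
        if document.startswith(prompt):
            return prompt, document[len(prompt):].lstrip()
    return "", document

prompt, transcript = strip_prompt(
    "Summarize the whole meeting. Project Manager: Okay, welcome everybody."
)
print(prompt)      # Summarize the whole meeting.
print(transcript)  # Project Manager: Okay, welcome everybody.
```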
The most frequent "prefixes" (first sentences, extracted with [sentence-splitter](https://github.com/mediacloud/sentence-splitter)) in the `train` split:

|    | Sentence                                                                      |   Count |
|---:|:------------------------------------------------------------------------------|--------:|
|  0 | Summarize the whole meeting.                                                  |     121 |
|  1 | Summarize the meeting                                                         |      25 |
|  2 | What did the team discuss about the product cost?                             |       4 |
|  3 | How did Marketing design the product evaluation?                              |       4 |
|  4 | Summarize the wrap up of the meeting.                                         |       3 |
|  5 | What did the group discuss about user requirements of the new remote control? |       3 |
|  6 | What did the team discuss during the product evaluation?                      |       3 |
|  7 | Summarize the meeting.                                                        |       2 |
|  8 | Summarize what was said about digits form                                     |       2 |
|  9 | What was discussed in the meeting?                                            |       2 |
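For reference, a table like the one above can be reproduced by counting the first sentence of each document. The README uses sentence-splitter for the split; the sketch below uses a naive regex stand-in to stay dependency-free, and `docs` is a tiny hypothetical sample rather than the real `train` split.

```python
import re
from collections import Counter

def first_sentence(text: str) -> str:
    # Naive stand-in for sentence-splitter: cut at the first
    # sentence-ending punctuation that is followed by whitespace.
    match = re.search(r"[.!?](?=\s)", text)
    return text[: match.end()] if match else text

# Hypothetical sample standing in for the `input` column of `train`.
docs = [
    "Summarize the whole meeting. Project Manager: Okay, welcome everybody.",
    "Summarize the whole meeting. Marketing: Here are the user study results.",
    "What was discussed in the meeting? Industrial Designer: The chip is standard.",
]

prefix_counts = Counter(first_sentence(doc) for doc in docs)
for sentence, count in prefix_counts.most_common():
    print(f"{count:>3}  {sentence}")
```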
## token counts
![counts](https://i.imgur.com/rARAOvr.png)