Languages:
Vietnamese

holylovenia committed
Commit 8d280bb
1 parent: 3b2dcca

Upload README.md with huggingface_hub

Files changed (1):
README.md (+10 -10)

README.md CHANGED

A Vietnamese dataset for social context summarization. The dataset contains 141 open-domain articles along with 3,760 sentences, 2,448 extracted standard sentences and comments as standard summaries, and 6,926 comments across 12 events. The dataset was manually annotated by humans. Note that the extracted standard summaries also include comments. The label of a sentence or comment was generated by voting among social annotators: for each sentence, every annotator makes a binary decision indicating whether the sentence is a summary candidate (YES) or not (NO). If three annotators vote YES, the sentence is labeled 3. The label of each sentence or comment therefore ranges from 1 to 5 (1: very poor, 2: poor, 3: fair, 4: good, 5: perfect). The standard summary sentences are those that receive at least three agreements from the annotators. The inter-annotator agreement, measured by Cohen's Kappa after validation among the annotators, is 0.685.
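
To make the voting scheme concrete, here is a minimal, purely illustrative Python sketch. The helper names are hypothetical (not part of the dataset or its loaders), and the assumption of five annotators is only implied by the 1-to-5 label range described above:

```
# Illustrative only: helper names are hypothetical and not part of the dataset.
# Five annotators are assumed, as implied by the 1-5 label range.

def label_from_votes(votes):
    """The label is the number of annotators who marked the sentence YES."""
    return sum(votes)

def is_standard_summary(votes):
    """Standard summary sentences receive at least three YES votes."""
    return label_from_votes(votes) >= 3

votes = [1, 0, 1, 1, 0]            # YES/NO decisions from five annotators
print(label_from_votes(votes))     # 3 -> "fair"
print(is_standard_summary(votes))  # True
```
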
  ## Languages

vie

## Supported Tasks

Summarization

  ## Dataset Usage

### Using `datasets` library

```
from datasets import load_dataset

# Load the dataset from the SEACrowd organization on the Hugging Face Hub
dset = load_dataset("SEACrowd/vsolscsum", trust_remote_code=True)
```
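
If the load above succeeds, `dset` is a Hugging Face `DatasetDict`. A minimal sketch for checking which splits and columns it actually exposes, rather than assuming any particular field names, is:

```
# Inspect the loaded DatasetDict: split names, sizes, and column names
for split_name, split in dset.items():
    print(split_name, split.num_rows, split.column_names)

# Peek at one record from the first split to see how examples are stored
print(dset[next(iter(dset))][0])
```
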

### Using `seacrowd` library

```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("vsolscsum", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("vsolscsum"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```

More details on how to load datasets with the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

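
As a purely illustrative follow-up (not from the official documentation), the 1-to-5 labels described above could be used to pull out the standard summary candidates. The split name `train` and the column names `text` and `label` are assumptions here, so verify the actual schema (e.g. via `dset.column_names`) before relying on them:

```
# Hypothetical filtering sketch: "train", "text", and "label" are assumed
# names and may not match the real schema; check dset.column_names first.
standard_candidates = [
    row["text"]
    for row in dset["train"]
    if row["label"] >= 3   # at least three YES votes, per the card above
]
print(len(standard_candidates))
```
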
  ## Dataset Homepage