privetin committed
Commit fa95361
1 Parent(s): b6de0cb

Initial commit of custom summarization dataset

README.md CHANGED
@@ -1,10 +1,51 @@
- ---
- license: cc-by-4.0
- task_categories:
- - summarization
- language:
- - en
- pretty_name: Custom CNN/Daily Mail Summarization Dataset
- size_categories:
- - n<1K
- ---
+
+ # Dataset Card for Custom Text Dataset
+
+ ## Dataset Name
+ Custom CNN/Daily Mail Summarization Dataset
+
+ ## Overview
+ This dataset is a custom version of the CNN/Daily Mail dataset, designed for text summarization tasks. It contains news articles and their corresponding summaries.
+
+ ## Composition
+ The dataset consists of two splits:
+ - Train: 1 custom example
+ - Test: 100 examples from the original CNN/Daily Mail dataset
+
+ Each example contains:
+ - 'sentence': The full text of a news article
+ - 'labels': The summary of the article
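For illustration only, a record in this shape might look like the following sketch; the article and summary text are invented placeholders, not taken from the dataset.

```python
# Hypothetical record illustrating the two fields; the text is a placeholder,
# not an actual example from the dataset.
example = {
    "sentence": "LONDON -- Full text of a news article goes here ...",
    "labels": "A short reference summary of that article goes here.",
}
```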
+
+ ## Collection Process
+ The training data is a single, manually created example, while the test data is sampled from the CNN/Daily Mail dataset (version 3.0.0) available on Hugging Face.
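The card does not state exactly how the 100 test examples were drawn. A minimal sketch of one way to produce such a sample with the `datasets` library is shown below; the column renaming and output path are assumptions based on the field names above and the repository layout, not a documented recipe.

```python
from datasets import DatasetDict, load_dataset

# Load the public CNN/Daily Mail dataset (version 3.0.0) from the Hugging Face Hub.
cnn_dm = load_dataset("cnn_dailymail", "3.0.0", split="test")

# Keep 100 examples; the actual sampling strategy used here is not documented.
sample = cnn_dm.select(range(100))

# Rename the original columns to the field names used in this card
# ("article" -> "sentence", "highlights" -> "labels") -- an assumption.
sample = sample.rename_columns({"article": "sentence", "highlights": "labels"})

# Save in the same layout as this repository's test/ folder (a DatasetDict
# holding a single "test" split) -- also an assumption.
DatasetDict({"test": sample}).save_to_disk("test")
```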
+
+ ## Preprocessing
+ No specific preprocessing was applied beyond the original CNN/Daily Mail dataset preprocessing.
+
+ ## How to Use
+ ```python
+ from datasets import load_from_disk
+
+ # Load the dataset
+ dataset = load_from_disk("./results/custom_dataset/")
+
+ # Access the data
+ train_data = dataset['train']
+ test_data = dataset['test']
+
+ # Example usage
+ print(train_data['sentence'])
+ print(train_data['labels'])
+ ```
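The path in the snippet above points to a local directory used when the dataset was created. Because this repository stores train/ and test/ as two separately saved DatasetDicts (see the dataset_dict.json files added below), a hedged alternative for loading from a local clone of the repository might look like this:

```python
from datasets import load_from_disk

# Assumes a local clone of this repository: train/ and test/ each contain a
# saved DatasetDict holding a single split.
train_dict = load_from_disk("train")  # DatasetDict with a "train" split
test_dict = load_from_disk("test")    # DatasetDict with a "test" split

train_data = train_dict["train"]
test_data = test_dict["test"]

print(train_data[0]["sentence"])
print(train_data[0]["labels"])
```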
+
+ ## Evaluation
+ This dataset is intended for text summarization tasks. Common evaluation metrics include ROUGE scores, which measure the overlap between generated summaries and reference summaries.
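As a non-authoritative illustration, ROUGE can be computed with the Hugging Face `evaluate` library (installed separately); the prediction and reference strings below are placeholders.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder strings standing in for a model's generated summary and the reference.
predictions = ["the court ruled the law unconstitutional on tuesday"]
references = ["on tuesday the supreme court struck down the law as unconstitutional"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum scores
```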
+
+ ## Limitations
+ - The training set is extremely small (1 example), which may limit its usefulness for model training.
+ - The test set is a subset of the original CNN/Daily Mail dataset, which may not represent the full diversity of news articles.
+
+ ## Ethical Considerations
+ - The dataset contains news articles, which may include sensitive or biased content.
+ - Users should be aware of potential copyright issues when using news content for model training or deployment.
+ - Care should be taken to avoid generating or propagating misleading or false information when using models trained on this dataset.
test/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["test"]}
test/test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e6aa13a3e10a33624931f6c220c9618528323886bd7b7ac334af681b8dc0646
+ size 346576
test/test/dataset_info.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "sentence": {
+       "feature": {
+         "dtype": "string",
+         "_type": "Value"
+       },
+       "_type": "Sequence"
+     },
+     "labels": {
+       "feature": {
+         "dtype": "string",
+         "_type": "Value"
+       },
+       "_type": "Sequence"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
test/test/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "a966e5e39a3a551f",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
train/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train"]}
train/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3b84a293ed7afd9641f578c760558feab774e12174775ffef3bd6d130873903
+ size 1400
train/train/dataset_info.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "citation": "",
+   "description": "",
+   "features": {
+     "sentence": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "labels": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": ""
+ }
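Note that the two dataset_info.json files differ: in the train split, 'sentence' and 'labels' are plain string values, while in the test split they are sequences of strings. A quick way to confirm this from a local clone (a sketch, assuming the layout described above):

```python
from datasets import load_from_disk

# Compare the feature schemas of the two saved splits (assumes a local clone).
train_features = load_from_disk("train")["train"].features
test_features = load_from_disk("test")["test"].features

print(train_features)  # plain string Values for 'sentence' and 'labels'
print(test_features)   # Sequence of string Values for both fields
```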
train/train/state.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "a1df46296853828f",
+   "_format_columns": null,
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }