parquet-converter committed on
Commit
b690c79
1 Parent(s): d653ddb

Update parquet files
.gitattributes DELETED
@@ -1,31 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- final_dataset.csv filter=lfs diff=lfs merge=lfs -text
- evaluation.csv filter=lfs diff=lfs merge=lfs -text
- train.csv filter=lfs diff=lfs merge=lfs -text
- test.csv filter=lfs diff=lfs merge=lfs -text
data/validation-00000-of-00001.parquet → GonzaloA--fake_news/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ebb16798a2dee69558495c3641a68cdab738dd190e464ae50d288b5013c5e84a
- size 12986224
+ oid sha256:2f6ccb3103fb78e6084262d9c0af8ab4db7347dae55f9bbcfe51b5eccf57f011
+ size 13070752
evaluation.csv → GonzaloA--fake_news/parquet-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1ebaeb8e471275fdcb5da316a6bcfaf978c6ff6147be6148e44b00ca0f09717c
- size 20476561
+ oid sha256:0f9863bb585e1b0bb47bec8400361c30f37b105ddf292346126cc42f9127323e
+ size 38833345
data/test-00000-of-00001.parquet → GonzaloA--fake_news/parquet-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f34a71018d51feda81fc657432c3287c3b09af6b5a3b1b67a945938d1a57999
- size 13044162
+ oid sha256:7db8c90296ef20c5535f98e038eabe9fa4dbecc8f83db8e768844edc1cee6683
+ size 13021472
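Each block above is a Git LFS pointer file: three lines recording the spec version, the SHA-256 digest of the real object, and its size in bytes. A minimal sketch of checking such a pointer against a local file (the `parse_pointer` and `verify_pointer` helpers are illustrative, not part of Git LFS):

```python
import hashlib

def parse_pointer(text):
    """Split a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_pointer(pointer_text, path):
    """Return True if the file at `path` matches the pointer's oid and size."""
    fields = parse_pointer(pointer_text)
    algo, _, digest = fields["oid"].partition(":")
    if algo != "sha256":
        raise ValueError("unsupported hash algorithm: " + algo)
    with open(path, "rb") as f:
        data = f.read()
    return (hashlib.sha256(data).hexdigest() == digest
            and len(data) == int(fields["size"]))
```

For objects of tens of megabytes, as here, hashing in chunks with `hashlib.sha256().update()` would avoid holding the whole file in memory.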
README.md DELETED
@@ -1,160 +0,0 @@
- TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - found
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - 30k<n<50k
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - fact-checking
- - intent-classification
- pretty_name: GonzaloA / Fake News
- ---
-
- # Dataset Card for [Fake_News_TFG]
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:** [GonzaloA / fake_news]
- - **Paper:** [Title of the bachelor's thesis (TFG)]
- - **Leaderboard:**
- - **Point of Contact:** [Gonzalo Álvarez Hervás](mailto:[email protected])
-
- ### Dataset Summary
-
- The GonzaloA / Fake_News_TFG repository is an English-language dataset containing just over 45k unique news articles, each classified as true (1) or false (0). This first version accompanies a study of fake-news identification using Transformer models.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- The dataset is in English as generally spoken in the United States; its BCP-47 code is en-US.
-
- ## Dataset Structure
-
- The dataset contains 40,587 news records. Each record has three fields: the title of the article, the text (content) of the article, and the label indicating whether the article is fake (0) or true (1).
-
- ### Data Instances
-
- For each instance there is a string for the title, a string for the article text, and a label marking it as true or false. See the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=fake_news&config=3.0.0) to explore more examples.
-
- ```
- {'id': '1',
-  'title': 'Palestinians switch off Christmas lights in Bethlehem in anti-Trump protest',
-  'text': 'RAMALLAH, West Bank (Reuters) - Palestinians switched off Christmas lights at Jesus traditional birthplace in Bethlehem on Wednesday night in protest at U.S. President Donald Trump s decision to recognize Jerusalem as Israel s capital. A Christmas tree adorned with lights outside Bethlehem s Church of the Nativity, where Christians believe Jesus was born, and another in Ramallah, next to the burial site of former Palestinian leader Yasser Arafat, were plunged into darkness. The Christmas tree was switched off on the order of the mayor today in protest at Trump s decision, said Fady Ghattas, Bethlehem s municipal media officer. He said it was unclear whether the illuminations would be turned on again before the main Christmas festivities. In a speech in Washington, Trump said he had decided to recognize Jerusalem as Israel s capital and move the U.S. embassy to the city. Israeli Prime Minister Benjamin Netanyahu said Trump s move marked the beginning of a new approach to the Israeli-Palestinian conflict and said it was an historic landmark . Arabs and Muslims across the Middle East condemned the U.S. decision, calling it an incendiary move in a volatile region and the European Union and United Nations also voiced alarm at the possible repercussions for any chances of reviving Israeli-Palestinian peacemaking.',
-  'label': '1'}
- ```
-
- ### Data Fields
-
- - `id`: an integer row index
- - `title`: a string summarizing the article
- - `text`: a string containing the article body
- - `label`: an integer marking the article as true (1) or fake (0)
-
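The fields above can be captured in a small schema check. A minimal sketch (the `validate_record` helper is ours, not part of the dataset tooling; it assumes `label` is stored as an integer 0/1, matching the repository's `dataset_infos.json`):

```python
def validate_record(record):
    """Return True if a dict matches the documented fake_news fields."""
    return (
        isinstance(record.get("id"), int)          # integer row index
        and isinstance(record.get("title"), str)   # article headline
        and isinstance(record.get("text"), str)    # article body
        and record.get("label") in (0, 1)          # 1 = true, 0 = fake
    )
```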
- ### Data Splits
-
- The GonzaloA/fake_news dataset has 3 splits: train, validation, and test. Below are the statistics for version 1.0 of the dataset:
-
- | Dataset Split | Number of Instances in Split |
- | ------------- | ---------------------------- |
- | Train         | 24,353                       |
- | Validation    | 8,117                        |
- | Test          | 8,117                        |
-
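The split counts above amount to a roughly 60/20/20 partition; a quick check of the arithmetic:

```python
splits = {"train": 24_353, "validation": 8_117, "test": 8_117}
total = sum(splits.values())  # 40,587 records, matching the Dataset Structure section
fractions = {name: round(n / total, 2) for name, n in splits.items()}
# fractions == {'train': 0.6, 'validation': 0.2, 'test': 0.2}
```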
- ## Dataset Creation
-
- This dataset was created with Python, using the pandas library for the main data processing. It merges several existing datasets with the same scope, fake news. The full process is available in this repository: https://github.com/G0nz4lo-4lvarez-H3rv4s/FakeNewsDetection
-
- ### Source Data
- The source data is a mix of several fake-news datasets from Kaggle, a platform for practicing and learning about artificial intelligence. The main datasets this one is based on are:
-
- #### Initial Data Collection and Normalization
-
- Version 1.0.0 aims to support supervised deep-learning methodologies and the study of new Transformer models for natural language processing with news from the United States.
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
- This dataset has three splits: a training split for training an NLP model, a validation split for checking whether training succeeded or the model is overfitting, and a test split for measuring the errors of the fine-tuned model.
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
data/train-00000-of-00001.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:13f6fccc352958149aa1c752f2ad8acd054d5331092c322fb0b0e7b9e53b29d3
- size 38790951
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"GonzaloA--fake_news": {"description": "", "citation": "", "homepage": "", "license": "", "features": {"Unnamed: 0": {"dtype": "int64", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "int64", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "csv", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 62966324, "num_examples": 24353, "dataset_name": "fake_news"}, "validation": {"name": "validation", "num_bytes": 21099303, "num_examples": 8117, "dataset_name": "fake_news"}, "test": {"name": "test", "num_bytes": 21187575, "num_examples": 8117, "dataset_name": "fake_news"}}, "download_checksums": null, "download_size": 64821337, "post_processing_size": null, "dataset_size": 105253202, "size_in_bytes": 170074539}}
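The deleted `dataset_infos.json` pins per-split byte and example counts; a small consistency check that the splits sum to the reported `dataset_size` (only the relevant JSON fragment is reproduced):

```python
import json

# fragment of the deleted dataset_infos.json, field names as in the file
infos = json.loads("""
{"splits": {
   "train":      {"num_bytes": 62966324, "num_examples": 24353},
   "validation": {"num_bytes": 21099303, "num_examples": 8117},
   "test":       {"num_bytes": 21187575, "num_examples": 8117}},
 "dataset_size": 105253202}
""")

byte_total = sum(s["num_bytes"] for s in infos["splits"].values())
example_total = sum(s["num_examples"] for s in infos["splits"].values())
assert byte_total == infos["dataset_size"]  # splits account for the full dataset
```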
test.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:dbaf293d01dd6c8ce1848a4d14ef090cf5a41b88d360e73d750906b699c26302
- size 20919098
train.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:85490f7a8063877d8045f7a1b15f099578ef040bc98549289994433d3e2bb5cc
- size 63286666