Files changed (1)
  1. README.md +4 -235
README.md CHANGED
@@ -1,235 +1,4 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - sentiment-classification
- pretty_name: YelpReviewFull
- license_details: yelp-licence
- dataset_info:
-   config_name: yelp_review_full
-   features:
-   - name: label
-     dtype:
-       class_label:
-         names:
-           '0': 1 star
-           '1': 2 star
-           '2': 3 stars
-           '3': 4 stars
-           '4': 5 stars
-   - name: text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 483811554
-     num_examples: 650000
-   - name: test
-     num_bytes: 37271188
-     num_examples: 50000
-   download_size: 322952369
-   dataset_size: 521082742
- configs:
- - config_name: yelp_review_full
-   data_files:
-   - split: train
-     path: yelp_review_full/train-*
-   - split: test
-     path: yelp_review_full/test-*
-   default: true
- train-eval-index:
- - config: yelp_review_full
-   task: text-classification
-   task_id: multi_class_classification
-   splits:
-     train_split: train
-     eval_split: test
-   col_mapping:
-     text: text
-     label: target
-   metrics:
-   - type: accuracy
-     name: Accuracy
-   - type: f1
-     name: F1 macro
-     args:
-       average: macro
-   - type: f1
-     name: F1 micro
-     args:
-       average: micro
-   - type: f1
-     name: F1 weighted
-     args:
-       average: weighted
-   - type: precision
-     name: Precision macro
-     args:
-       average: macro
-   - type: precision
-     name: Precision micro
-     args:
-       average: micro
-   - type: precision
-     name: Precision weighted
-     args:
-       average: weighted
-   - type: recall
-     name: Recall macro
-     args:
-       average: macro
-   - type: recall
-     name: Recall micro
-     args:
-       average: micro
-   - type: recall
-     name: Recall weighted
-     args:
-       average: weighted
- ---
- ---
-
- # Dataset Card for YelpReviewFull
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Yelp](https://www.yelp.com/dataset)
- - **Repository:** [Crepe](https://github.com/zhangxiangxiao/Crepe)
- - **Paper:** [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626)
- - **Point of Contact:** [Xiang Zhang](mailto:[email protected])
-
- ### Dataset Summary
-
- The Yelp reviews dataset consists of reviews from Yelp.
- It is extracted from the Yelp Dataset Challenge 2015 data.
-
- ### Supported Tasks and Leaderboards
-
- - `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment.
-
- ### Languages
-
- The reviews were mainly written in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A typical data point consists of a text and the corresponding label.
-
- An example from the YelpReviewFull test set looks as follows:
- ```
- {
-     'label': 0,
-     'text': 'I got \'new\' tires from them and within two weeks got a flat. I took my car to a local mechanic to see if i could get the hole patched, but they said the reason I had a flat was because the previous patch had blown - WAIT, WHAT? I just got the tire and never needed to have it patched? This was supposed to be a new tire. \\nI took the tire over to Flynn\'s and they told me that someone punctured my tire, then tried to patch it. So there are resentful tire slashers? I find that very unlikely. After arguing with the guy and telling him that his logic was far fetched he said he\'d give me a new tire \\"this time\\". \\nI will never go back to Flynn\'s b/c of the way this guy treated me and the simple fact that they gave me a used tire!'
- }
- ```
-
- ### Data Fields
-
- - 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n".
- - 'label': Corresponds to the score associated with the review (between 1 and 5).
-
- ### Data Splits
-
- The Yelp reviews full star dataset is constructed by randomly taking 130,000 training samples and 10,000 testing samples for each review star from 1 to 5.
- In total there are 650,000 training samples and 50,000 testing samples.
-
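The split layout and the 0-based `class_label` names declared in the removed card's YAML header imply a simple index-to-score mapping, which can be illustrated in plain Python (the names `STAR_NAMES` and `label_to_stars` are my own, not part of the card):

```python
# Mapping copied from the class_label names in the card's YAML header.
STAR_NAMES = {0: "1 star", 1: "2 star", 2: "3 stars", 3: "4 stars", 4: "5 stars"}

def label_to_stars(label: int) -> int:
    """Convert the 0-based class index to the 1-5 star score."""
    return label + 1

print(label_to_stars(0))  # the example instance above is a 1-star review
```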
- ## Dataset Creation
-
- ### Curation Rationale
-
- The Yelp reviews full star dataset is constructed by Xiang Zhang ([email protected]) from the Yelp Dataset Challenge 2015. It was first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- You can check the official [yelp-dataset-agreement](https://s3-media3.fl.yelpcdn.com/assets/srv0/engineering_pages/bea5c1e92bf3/assets/vendor/yelp-dataset-agreement.pdf).
-
- ### Citation Information
-
- Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
-
- ### Contributions
-
- Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
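The quote and newline escaping described under "Data Fields" in the removed card can be sketched in plain Python. This is only an illustration of the scheme as the card states it (enclosing double quotes, internal quotes doubled, newlines stored as a literal `\n`); the helper name is my own and the behavior is not verified against the raw files:

```python
def unescape_review(raw: str) -> str:
    """Undo the escaping the card describes: fields are wrapped in
    double quotes, internal quotes are doubled (""), and newlines
    are stored as a literal backslash followed by "n"."""
    text = raw
    if text.startswith('"') and text.endswith('"'):
        text = text[1:-1]               # drop the enclosing double quotes
    text = text.replace('""', '"')      # un-double internal quotes
    text = text.replace('\\n', '\n')    # literal \n -> real newline
    return text

print(unescape_review('"Great tires!\\nWould go ""again"" anytime."'))
```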
 
+ # Classify-Species-from-the-Orchid-Flowers
+ A CNN classification model that classifies species from the Orchid Flowers dataset.
+ Build a CNN classification model on the Orchid Flowers dataset, using the train and validation data for their respective purposes (no testing is needed).
+ https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/0HNECY
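The new README only names the task, not an architecture. A minimal CNN along those lines could look like the following PyTorch sketch. Everything here is an assumption for illustration: the class count (`num_classes=52`), the 224x224 RGB input size, and the model shape are placeholders, not taken from the dataset:

```python
import torch
import torch.nn as nn

class OrchidCNN(nn.Module):
    """Small CNN sketch for species classification.

    num_classes is a placeholder -- set it to the actual number of
    orchid species in the downloaded dataset.
    """
    def __init__(self, num_classes: int = 52):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1) regardless of input size
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Shape check on a random batch of two 224x224 RGB images.
model = OrchidCNN()
logits = model(torch.randn(2, 3, 224, 224))
print(tuple(logits.shape))  # (2, 52)
```

Training would then fit this model on the train split and monitor accuracy on the validation split, as the README asks.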