parquet-converter committed

Commit f5a97ab
1 Parent(s): 49db1aa

Update parquet files
.gitattributes DELETED
@@ -1,39 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- UltimateArabic.csv filter=lfs diff=lfs merge=lfs -text
- UltimateArabicPrePros.csv filter=lfs diff=lfs merge=lfs -text
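For context, each deleted line above maps a glob pattern to the Git LFS filter, so matching files were stored as small LFS pointers rather than regular blobs. Below is a minimal sketch of how such patterns resolve against file names, using Python's `fnmatch` as a rough stand-in for Git's own matching rules (the pattern subset and example paths are chosen for illustration only):

```python
from fnmatch import fnmatch

# A few of the patterns from the deleted .gitattributes; each one routed
# matching files through Git LFS (filter=lfs diff=lfs merge=lfs).
LFS_PATTERNS = [
    "*.parquet",
    "*.zip",
    "UltimateArabic.csv",
    "UltimateArabicPrePros.csv",
]

def was_lfs_tracked(path: str) -> bool:
    """Rough check: does any pattern match the file's basename?

    Real gitattributes matching has extra rules (e.g. `saved_model/**/*`),
    so this is only an approximation for flat repository layouts.
    """
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(was_lfs_tracked("UltimateArabic.csv"))  # True before this commit
print(was_lfs_tracked("README.md"))           # False
```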
README.md DELETED
@@ -1,136 +0,0 @@
- 
- # Dataset Card for Ultimate Arabic News
- 
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
- 
- ## Dataset Description
- 
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
- 
- ### Dataset Summary
- 
- The Ultimate Arabic News Dataset is a collection of single-label modern Arabic texts of the kind used on news websites and in press articles.
- 
- The Arabic news data was collected by web scraping from well-known news sites such as Al-Arabiya and Al-Youm Al-Sabea (Youm7), from news published on the Google search engine, and from various other sources.
- 
- 
- ### Supported Tasks and Leaderboards
- 
- [More Information Needed]
- 
- ### Languages
- 
- [More Information Needed]
- 
- ## Dataset Structure
- 
- ### Data Instances
- 
- [More Information Needed]
- 
- ### Data Fields
- 
- [More Information Needed]
- 
- ### Data Splits
- 
- [More Information Needed]
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- [More Information Needed]
- 
- ### Source Data
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed]
- 
- #### Who are the source language producers?
- 
- [More Information Needed]
- 
- ### Annotations
- 
- #### Annotation process
- 
- [More Information Needed]
- 
- #### Who are the annotators?
- 
- [More Information Needed]
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed]
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed]
- 
- ### Discussion of Biases
- 
- [More Information Needed]
- 
- ### Other Known Limitations
- 
- [More Information Needed]
- 
- ## Additional Information
- 
- ### Dataset Curators
- 
- [More Information Needed]
- 
- ### Licensing Information
- 
- license: cc-by-4.0
- 
- 
- ### Citation Information
- ```
- @misc{aldulaimi2022ultimatearabicnews,
-   author       = {Al-Dulaimi, Ahmed Hashim},
-   title        = {Ultimate Arabic News Dataset},
-   year         = {2022},
-   month        = {05},
-   howpublished = {Mendeley Data, V1},
-   doi          = {10.17632/jz56k5wxz7.1}
- }
- ```
- 
- ### Contributions
- 
- [More Information Needed]
UltimateArabic/ultimate_arabic_news-train-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c48e172035f62a228ce1cb990b35af5b6f39b64c38fbc999368d188e80c8cc03
+ size 235993938
UltimateArabic/ultimate_arabic_news-train-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb9b766fd83de36ef64f156d8d5cbcc9ccfcbd59ad6cb69fd025ab582f119e1f
+ size 33490508
UltimateArabicPrePros/ultimate_arabic_news-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cb8ce6f4bf5bd1b3f91a7a7f0ee540916b1c49e7910aa49031798f940807431
+ size 234090096
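Each added `.parquet` entry above is a Git LFS pointer rather than the binary itself: a three-line text stub recording the LFS spec version, the SHA-256 of the actual blob, and its size in bytes. As a minimal sketch of parsing that pointer format (the commented usage at the bottom is an assumption, valid only when the file is checked out without LFS smudging):

```python
from pathlib import Path

def parse_lfs_pointer(path: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields.

    Pointer files are plain text, one `key value` pair per line, e.g.:
        version https://git-lfs.github.com/spec/v1
        oid sha256:c48e...
        size 235993938
    """
    fields = {}
    for line in Path(path).read_text().splitlines():
        key, _, value = line.partition(" ")
        if key:
            fields[key] = value
    return fields

# Example (assumes the pointer file is present locally, un-smudged):
# ptr = parse_lfs_pointer("UltimateArabic/ultimate_arabic_news-train-00000-of-00002.parquet")
# print(ptr["oid"], int(ptr["size"]))
```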
ultimate_arabic_news.py DELETED
@@ -1,108 +0,0 @@
- import csv
- import textwrap
- 
- import datasets
- 
- _DESCRIPTION = "The Ultimate Arabic News Dataset is a collection of single-label modern Arabic texts used on news websites and in press articles. The Arabic news data was collected by web scraping from well-known news sites such as Al-Arabiya and Al-Youm Al-Sabea (Youm7), from news published on the Google search engine, and from various other sources."
- 
- _CITATION = "Al-Dulaimi, Ahmed Hashim (2022), “Ultimate Arabic News Dataset”, Mendeley Data, V1, doi: 10.17632/jz56k5wxz7.1"
- 
- _HOMEPAGE = "https://data.mendeley.com/datasets/jz56k5wxz7/1"
- 
- _LICENSE = "CC BY 4.0"
- 
- _URL = {
-     "UltimateArabic": "https://data.mendeley.com/public-files/datasets/jz56k5wxz7/files/b7ca9d26-ed76-4481-bc61-cca9c90178a0/file_downloaded",
-     "UltimateArabicPrePros": "https://data.mendeley.com/public-files/datasets/jz56k5wxz7/files/a0bf3c0f-90a5-421f-874f-65e58bf2b977/file_downloaded",
- }
- 
- 
- class UAN_Config(datasets.BuilderConfig):
-     """BuilderConfig for Ultimate Arabic News."""
- 
-     def __init__(self, **kwargs):
-         """
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(UAN_Config, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
- 
- 
- class Ultimate_Arabic_News(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.1.0")
-     BUILDER_CONFIGS = [
-         UAN_Config(
-             name="UltimateArabic",
-             description=textwrap.dedent(
-                 """\
-                 UltimateArabic: A file containing more than 193,000 original Arabic news texts, without pre-processing.
-                 The texts contain words, numbers, and symbols that can be removed with pre-processing to increase
-                 accuracy in Arabic natural language processing tasks such as text classification."""
-             ),
-         ),
-         UAN_Config(
-             name="UltimateArabicPrePros",
-             description=textwrap.dedent(
-                 """\
-                 UltimateArabicPrePros: The same data as UltimateArabic, but after pre-processing: stop words,
-                 non-Arabic words, symbols, and numbers have been removed, leaving about 188,000 text documents
-                 ready for direct use in Arabic natural language processing tasks such as text classification."""
-             ),
-         ),
-     ]
- 
-     def _info(self):
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "text": datasets.Value("string"),
-                     "label": datasets.Value("string"),
-                 }
-             ),
-             # There is no canonical (input, target) pair for as_supervised=True.
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
- 
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # Download and extract only the file for the selected config;
-         # config names match the keys of _URL.
-         csv_file = dl_manager.download_and_extract(_URL[self.config.name])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs are passed to _generate_examples.
-                 gen_kwargs={"csv_file": csv_file},
-             ),
-         ]
- 
-     def _generate_examples(self, csv_file):
-         with open(csv_file, encoding="utf-8") as f:
-             for row, item in enumerate(csv.DictReader(f)):
-                 yield row, {"text": item["text"], "label": item["label"]}
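With the loading script deleted and the data republished as Parquet above, no custom builder is needed. As a minimal sketch, assuming the shards from this commit are available locally under the same paths, they can be read with the generic `parquet` builder from the `datasets` library:

```python
from datasets import load_dataset

# Read the Parquet shards added in this commit directly; the deleted
# loading script is no longer required for either config.
ds = load_dataset(
    "parquet",
    data_files={
        "train": [
            "UltimateArabic/ultimate_arabic_news-train-00000-of-00002.parquet",
            "UltimateArabic/ultimate_arabic_news-train-00001-of-00002.parquet",
        ]
    },
    split="train",
)
print(ds.features)  # expected: text and label, both string-valued
```

The UltimateArabicPrePros config works the same way with its single `UltimateArabicPrePros/ultimate_arabic_news-train.parquet` shard.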