Update README.md

README.md
@@ -34,7 +34,107 @@ dataset_info:
    num_examples: 51275
  download_size: 593012664
  dataset_size: 1115284701
task_categories:
- text-classification
- fill-mask
- text-generation
language:
- uz
tags:
- uz
- news
pretty_name: UzbekTextClassification
size_categories:
- 100K<n<1M
---
# Dataset Card for "uzbek_news"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://doi.org/10.5281/zenodo.7677431](https://doi.org/10.5281/zenodo.7677431)
- **Repository:** [https://github.com/elmurod1202/TextClassification](https://github.com/elmurod1202/TextClassification)
- **Paper:** [Text classification dataset and analysis for Uzbek language](https://arxiv.org/pdf/2302.14494)
- **Point of Contact:** see the [Contact](#contact) section below
- **Size of downloaded dataset files:** 593 MB
- **Size of the generated dataset:** 1115 MB
- **Total amount of disk used:** 1708 MB

### Dataset Summary

A multi-class text classification dataset for the Uzbek language, together with source code for analysis. This repository contains the code and dataset used for text classification analysis for the Uzbek language. The dataset consists of text data from nine Uzbek news websites and press portals that include news articles and press releases. These websites were selected to cover various categories such as politics, sports, entertainment, technology, and others. In total, we collected 512,750 articles with over 120 million words across 15 distinct categories, which provides a large and diverse corpus for text classification. Note that all text in the corpus is written in the Latin script.

Please refer to our [paper](https://arxiv.org/pdf/2302.14494) and [GitHub repository](https://github.com/elmurod1202/TextClassification) for further details.

Disclaimer: The team releasing UzTextClassification did not write this dataset card. This is the Hugging Face version of the dataset, created mainly for ease of access. The original dataset files can be accessed and downloaded from https://doi.org/10.5281/zenodo.7677431.
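
For quick experimentation, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the repo id `elmurod1202/uzbek_news` is an assumption for illustration, so substitute this dataset's actual Hugging Face path:

```python
from datasets import load_dataset

# NOTE: the repo id below is an assumption for illustration;
# replace it with this dataset's actual Hugging Face path.
dataset = load_dataset("elmurod1202/uzbek_news")

print(dataset)                 # DatasetDict with train / validation / test splits
example = dataset["train"][0]
print(example["label"])        # integer class id in [0, 14]
print(example["text"][:200])   # first 200 characters of the article
```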

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 593 MB
- **Size of the generated dataset:** 1115 MB
- **Total amount of disk used:** 1708 MB

An example of 'train' looks as follows.
```
{
  "label": 14,
  "text": "Samsung Galaxy S21 Ultra eng yaxshi kamerofonlar reytingida 17-o‘rinni egalladi DxOMark laboratoriyasi mutaxassislari Samsung Galaxy S21 Ultra’ning asosiy ..."
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `text`: a `string` feature.
- `label`: a classification label, with possible values including 'Avto' (0), 'Ayollar' (1), 'Dunyo' (2), 'Foto' (3), 'Iqtisodiyot' (4), 'Jamiyat' (5), 'Jinoyat' (6), 'Madaniyat' (7), 'O‘zbekiston' (8), 'Pazandachilik' (9), 'Qonunchilik' (10), 'Salomatlik' (11), 'Siyosat' (12), 'Sport' (13), 'Texnologiya' (14).
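
The integer ids can be converted to and from the category names above. A small sketch, under the same assumed repo id, and assuming `label` is stored as a `ClassLabel` feature (as the listing above suggests):

```python
from datasets import load_dataset

dataset = load_dataset("elmurod1202/uzbek_news")  # repo id assumed, as above

# Assumes `label` is a ClassLabel feature, as the card's label listing suggests.
label_feature = dataset["train"].features["label"]
print(label_feature.int2str(14))       # -> 'Texnologiya'
print(label_feature.str2int("Sport"))  # -> 13
print(label_feature.names)             # all 15 category names, indexed by label id
```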

### Data Splits

| name    |  train | validation |  test |
|---------|-------:|-----------:|------:|
| default | 410200 |      51275 | 51275 |
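
The split sizes above can be cross-checked programmatically; a quick sanity-check sketch under the same repo-id assumption:

```python
from datasets import load_dataset

dataset = load_dataset("elmurod1202/uzbek_news")  # repo id assumed, as above

# Print the number of rows per split.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
# Expected from the table above: train 410200, validation 51275, test 51275
```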

### Citation Information

```
@proceedings{kuriyozov_elmurod_2023_7677431,
  title     = {{Text classification dataset and analysis for Uzbek language}},
  year      = 2023,
  publisher = {Zenodo},
  month     = feb,
  doi       = {10.5281/zenodo.7677431},
  url       = {https://doi.org/10.5281/zenodo.7677431}
}
```

### Contact

For any questions or issues related to the dataset or code, please contact [email protected] or [email protected].