---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: news
path: data/news-*
- split: telegram_blogs
path: data/telegram_blogs-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: source
dtype: string
splits:
- name: news
num_bytes: 3272404822
num_examples: 964268
- name: telegram_blogs
num_bytes: 248666870
num_examples: 227337
download_size: 1581389108
dataset_size: 3521071692
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 1M<n<10M
pretty_name: UzCrawl
tags:
- uz
- crawl
- telegram_blogs
---
# Dataset Card for UzCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://tahrirchi.uz/grammatika-tekshiruvi](https://tahrirchi.uz/grammatika-tekshiruvi)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.58 GB
- **Size of the generated dataset:** 3.52 GB
- **Total amount of disk used:** 5.10 GB
### Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzCrawl dataset, a web and Telegram crawl corpus consisting of materials from nearly 1.2 million unique sources in the Uzbek language.
Please refer to our [blogpost](https://tahrirchi.uz/grammatika-tekshiruvi) and paper (coming soon!) for further details.
To load and use the dataset, run this script:
```python
from datasets import load_dataset

uz_crawl = load_dataset("tahrirchi/uz-crawl")
```
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.58 GB
- **Size of the generated dataset:** 3.52 GB
- **Total amount of disk used:** 5.10 GB
An example of 'news' looks as follows.
```
{
'text': "O‘zbekiston Respublikasi Vazirlar Mahkamasining 2019 yil 24 iyuldagi 620-son qarori bilan tasdiqlangan «Xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish tartibi to‘g‘risida»gi Nizom ijrosini ta'minlash maqsadida Ta'lim sifatini nazorat qilish davlat inspeksiyasida (Toshkent shahar, Chilonzor tumani, Nurxon ko‘chasi, 21-uy) 2019 yil 9 –14 sentabr kunlari sohalar bo‘yicha sinov testlari bo‘lib o‘tishi rejalashtirilgan.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi matbuot xizmati xabariga\xa0ko‘ra, «Huquqshunoslik», «Sog‘liqni saqlash va ijtimoiy ta'minot», «Iqtisodiyot», «Qishloq xo‘jaligi, muhandislik, ishlov berish va qurilish» hamda «O‘qituvchilar tayyorlash va pedagogik fanlar» sohalari bo‘yicha sinov testlari o‘tkaziladigan sanasi va sinov testida ishtirok etuvchilar ro‘yxati jadvalga muvofiq belgilanadi.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi ogohlantirishicha, xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish uchun belgilangan sinov testlariga o‘z vaqtida kelmagan, sinov testida ishtirok etuvchilar ro‘yxatida mavjud bo‘lmagan talabgorlarga sinovlarga kirishga ruxsat etilmaydi.",
'timestamp': '2019-06-09',
'source': 'https://kun.uz/uz/news/2019/09/06/xorijda-talim-olganlik-togrisidagi-hujjatlarni-tan-olish-uchun-testlar-otkaziladigan-kunlar-malum-boldi'
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature containing the text of the material.
- `timestamp`: a `string` feature containing the timestamp of the material.
- `source`: a `string` feature containing the URL of the material.
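Since `timestamp` and `source` are stored as plain strings, downstream code typically parses them itself. A minimal sketch using only the standard library, assuming timestamps consistently follow the `YYYY-MM-DD` pattern seen in the example instance (the record here is an abbreviated stand-in, not a real row):

```python
from datetime import datetime
from urllib.parse import urlparse

# A record shaped like the instance shown above (text shortened for illustration).
record = {
    "text": "O‘zbekiston Respublikasi Vazirlar Mahkamasining ...",
    "timestamp": "2019-06-09",
    "source": "https://kun.uz/uz/news/2019/09/06/xorijda-talim-olganlik",
}

# Parse the timestamp, assuming the YYYY-MM-DD format of the example instance.
date = datetime.strptime(record["timestamp"], "%Y-%m-%d").date()

# The source field is a full URL, so the hosting site can be recovered.
site = urlparse(record["source"]).netloc

print(date.year, site)  # 2019 kun.uz
```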
### Data Splits
| name            | num_examples |
|-----------------|-------------:|
| news            |       964268 |
| telegram_blogs  |       227337 |
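The two splits together account for the "nearly 1.2 million" sources mentioned in the summary; the proportions can be checked directly from the counts in the table:

```python
# Example counts per split, as listed in the table above.
splits = {"news": 964268, "telegram_blogs": 227337}

total = sum(splits.values())          # 1,191,605 examples overall
news_share = splits["news"] / total   # fraction of examples from news sites

print(total)                 # 1191605
print(round(news_share, 3))  # 0.809
```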
## Dataset Creation
The news portion was crawled from 21 different websites using the [Scrapy](https://scrapy.org/) framework. The telegram_blogs portion consists of manually curated texts from 81 high-quality Telegram channels.
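The card does not describe the cleaning pipeline in detail, but the "unique sources" phrasing in the summary suggests deduplication by URL. Below is a hypothetical sketch of one such pass; the field names follow the dataset schema, while the dedup step itself is an assumption, not the authors' documented pipeline:

```python
def dedup_by_source(records):
    """Keep only the first record seen for each source URL.

    Illustrative post-crawl step; not the authors' documented pipeline.
    """
    seen = set()
    unique = []
    for rec in records:
        if rec["source"] not in seen:
            seen.add(rec["source"])
            unique.append(rec)
    return unique

# Toy records with one duplicated URL (hypothetical, for illustration only).
records = [
    {"text": "a", "source": "https://kun.uz/uz/news/1"},
    {"text": "b", "source": "https://kun.uz/uz/news/1"},  # duplicate URL
    {"text": "c", "source": "https://kun.uz/uz/news/2"},
]
print(len(dedup_by_source(records)))  # 2
```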
## Citation
Please cite this dataset using the following format:
```
@online{Mamasaidov2023UzCrawl,
author = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
title = {UzCrawl dataset},
year = {2023},
url = {https://huggingface.co/datasets/tahrirchi/uz-crawl},
note = {Accessed: 2023-10-28}, % change this date
urldate = {2023-10-28} % change this date
}
```
## Gratitude
We are thankful to these awesome organizations and people for helping to make it happen:
- [Asadbek Kiyomov](https://www.linkedin.com/in/asadbey): for his work at the beginning of the project.
- [Ilya Gusev](https://github.com/IlyaGusev/): for his advice throughout the process.
- [David Dale](https://daviddale.ru): for his advice throughout the process.
## Contacts
We believe that this work will inspire enthusiasts around the world to uncover the hidden beauty of low-resource languages, in particular Uzbek.
For further development of, or issues with, the dataset, please contact [email protected] or [email protected].