---
license: apache-2.0
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: text
    dtype: string
  - name: timestamp
    dtype: string
  - name: url
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 3510227581
    num_examples: 836407
  download_size: 1801797299
  dataset_size: 3510227581
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
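The train split holds 836,407 examples (~3.5 GB). Below is a minimal sketch of loading it with the `datasets` library; the repo id `your-org/your-dataset` is a placeholder, not this dataset's actual Hub path:

```python
# Minimal loading sketch; replace "your-org/your-dataset" with the real repo id.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")
print(ds.features)          # Unnamed: 0 (int64), text/timestamp/url (string), label (int64)
print(ds[0]["text"][:200])  # peek at the first document
```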
Data sources come from the following categories:

1. Web crawler dataset:
   - UET website (VNU University of Engineering and Technology): tuyensinh.uet.vnu.edu.vn; new.uet.vnu.edu.vn
   - HUS website (VNU University of Science): hus.vnu.edu.vn
   - UEB website (VNU University of Economics and Business): ueb.vnu.edu.vn
   - IS website (VNU International School): is.vnu.edu.vn
   - Education website (VNU University of Education): education.vnu.edu.vn
   - VNU Press website (Vietnam National University Press): press.vnu.edu.vn

   List of crawled domains
2. Public datasets:
   - CC100: link to CC100_vi
   - Vietnews: link to the BK Vietnews dataset
   - C4_vi: link to C4_vi
The `Toxic` folder stores demo files for toxic filtering. We filtered the C4_validation dataset, the Vietnews sample dataset, and a portion (1/50) of the CC100_vi dataset. After this step, each dataset is split into a non-toxic part and a toxic part.
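The card does not describe the filtering model itself, so the sketch below illustrates the toxic/non-toxic split with a hypothetical keyword blocklist (`TOXIC_TERMS`); the repo id is likewise a placeholder:

```python
# Sketch of the toxic/non-toxic split; the blocklist stands in for whatever
# classifier the pipeline actually uses.
from datasets import load_dataset

TOXIC_TERMS = {"example_slur_1", "example_slur_2"}  # placeholder blocklist

def is_toxic(example):
    text = example["text"].lower()
    return any(term in text for term in TOXIC_TERMS)

ds = load_dataset("your-org/your-dataset", split="train")  # placeholder repo id
toxic_part = ds.filter(is_toxic)                           # -> Toxic folder
nontoxic_part = ds.filter(lambda ex: not is_toxic(ex))     # kept for training
```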
The `Dedup` folder stores the files produced by deduplicating the files above.
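The deduplication method is not specified here; the sketch below assumes exact-match deduplication via an MD5 hash of the `text` field (the real pipeline may use fuzzier matching such as MinHash):

```python
# Exact-dedup sketch: keep only the first occurrence of each distinct text.
import hashlib

from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")  # placeholder repo id
seen = set()

def is_new(example):
    digest = hashlib.md5(example["text"].encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

# The predicate is stateful, so skip the cache to avoid stale results.
deduped = ds.filter(is_new, load_from_cache_file=False)
```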
The `Toxic_2`, `Dedup_2`, and `Tokenized_2` folders hold the results of a second run of the same pipeline, executed on 17 files of the C4_vi dataset containing roughly 1B tokens.
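As a sketch of the tokenization pass and the ~1B-token count, the snippet below tokenizes the `text` column and sums token counts; the tokenizer choice (`vinai/phobert-base`, a Vietnamese model) and the repo id are assumptions, not the pipeline's documented settings:

```python
# Tokenize and count tokens per document; tokenizer choice is an assumption.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("your-org/your-dataset", split="train")  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

def count_tokens(batch):
    encodings = tokenizer(batch["text"], truncation=False)
    return {"n_tokens": [len(ids) for ids in encodings["input_ids"]]}

ds = ds.map(count_tokens, batched=True)
print(f"{sum(ds['n_tokens']):,} tokens total")
```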