SugoiLoki committed on
Commit
f876176
1 Parent(s): 1f6df18

Update README.md
One-to-One Korean-English Parallel Sentence Dataset
This dataset contains a collection of parallel sentences in Korean and English. It has been compiled from multiple sources to support training and evaluation of machine translation models.

Dataset Overview
Size: 3,332,436 sentence pairs (Korean-English), as listed in the dataset metadata (num_examples)
Columns:
kor_sent: Korean sentence
eng_sent: English sentence
source, similarity, from: per-pair provenance and similarity metadata (see the dataset metadata below)
Purpose: The dataset is designed for training and evaluating machine translation models between Korean and English, in both directions.
Dataset Sources
This dataset is constructed from multiple publicly available parallel corpora:

PAWS-X (Korean & English): PAWS-X is a cross-lingual paraphrase identification dataset whose professionally translated sentences, in several languages including Korean and English, provide high-quality parallel pairs.

OPUS-100 (Korean-English): An English-centric multilingual corpus sampled from the OPUS collection, covering 100 language pairs and widely used in machine translation research.

Sentence-Transformer Parallel En-Ko with Similarity: A parallel corpus used for training models in sentence embedding and similarity tasks, focusing on English and Korean.

IWSLT2017 (Korean-English): The IWSLT 2017 dataset provides high-quality sentence pairs collected from spoken language datasets used for translation and interpretation research.

Each dataset brings a unique style and variation of parallel sentences, making this collection highly diverse and applicable for various NLP and machine translation tasks.
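Combining several source corpora into one table while keeping track of provenance might look like the sketch below (pandas; the per-source file names are hypothetical, and the stand-in rows replace the real per-corpus CSVs, but the `source` column mirrors the one in this dataset's metadata):

```python
import pandas as pd

frames = []
for name in ["paws_x", "opus_100", "st_parallel", "iwslt2017"]:
    # In a real build: df = pd.read_csv(f"{name}.csv")  # hypothetical file names
    df = pd.DataFrame({"kor_sent": ["안녕하세요"], "eng_sent": ["Hello"]})  # stand-in row
    df["source"] = name  # record where each pair came from
    frames.append(df)

# One row per sentence pair, with provenance preserved across all sources
merged = pd.concat(frames, ignore_index=True)
```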

File Format
The dataset is provided as a CSV file, where each row contains:

A sentence in Korean in the kor_sent column.
The corresponding English sentence in the eng_sent column.
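Reading a CSV with this two-column layout is straightforward; a minimal sketch with pandas (the in-memory string stands in for the real file, whose name is not specified here):

```python
import io
import pandas as pd

# Tiny in-memory stand-in with the same two columns as the dataset's CSV
csv_text = "kor_sent,eng_sent\n안녕하세요,Hello\n감사합니다,Thank you\n"

# For the real file, pass its path to pd.read_csv instead of a StringIO
df = pd.read_csv(io.StringIO(csv_text))
pairs = list(zip(df["kor_sent"], df["eng_sent"]))
```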

Training Models
This dataset is well suited to training machine translation models, for example multilingual sequence-to-sequence models such as mBART-50. You can use it for training, fine-tuning, or evaluating a translation model between English and Korean; encoder-only models such as BERT are a better fit for filtering or similarity scoring over these pairs than for translation itself.
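For mBART-50-style training, each pair is tagged with source- and target-language codes (ko_KR and en_XX are the actual mBART-50 codes). The sketch below shows only the shape of a training example with plain strings; a real pipeline would use transformers' MBart50TokenizerFast, which handles these codes internally:

```python
# mBART-50 language codes for Korean and English
SRC_LANG, TGT_LANG = "ko_KR", "en_XX"

def make_example(kor_sent: str, eng_sent: str) -> dict:
    """Pair a Korean source with its English target, tagged with language codes."""
    return {
        "src": f"{SRC_LANG} {kor_sent}",
        "tgt": f"{TGT_LANG} {eng_sent}",
    }

example = make_example("안녕하세요", "Hello")
```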

Data Quality
Length of Sentences: The dataset contains a mix of short and long sentences, making it useful for various training purposes.
Text Cleaning: Text has been cleaned to remove unnecessary characters and formatting.
Language: The dataset focuses on conversational and general text, so it may not contain specialized terminology.
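The exact cleaning rules used are not documented; as an illustration only, a cleaning pass of the kind described above (dropping stray control/format characters and collapsing whitespace) might look like this:

```python
import re
import unicodedata

def clean_sentence(text: str) -> str:
    """Illustrative cleaning: normalize Unicode, replace control/format
    characters with spaces, then collapse runs of whitespace."""
    text = unicodedata.normalize("NFC", text)
    # Unicode categories starting with "C" are control/format characters
    text = "".join(" " if unicodedata.category(ch).startswith("C") else ch
                   for ch in text)
    return re.sub(r"\s+", " ", text).strip()
```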
Ethical Considerations
While the data has been sourced from publicly available datasets, it’s important to recognize that:

The dataset should not be used to generate harmful, abusive, or unethical content.
Use of this dataset must comply with the licenses and terms of the original source datasets.

Files changed (1): README.md (+36 −28)
README.md CHANGED
```diff
@@ -1,28 +1,36 @@
----
-license: apache-2.0
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: kor_sent
-    dtype: string
-  - name: eng_sent
-    dtype: string
-  - name: source
-    dtype: string
-  - name: similarity
-    dtype: float64
-  - name: from
-    dtype: string
-  - name: __index_level_0__
-    dtype: float64
-  splits:
-  - name: train
-    num_bytes: 784539402
-    num_examples: 3332436
-  download_size: 374217193
-  dataset_size: 784539402
----
+---
+license: apache-2.0
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+dataset_info:
+  features:
+  - name: kor_sent
+    dtype: string
+  - name: eng_sent
+    dtype: string
+  - name: source
+    dtype: string
+  - name: similarity
+    dtype: float64
+  - name: from
+    dtype: string
+  - name: __index_level_0__
+    dtype: float64
+  splits:
+  - name: train
+    num_bytes: 784539402
+    num_examples: 3332436
+  download_size: 374217193
+  dataset_size: 784539402
+task_categories:
+- translation
+language:
+- en
+- ko
+pretty_name: loki
+size_categories:
+- 1M<n<10M
+---
```