---
license: cc0-1.0
size_categories:
- 10B<n<100B
multilinguality:
- monolingual
source_datasets:
- OSCAR (fa)
- CommonCrawl
- Leipzig
- VOA Persian
- Persian poems corpus
- Web to Corpus
- "TEP: Tehran English-Persian parallel corpus"
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
language:
- fa
pretty_name: Jomleh
---
# Dataset Card for "jomleh"

## Dataset Summary
"Jomleh" is a high-quality Farsi language dataset consisting of sentences that have been carefully preprocessed to ensure they contain only Farsi characters and no contamination from other languages. The data has been sourced from multiple sources and undergone a deduplication process to ensure that each sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Each sample in "Jomleh" is a sentence, making it a valuable resource for natural language processing tasks and language modeling.
## Source Data
The data used to curate Jomleh is taken from the following sources:
- OSCAR (fa)
- CommonCrawl
- Leipzig
- VOA Persian
- Persian poems corpus
- Web to Corpus
- TEP: Tehran English-Persian parallel corpus
## Layout and Structure
The dataset is composed of 60 JSON Lines files. Since the samples are spread across these files uniformly at random, the number of samples per file is not fixed, but generally speaking each file holds roughly the same number of samples.

Each line of a file is one sample, formatted as JSON with the following layout:
```json
{
    "id": <integer>,
    "text": "<A Farsi sentence>",
    "source": "<One of the aforementioned sources>"
}
```
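For illustration, here is a minimal sketch of iterating over one of these files in Python; the file name below is a placeholder, not an actual file name from the dataset:

```python
import json

# Read samples from one of the 60 JSON Lines files.
# "jomleh_01.jsonl" is a placeholder name for illustration only.
with open("jomleh_01.jsonl", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)
        print(sample["id"], sample["source"], sample["text"])
```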
## Data curation process
The value of this dataset lies in its preprocessing of the text. The main struggle when working with Farsi text is that, for historical reasons, there are many different encodings in use for storing it. On top of that comes the complexity of dealing with multiple character codes for the same letter. In Farsi, the visual form of a character depends on its neighbouring characters. For example, consider "Ye", the final letter of the Farsi alphabet.

It has a standalone form:

ﯼ

But when surrounded by other characters, its medial form is used:

ﯿ
Displaying the correct form is normally handled by a font's glyph substitution table, which shows the right shape of each word. At the same time, though, some text does not rely on the font and directly uses the specific code defined for the medial form. From the reader's point of view the two look identical, but printing the underlying codes yields different numbers. This complicates text processing in Farsi, since each character needs to be identified by a unique code regardless of its position in the word. On top of that, add the problem of Farsi text typed with Arabic characters: since the two languages share very similar alphabets, such text can still be read successfully because the letters look nearly identical.
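To make this concrete, the following sketch prints the code points behind the generic letter "Ye" and its isolated and medial presentation forms, all of which render nearly identically:

```python
import unicodedata

# The generic Farsi "Ye" and its presentation forms look alike,
# but each carries a distinct code point.
for ch in ["ی", "ﯼ", "ﯿ"]:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# U+06CC ARABIC LETTER FARSI YEH
# U+FBFC ARABIC LETTER FARSI YEH ISOLATED FORM
# U+FBFF ARABIC LETTER FARSI YEH MEDIAL FORM
```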
To address these problems, the preprocessing used in Jomleh does its best to map all look-alike characters to their Farsi counterparts. This is not an exact science; it is a best-effort normalization. The same applies to digits and punctuation.
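A minimal sketch of what such a best-effort mapping can look like in Python; the table below covers only a handful of common cases and is not the exact table used to build Jomleh:

```python
# Illustrative subset of a look-alike normalization table; the real
# mapping used for Jomleh is more extensive.
NORMALIZATION_TABLE = str.maketrans({
    "ي": "ی",  # Arabic Yeh (U+064A) -> Farsi Yeh (U+06CC)
    "ك": "ک",  # Arabic Kaf (U+0643) -> Farsi Kaf (U+06A9)
    "٠": "۰",  # Arabic-Indic zero (U+0660) -> Farsi zero (U+06F0)
    "١": "۱",  # Arabic-Indic one (U+0661) -> Farsi one (U+06F1)
    "?": "؟",  # Latin question mark -> Farsi question mark
    ",": "،",  # Latin comma -> Farsi comma
})

def normalize(text: str) -> str:
    """Map look-alike characters to their Farsi counterparts."""
    return text.translate(NORMALIZATION_TABLE)
```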
In the end, any character found in the Jomleh dataset is one of the following:

- a Farsi alphabet letter (آ to ی)
- a Farsi digit (۰ to ۹)
- a zero-width non-joiner (`\u200c`)
- a space
- a dot/period (.)
- an exclamation mark (!)
- a Farsi question mark (؟)
- a Farsi comma (،)
Any other character found in the text is removed on a best-effort basis, and if removing such characters would harm the integrity or the meaning of the sentence, that sentence is removed from the dataset altogether.
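As an illustration of the final whitelist, here is a hypothetical filter; the letter set is a rough approximation of "آ to ی" and the actual rules used to build Jomleh may differ:

```python
import re

# Hypothetical whitelist: Farsi letters, Farsi digits, zero-width
# non-joiner, space, and the four allowed punctuation marks.
FARSI_LETTERS = "آابپتثجچحخدذرزژسشصضطظعغفقکگلمنوهی"
FARSI_DIGITS = "۰۱۲۳۴۵۶۷۸۹"
ALLOWED = re.compile(f"[{FARSI_LETTERS}{FARSI_DIGITS}\u200c .!؟،]+")

def is_clean(sentence: str) -> bool:
    """Keep a sentence only if every character is on the whitelist."""
    return ALLOWED.fullmatch(sentence) is not None
```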