---
license: cc0-1.0
size_categories:
- 10B
---

Each entry in the dataset is a JSON object of the following form:

```json
{
  "text": "",
  "source": ""
}
```

## Data curation process

The value of this dataset lies in its preprocessing of the text. The main struggle when working with Farsi text is that, due to some historical challenges, there are many different encodings out there used to store Farsi text. On top of that, add the complexity of dealing with multiple character codes for the same letter. In Farsi, the visual form of a character depends on its neighbouring characters. For example, consider the final letter of the Farsi alphabet, "Ye". It has a standalone shape:

ی
But when surrounded by other characters, its middle form is used:
ﯿ
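The two forms can be told apart programmatically. A minimal Python check (assuming the medial form shown above is the Unicode presentation form U+FBFF) demonstrates that the visually similar glyphs carry different code points, and that Unicode compatibility normalization folds the presentation form back to the base letter:

```python
import unicodedata

# The standalone Farsi Yeh and its medial presentation form look alike
# in rendered text but carry different code points.
standalone = "\u06cc"  # ARABIC LETTER FARSI YEH
medial = "\ufbff"      # ARABIC LETTER FARSI YEH MEDIAL FORM

print(hex(ord(standalone)))  # 0x6cc
print(hex(ord(medial)))      # 0xfbff

# NFKC normalization maps the presentation form to the base letter.
print(unicodedata.normalize("NFKC", medial) == standalone)  # True
```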
This requirement is handled by the font's glyph substitution table, which displays the correct form of each letter. At the same time, some texts don't rely on the font and instead use the specific code point defined for the middle form directly. From the reader's point of view, both look identical, but printing the character codes yields different numbers. This complicates text processing in Farsi, since we need to identify each character with a unique code regardless of its position in the word. On top of that, add the problem of using Arabic characters to type Farsi text. Since the two languages share very similar alphabets, one can successfully read a Farsi text typed with Arabic characters, as the letters look nearly identical in shape. To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterpart. This is not an exact science, but a best effort. The same goes for digits and punctuation. In the end, any character found in the Jomleh dataset is one of:

- a Farsi alphabet letter (`آ` to `ی`)
- a Farsi digit (`۰` to `۹`)
- a zero-width non-joiner (`\u200c`)
- a space
- a dot/period (`.`)
- an exclamation mark (`!`)
- a Farsi question mark (`؟`)
- a Farsi comma (`،`)

Any other character found in the text is eliminated on a best-effort basis, and if eliminating such characters would harm the integrity or meaning of a sentence, that sentence is removed from the dataset altogether.