---
license: cc-by-4.0
task_categories:
- text-generation
language:
- fa
tags:
- persian
- corpus
- poem
- text
pretty_name: Ganjoor - Persian Poem Corpus
size_categories:
- 100K<n<1M
---
# Dataset Card for Ganjoor - Persian Poem Corpus

<!-- Provide a quick summary of the dataset. -->

This is a CSV export of the Ganjoor database, which is published on the project's [GitHub](https://github.com/ganjoor/desktop/releases/tag/v2.96).

## Dataset Details

- **Curated by:** Navid Abbaspoor
- **Language(s) (NLP):** Persian (Farsi)
- **License:** Creative Commons Attribution 4.0 International (cc-by-4.0)

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

This dataset contains nearly all poems by Iran's great poets, from centuries past up to the present. The original database is tabular; I converted it to a CSV file with the following columns:

- `id`: ID of the poem in the original database
- `poem`: title of the poem
- `poet`: name of the poet
- `cat`: category of the poem
- `text`: text and verses of the poem



### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [ganjoor/desktop](https://github.com/ganjoor/desktop/releases/tag/v2.96)

## Usage

<!-- Address questions around how the dataset is intended to be used. -->
```python
from datasets import load_dataset
dataset = load_dataset("mabidan/ganjoor")
```
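
A quick way to inspect a single record (assuming the default `train` split that `load_dataset` creates for a single-file dataset; the field names follow the column list above):

```python
# Peek at one record; the fields match the CSV columns described above.
sample = dataset["train"][0]
print(sample.keys())                         # expected: id, poem, poet, cat, text
print(sample["poet"], "-", sample["poem"])   # poet and poem title of the first row
```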

## Dataset Creation

Persian is a low-resource language for NLP tasks.
I am collecting Persian datasets to help the community build useful tools for this language, so I usually gather Persian data from the web and transform it into a structure that is convenient for NLP work.


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

I did not edit or modify the text and verses, so all of the data comes directly from the original Ganjoor database.


### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
While double-checking the data I found that some poems have no text; this may stem from the preprocessing steps or from the original database itself. You may want to drop those rows before using the dataset; a minimal filtering example follows.
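
A minimal sketch of such a cleanup step with `datasets.Dataset.filter` (assuming a `train` split and that empty poems appear as `None` or empty strings):

```python
# Keep only rows whose `text` field is non-empty.
filtered = dataset["train"].filter(
    lambda row: row["text"] is not None and row["text"].strip() != ""
)
print(f"kept {len(filtered)} of {len(dataset['train'])} rows")
```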

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Dataset Card Contact

For any questions or collaboration, you can contact me: [email protected]