Datasets: cjvt /

parquet-converter committed
Commit c1f2be7
1 Parent(s): a77ffb4

Update parquet files
README.md DELETED
@@ -1,132 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - other
- language:
- - sl
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- - other
- task_ids: []
- pretty_name: solar3
- tags:
- - grammatical-error-correction
- - other-token-classification-of-text-errors
- ---
-
- # Dataset Card for solar3
-
- ### Dataset Summary
-
- Šolar* is a developmental corpus of 5485 school texts (e.g., essays) written by students in Slovenian secondary schools
- (age 15-19) and pupils in the 7th-9th grades of primary school (13-15), with a small percentage also from the 6th grade.
- Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
- document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
-
- \(*) pronounce "š" as "sh" in "shoe".
-
- By default, the dataset is provided at the **sentence level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence of an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence is split into multiple sentences.
-
- The instances can also be aggregated at the **document level** or **paragraph level**
- by explicitly providing the corresponding config:
- ```
- datasets.load_dataset("cjvt/solar3", "paragraph_level")
- datasets.load_dataset("cjvt/solar3", "document_level")
- ```
-
- ### Supported Tasks and Leaderboards
-
- Error correction, e.g., at the token or sequence level, framed as token/sequence classification or text2text generation.
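The text2text framing mentioned above can be sketched as follows. This is not part of the dataset card; the helper name and the short example instance are hypothetical, and tokens are joined with plain spaces for brevity (the exact spacing is encoded in the `space_after` field described below).

```python
def to_text2text_pair(instance):
    """Turn a sentence-level instance into a (source, target) string pair.

    Tokens are naively joined with spaces; use `space_after` for exact spacing.
    """
    src = " ".join(instance["src_tokens"])
    tgt = " ".join(instance["tgt_tokens"])
    return src, tgt

# Hypothetical, shortened instance for illustration:
example = {"src_tokens": ["Ne", "da", "sovražim", "."],
           "tgt_tokens": ["Ne", ",", "da", "sovražim", "."]}
print(to_text2text_pair(example))
```

Pairs produced this way can feed a sequence-to-sequence model directly; instances with an empty source or target side (sentence insertions/deletions) may need special handling.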
-
- ### Languages
-
- Slovenian.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A sample instance from the dataset:
- ```python
- {
-     'id_doc': 'solar1',
-     'doc_title': 'KUS-G-slo-1-GO-E-2009-10001',
-     'is_manually_validated': True,
-     'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
-     'src_ling_annotations': {
-         # truncated for conciseness
-         'lemma': ['”', 'ne', 'da', 'sovražiti', ...],
-         'ana': ['mte:U', 'mte:L', 'mte:Vd', ...],
-         'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...],
-         'ne_tag': [..., 'O', 'B-PER', 'O', ...],
-         'space_after': [False, True, True, False, ...]
-     },
-     'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
-     # omitted for conciseness, the format is the same as in 'src_ling_annotations'
-     'tgt_ling_annotations': {...},
-     'corrections': [
-         {'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']},
-         {'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']},
-         {'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']}
-     ]
- }
- ```
-
- The instance represents corrections in the document 'solar1' (`id_doc`), whose annotations were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by the three elements of `corrections`):
- - a punctuation change: '”' -> '„';
- - a punctuation change: ['”', ','] -> [',', '”'] (i.e., the comma inside the quotes rather than outside);
- - the addition of a new word: 'sebi'.
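Reading the corrections out of an instance can be sketched as below. This helper is not part of the dataset card; the function name is hypothetical and the instance is a shortened, hypothetical variant of the sample above.

```python
def correction_spans(instance):
    """List (source tokens, target tokens, correction types) per correction.

    `idx_src`/`idx_tgt` index into `src_tokens`/`tgt_tokens`; an empty index
    list means the correction purely inserts (or purely deletes) material.
    """
    spans = []
    for corr in instance["corrections"]:
        src_span = [instance["src_tokens"][i] for i in corr["idx_src"]]
        tgt_span = [instance["tgt_tokens"][i] for i in corr["idx_tgt"]]
        spans.append((src_span, tgt_span, corr["corr_types"]))
    return spans

# Shortened, hypothetical instance:
instance = {
    "src_tokens": ["”", "Ne", "da", "sovražim"],
    "tgt_tokens": ["„", "Ne", "da", "sovražim", "sebi"],
    "corrections": [
        {"idx_src": [0], "idx_tgt": [0], "corr_types": ["Z/LOČ/nerazvrščeno"]},
        {"idx_src": [], "idx_tgt": [4], "corr_types": ["O/KAT/povratnost"]},
    ],
}
# The second correction yields ([], ["sebi"], ["O/KAT/povratnost"]): a pure insertion.
```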
-
- ### Data Fields
-
- - `id_doc`: a string containing the identifying name of the document in which the sentence appears;
- - `doc_title`: a string containing the assigned document title;
- - `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher;
- - `src_tokens`: words in the source sentence (`[]` if there is no source sentence);
- - `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using the UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specifications, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e., whether there is a whitespace after each token;
- - `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence);
- - `tgt_ling_annotations`: the same as `src_ling_annotations`, but for the target tokens (**automatically annotated**);
- - `corrections`: a list of corrections, each represented with a dictionary containing the indices of the source tokens involved (`idx_src`), the indices of the target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Note that multiple categories can be assigned to one annotated correction, in which case `len(corr_types) > 1`.
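The `space_after` flags described above allow the original surface string to be rebuilt exactly. A minimal sketch (the function name and the sample values are hypothetical, but mirror the field layout):

```python
def detokenize(tokens, space_after):
    """Rebuild the surface string from tokens and per-token `space_after` flags."""
    parts = []
    for token, has_space in zip(tokens, space_after):
        parts.append(token)
        if has_space:
            parts.append(" ")
    # Drop a possible trailing space after the last token.
    return "".join(parts).rstrip()

print(detokenize(["”", "Ne", "da", "."], [False, True, False, True]))
# ”Ne da.
```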
-
-
- ## Dataset Creation
-
- The developmental corpus Šolar consists of 5485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grades of primary school (13-15), with a small percentage also from the 6th grade. The school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production are provided for each text. School essays form the majority of the corpus, while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.
-
- Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. The corrections were then inserted into the texts by annotators and subsequently categorized. Because the annotations were gathered in a practical (i.e., classroom) setting, sometimes only the most relevant errors are annotated; e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.
-
- ## Additional Information
-
- ### Dataset Curators
-
- Špela Arhar Holdt et al. (please see http://hdl.handle.net/11356/1589 for the full list)
-
- ### Licensing Information
-
- CC BY-NC-SA 4.0.
-
- ### Citation Information
-
- ```
- @misc{solar3,
-     title = {Developmental corpus {\v S}olar 3.0},
-     author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
-     url = {http://hdl.handle.net/11356/1589},
-     note = {Slovenian language resource repository {CLARIN}.{SI}},
-     year = {2022}
- }
- ```
-
- ### Contributions
-
- Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "\u0160olar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. \nPart of the corpus (2,094 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).\n", "citation": "@misc{solar3.0,\n title = {Developmental corpus {\u000b S}olar 3.0},\n author = {Arhar Holdt, {\u000b S}pela and Rozman, Tadeja and Stritar Ku{\u000b c}uk, Mojca and Krek, Simon and Krap{\u000b s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\u000b c}, Polona and Laskowski, Cyprian and Kocjan{\u000b c}i{\u000b c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},\n url = {http://hdl.handle.net/11356/1589},\n note = {Slovenian language resource repository {CLARIN}.{SI}},\n year = {2022}\n}\n", "homepage": "http://hdl.handle.net/11356/1589", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "doc_title": {"dtype": "string", "id": null, "_type": "Value"}, "is_manually_validated": {"dtype": "bool", "id": null, "_type": "Value"}, "id_src_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "src_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "id_tgt_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "tgt_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "corrections": 
[{"idx_src": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "idx_tgt": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "corr_types": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "solar3", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 120154479, "num_examples": 125867, "dataset_name": "solar3"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Solar.TEI.zip": {"num_bytes": 99287852, "checksum": "188945c90c663cc34c77c6aefd40357b60b88436b3d9fd53f24304c927ac1cbf"}}, "download_size": 99287852, "post_processing_size": null, "dataset_size": 120154479, "size_in_bytes": 219442331}, "sentence_level": {"description": "\u0160olar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. 
\nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).\n", "citation": "@misc{solar3.0,\n title = {Developmental corpus {\u000b S}olar 3.0},\n author = {Arhar Holdt, {\u000b S}pela and Rozman, Tadeja and Stritar Ku{\u000b c}uk, Mojca and Krek, Simon and Krap{\u000b s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\u000b c}, Polona and Laskowski, Cyprian and Kocjan{\u000b c}i{\u000b c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},\n url = {http://hdl.handle.net/11356/1589},\n note = {Slovenian language resource repository {CLARIN}.{SI}},\n year = {2022}\n}\n", "homepage": "http://hdl.handle.net/11356/1589", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "doc_title": {"dtype": "string", "id": null, "_type": "Value"}, "is_manually_validated": {"dtype": "bool", "id": null, "_type": "Value"}, "src_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "src_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "tgt_tokens": {"feature": {"dtype": "string", "id": 
null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "tgt_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "corrections": [{"idx_src": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "idx_tgt": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "corr_types": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "solar3", "config_name": "sentence_level", "version": {"version_str": "3.0.2", "description": null, "major": 3, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 319216779, "num_examples": 125867, "dataset_name": "solar3"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Solar.TEI.zip": {"num_bytes": 99287852, "checksum": "188945c90c663cc34c77c6aefd40357b60b88436b3d9fd53f24304c927ac1cbf"}}, "download_size": 99287852, "post_processing_size": null, "dataset_size": 319216779, "size_in_bytes": 418504631}, "document_level": {"description": "\u0160olar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small 
percentage also from the 6th grade. \nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).\n", "citation": "@misc{solar3.0,\n title = {Developmental corpus {\u000b S}olar 3.0},\n author = {Arhar Holdt, {\u000b S}pela and Rozman, Tadeja and Stritar Ku{\u000b c}uk, Mojca and Krek, Simon and Krap{\u000b s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\u000b c}, Polona and Laskowski, Cyprian and Kocjan{\u000b c}i{\u000b c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},\n url = {http://hdl.handle.net/11356/1589},\n note = {Slovenian language resource repository {CLARIN}.{SI}},\n year = {2022}\n}\n", "homepage": "http://hdl.handle.net/11356/1589", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "doc_title": {"dtype": "string", "id": null, "_type": "Value"}, "is_manually_validated": {"dtype": "bool", "id": null, "_type": "Value"}, "src_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "src_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "tgt_tokens": 
{"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "tgt_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "corrections": [{"idx_src": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "idx_tgt": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "corr_types": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "solar3", "config_name": "document_level", "version": {"version_str": "3.0.2", "description": null, "major": 3, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 306167538, "num_examples": 5485, "dataset_name": "solar3"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Solar.TEI.zip": {"num_bytes": 99287852, "checksum": "188945c90c663cc34c77c6aefd40357b60b88436b3d9fd53f24304c927ac1cbf"}}, "download_size": 99287852, "post_processing_size": null, "dataset_size": 306167538, "size_in_bytes": 405455390}, "paragraph_level": {"description": "\u0160olar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of 
primary school (13-15), with a small percentage also from the 6th grade. \nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).\n", "citation": "@misc{solar3.0,\n title = {Developmental corpus {\u000b S}olar 3.0},\n author = {Arhar Holdt, {\u000b S}pela and Rozman, Tadeja and Stritar Ku{\u000b c}uk, Mojca and Krek, Simon and Krap{\u000b s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\u000b c}, Polona and Laskowski, Cyprian and Kocjan{\u000b c}i{\u000b c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},\n url = {http://hdl.handle.net/11356/1589},\n note = {Slovenian language resource repository {CLARIN}.{SI}},\n year = {2022}\n}\n", "homepage": "http://hdl.handle.net/11356/1589", "license": "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "features": {"id_doc": {"dtype": "string", "id": null, "_type": "Value"}, "doc_title": {"dtype": "string", "id": null, "_type": "Value"}, "is_manually_validated": {"dtype": "bool", "id": null, "_type": "Value"}, "src_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "src_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, 
"_type": "Sequence"}}, "tgt_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "tgt_ling_annotations": {"lemma": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ana": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "msd": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ne_tag": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "corrections": [{"idx_src": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "idx_tgt": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "corr_types": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "solar3", "config_name": "paragraph_level", "version": {"version_str": "3.0.2", "description": null, "major": 3, "minor": 0, "patch": 2}, "splits": {"train": {"name": "train", "num_bytes": 309546019, "num_examples": 38345, "dataset_name": "solar3"}}, "download_checksums": {"https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Solar.TEI.zip": {"num_bytes": 99287852, "checksum": "188945c90c663cc34c77c6aefd40357b60b88436b3d9fd53f24304c927ac1cbf"}}, "download_size": 99287852, "post_processing_size": null, "dataset_size": 309546019, "size_in_bytes": 408833871}}
 
 
document_level/solar3-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:953a22a9f7d419a842b91e52bcf2094ab36a0af2674abf213023cf12ab43aa9d
+ size 29631663
paragraph_level/solar3-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14b5a2ca9dd31d5273d67ddb1fb9e183794409a494065b6931df6e0b475881c6
+ size 33242459
sentence_level/solar3-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:affc7cb443887f1b2a45c1df9b79dafc06ba28dfc96a33ed89857a46d1de5498
+ size 39197661
solar3.py DELETED
@@ -1,509 +0,0 @@
- import logging
- import os
- import re
- import xml.etree.ElementTree as ET
- from copy import deepcopy
- from itertools import groupby
- from typing import Optional
-
- import datasets
-
- _CITATION = """\
- @misc{solar3.0,
-     title = {Developmental corpus {\v S}olar 3.0},
-     author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
-     url = {http://hdl.handle.net/11356/1589},
-     note = {Slovenian language resource repository {CLARIN}.{SI}},
-     year = {2022}
- }
- """
-
- _DESCRIPTION = """\
- Šolar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
- (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
- Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
- document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
- """
-
- _HOMEPAGE = "http://hdl.handle.net/11356/1589"
-
- _LICENSE = "Creative Commons - Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)"
-
- _URLS = {
-     "solar_tei": "https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Solar.TEI.zip"
- }
-
- XML_NAMESPACE = "{http://www.w3.org/XML/1998/namespace}"
-
-
- def namespace(element):
-     # https://stackoverflow.com/a/12946675
-     m = re.match(r'\{.*\}', element.tag)
-     return m.group(0) if m else ''
-
-
- def resolve_element(tag_el, ne_tag: Optional[str] = "O"):
-     if not tag_el.tag.endswith(("w", "pc", "seg")):
-         return []
-
-     if tag_el.tag.endswith(("w", "pc")):
-         form = tag_el.text.strip()
-         lemma = tag_el.text.strip() if tag_el.tag.endswith("pc") else tag_el.attrib["lemma"]
-         ana = tag_el.attrib["ana"]  # JOS/MTE specifications
-         msd = tag_el.attrib["msd"]  # UD specifications
-         ret_ne_tag = ne_tag
-         id_tag = tag_el.attrib[f"{XML_NAMESPACE}id"]
-         space_after = False if "join" in tag_el.attrib and tag_el.attrib["join"] == "right" else True
-
-         return [(id_tag, form, lemma, ana, msd, ret_ne_tag, space_after)]
-     # Named entities: words and punctuation nested directly below current element
-     elif tag_el.tag.endswith("seg"):
-         anns = []
-         ret_ne_tag = tag_el.attrib["subtype"].upper()
-         for idx_child, curr_child in enumerate(tag_el):
-             anns.extend(resolve_element(curr_child, ne_tag=f"B-{ret_ne_tag}" if idx_child == 0 else f"I-{ret_ne_tag}"))
-
-         return anns
-
-
- def extract_sent_id(tok_id):
-     # e.g., `extract_sent_id("#solar1s.3.2.44") == "solar1s.3.2"` or `extract_sent_id("solar1s.3.2.44") == "solar1s.3.2"`
-     _tok_id = tok_id[1:] if tok_id.startswith("#") else tok_id
-     return ".".join(_tok_id.split(".")[:-1])
-
-
- def find_involved_sents(correction_group_el):
-     src_sent_ids = set()
-     tgt_sent_ids = set()
-     for _curr_corr in correction_group_el:
-         sent_ids = list(map(lambda _tok_id: extract_sent_id(_tok_id),
-                             _curr_corr.attrib["target"].split(" ")))
-
-         for _s_id in sent_ids:
-             if "t" in _s_id:
-                 tgt_sent_ids.add(_s_id)
-             else:
-                 src_sent_ids.add(_s_id)
-
-     return sorted(list(src_sent_ids)), sorted(list(tgt_sent_ids))
-
-
- def read_data(data_path):
-     data = {}  # ID_sent -> sentence_metadata
-     tree = ET.parse(data_path)
-     root = tree.getroot()
-     NAMESPACE = namespace(root)
-
-     for curr_text in root.iterfind(f".//{NAMESPACE}div"):
-         id_text = curr_text.attrib[f"{XML_NAMESPACE}id"]
-         bibl_el = curr_text.find(f"{NAMESPACE}bibl")
-         if bibl_el is None:
-             text_title = "Unknown_title"
-             logging.warning(f"The following text does not have a 'bibl' element: {curr_text.attrib}. "
-                             f"Setting title to 'Unknown_title'")
-             is_manually_validated = False
-         else:
-             text_title = bibl_el.attrib["n"]
-             note_el = bibl_el.find(f"{NAMESPACE}note")
-             is_manually_validated = note_el.text == "DA"
-
-         for idx_par, curr_par in enumerate(curr_text.iterfind(f".//{NAMESPACE}p")):
-             for idx_sent, curr_sent in enumerate(curr_par.iterfind(f".//{NAMESPACE}s")):
-                 id_sent = curr_sent.attrib[f"{XML_NAMESPACE}id"]
-                 ids, forms, lemmas, msds, nes, spaces_after = [], [], [], [], [], []
-                 msds_jos, msds_ud = [], []
-                 for curr_el in curr_sent:
-                     curr_annotations = resolve_element(curr_el)
-                     for curr_ann in curr_annotations:
-                         ids.append(curr_ann[0])
-                         forms.append(curr_ann[1])
-                         lemmas.append(curr_ann[2])
-                         msds_jos.append(curr_ann[3])
-                         msds_ud.append(curr_ann[4])
-                         nes.append(curr_ann[5])
-                         spaces_after.append(curr_ann[6])
-
-                 data[id_sent] = {
-                     "id_doc": id_text,
-                     "doc_title": text_title,
-                     "idx_par": idx_par,
-                     "id_token": ids, "form": forms, "lemma": lemmas, "ana": msds_jos, "msd": msds_ud, "ne_tag": nes, "space_after": spaces_after,
-                     "is_manually_validated": is_manually_validated
-                 }
-
-     return data
-
-
- class Solar3(datasets.GeneratorBasedBuilder):
-     """Šolar is a developmental corpus of school texts (e.g., essays), annotated with metadata and (partially)
-     with teachers' corrections."""
-
-     VERSION = datasets.Version("3.0.2")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="sentence_level", version=VERSION,
-                                description="Annotations at sentence-level."),
-         datasets.BuilderConfig(name="paragraph_level", version=VERSION,
-                                description="Annotations at paragraph-level."),
-         datasets.BuilderConfig(name="document_level", version=VERSION,
-                                description="Annotations at document-level."),
-     ]
-
-     DEFAULT_CONFIG_NAME = "sentence_level"  # default = annotations as provided in the original data
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "id_doc": datasets.Value("string"),
-                 "doc_title": datasets.Value("string"),
-                 "is_manually_validated": datasets.Value("bool"),
-                 "src_tokens": datasets.Sequence(datasets.Value("string")),
-                 "src_ling_annotations": {
-                     "lemma": datasets.Sequence(datasets.Value("string")),
-                     "ana": datasets.Sequence(datasets.Value("string")),
-                     "msd": datasets.Sequence(datasets.Value("string")),
-                     "ne_tag": datasets.Sequence(datasets.Value("string")),
-                     "space_after": datasets.Sequence(datasets.Value("bool"))
-                 },
-                 "tgt_tokens": datasets.Sequence(datasets.Value("string")),
-                 "tgt_ling_annotations": {
-                     "lemma": datasets.Sequence(datasets.Value("string")),
-                     "ana": datasets.Sequence(datasets.Value("string")),
-                     "msd": datasets.Sequence(datasets.Value("string")),
-                     "ne_tag": datasets.Sequence(datasets.Value("string")),
-                     "space_after": datasets.Sequence(datasets.Value("bool"))
-                 },
-                 "corrections": [
-                     {
-                         "idx_src": datasets.Sequence(datasets.Value("int32")),
-                         "idx_tgt": datasets.Sequence(datasets.Value("int32")),
-                         "corr_types": datasets.Sequence(datasets.Value("string"))
-                     }
-                 ]
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
193
-
194
-    def _split_generators(self, dl_manager):
-        urls = _URLS["solar_tei"]
-        data_dir = dl_manager.download_and_extract(urls)
-
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                # These kwargs will be passed to _generate_examples
-                gen_kwargs={
-                    "source_path": os.path.join(data_dir, "Solar.TEI", "solar-orig.xml"),
-                    "target_path": os.path.join(data_dir, "Solar.TEI", "solar-corr.xml"),
-                    "links_path": os.path.join(data_dir, "Solar.TEI", "solar-errs.xml")
-                }
-            )
-        ]
-
210
-    @staticmethod
-    def generate_sentences(source_path, target_path, links_path):
-        source_data = read_data(source_path)
-        target_data = read_data(target_path)
-
-        data = ET.parse(links_path)
-        root = data.getroot()
-        NAMESPACE = namespace(root)
-
-        for idx_corr, corrected_sent in enumerate(root.iterfind(f"{NAMESPACE}linkGrp")):
-            # Involved sentences according to the IDs of token mappings - 'corresp' does not list all of them!
-            # (possible bug in data)
-            involved_src_sents, involved_tgt_sents = find_involved_sents(corrected_sent)
-
-            id_doc, doc_title, is_manually_validated = None, None, False
-            src_sent_data, tgt_sent_data = {}, {}
-            tok2position = {}
-            assert len(involved_src_sents) > 0 or len(involved_tgt_sents) > 0
-
-            if len(involved_src_sents) > 0:
-                src_sent_data = deepcopy(source_data[involved_src_sents[0]])
-                if not isinstance(src_sent_data["idx_par"], list):
-                    src_sent_data["idx_par"] = [src_sent_data["idx_par"]]
-
-                for src_sent_id in involved_src_sents[1:]:
-                    curr_sent_data = source_data[src_sent_id]
-
-                    src_sent_data["id_token"].extend(curr_sent_data["id_token"])
-                    src_sent_data["idx_par"].append(curr_sent_data["idx_par"])
-                    src_sent_data["form"].extend(curr_sent_data["form"])
-                    src_sent_data["lemma"].extend(curr_sent_data["lemma"])
-                    src_sent_data["ana"].extend(curr_sent_data["ana"])
-                    src_sent_data["msd"].extend(curr_sent_data["msd"])
-                    src_sent_data["ne_tag"].extend(curr_sent_data["ne_tag"])
-                    src_sent_data["space_after"].extend(curr_sent_data["space_after"])
-
-                id_doc = src_sent_data["id_doc"]
-                doc_title = src_sent_data["doc_title"]
-                is_manually_validated |= src_sent_data["is_manually_validated"]
-                for _pos, _tok in enumerate(src_sent_data["id_token"]):
-                    tok2position[_tok] = _pos
-
-            if len(involved_tgt_sents) > 0:
-                tgt_sent_data = deepcopy(target_data[involved_tgt_sents[0]])
-                if not isinstance(tgt_sent_data["idx_par"], list):
-                    tgt_sent_data["idx_par"] = [tgt_sent_data["idx_par"]]
-
-                for tgt_sent_id in involved_tgt_sents[1:]:
-                    curr_sent_data = target_data[tgt_sent_id]
-
-                    tgt_sent_data["id_token"].extend(curr_sent_data["id_token"])
-                    tgt_sent_data["idx_par"].append(curr_sent_data["idx_par"])
-                    tgt_sent_data["form"].extend(curr_sent_data["form"])
-                    tgt_sent_data["lemma"].extend(curr_sent_data["lemma"])
-                    tgt_sent_data["ana"].extend(curr_sent_data["ana"])
-                    tgt_sent_data["msd"].extend(curr_sent_data["msd"])
-                    tgt_sent_data["ne_tag"].extend(curr_sent_data["ne_tag"])
-                    tgt_sent_data["space_after"].extend(curr_sent_data["space_after"])
-
-                id_doc = tgt_sent_data["id_doc"]
-                doc_title = tgt_sent_data["doc_title"]
-                is_manually_validated |= tgt_sent_data["is_manually_validated"]
-                for _pos, _tok in enumerate(tgt_sent_data["id_token"]):
-                    tok2position[_tok] = _pos
-
-            corr_data = []
-            for token_info in corrected_sent.findall(f"{NAMESPACE}link"):
-                connections = token_info.attrib["target"].split(" ")
-
-                corrections = token_info.attrib["type"]
-                if corrections == "ID":
-                    continue
-
-                src_inds, tgt_inds = [], []
-                corr_types = []
-                for curr_corr in corrections.split("|"):
-                    corr_types.append(curr_corr)
-
-                for curr_tok in connections:
-                    # Strip the leading "#" from the TEI pointer, then map the token ID to its 0-based position
-                    idx_tok = tok2position[curr_tok[1:]]
-                    if "t" in curr_tok:  # target token
-                        tgt_inds.append(idx_tok)
-                    else:  # source token
-                        src_inds.append(idx_tok)
-
-                corr_data.append({"idx_src": src_inds, "idx_tgt": tgt_inds, "corr_types": corr_types})
-
-            yield idx_corr, {
-                "id_doc": id_doc[:-1],  # doc ID without the "s" or "t" info
-                "doc_title": doc_title,
-                "is_manually_validated": is_manually_validated,
-                "idx_src_par": src_sent_data.get("idx_par", []),
-                "id_src_tokens": src_sent_data.get("id_token", []),
-                "src_tokens": src_sent_data.get("form", []),
-                "src_ling_annotations": {
-                    "lemma": src_sent_data.get("lemma", []),
-                    "ana": src_sent_data.get("ana", []),
-                    "msd": src_sent_data.get("msd", []),
-                    "ne_tag": src_sent_data.get("ne_tag", []),
-                    "space_after": src_sent_data.get("space_after", [])
-                },
-                "idx_tgt_par": tgt_sent_data.get("idx_par", []),
-                "id_tgt_tokens": tgt_sent_data.get("id_token", []),
-                "tgt_tokens": tgt_sent_data.get("form", []),
-                "tgt_ling_annotations": {
-                    "lemma": tgt_sent_data.get("lemma", []),
-                    "ana": tgt_sent_data.get("ana", []),
-                    "msd": tgt_sent_data.get("msd", []),
-                    "ne_tag": tgt_sent_data.get("ne_tag", []),
-                    "space_after": tgt_sent_data.get("space_after", [])
-                },
-                "corrections": corr_data
-            }
-
325
-    @staticmethod
-    def aggregate_pars(sent_level_data):
-        # TODO: the code is a copy-paste of the document aggregation, with an additional groupby - could use a refactor
-        uniq_idx_par = 0
-        for idx_doc, (curr_id, curr_group) in enumerate(groupby(sent_level_data, key=lambda tup: tup[1]["id_doc"])):
-            curr_instances = list(map(lambda tup: tup[1], curr_group))  # remove the redundant index info from datasets
-
-            # Some sentences have no `idx_src_par` because they are added by the teacher (not present in the source)
-            for idx_par, curr_par_group in groupby(
-                    curr_instances,
-                    key=lambda _inst: _inst["idx_src_par"][0] if len(_inst["idx_src_par"]) > 0 else
-                    _inst["idx_tgt_par"][0]
-            ):
-                src_tokens, tgt_tokens, mapped_corrections = [], [], []
-                src_ling_anns = {"lemma": [], "ana": [], "msd": [], "ne_tag": [], "space_after": []}
-                tgt_ling_anns = {"lemma": [], "ana": [], "msd": [], "ne_tag": [], "space_after": []}
-                seen_src_tokens, seen_tgt_tokens = {}, {}
-                src_base, tgt_base = 0, 0
-                prev_src_base, prev_tgt_base = 0, 0
-
-                doc_title, is_validated = None, None
-                for curr_inst in curr_par_group:
-                    doc_title, is_validated = curr_inst["doc_title"], curr_inst["is_manually_validated"]
-
-                    id_src_toks, id_tgt_toks = curr_inst["id_src_tokens"], curr_inst["id_tgt_tokens"]
-                    curr_src_toks, curr_tgt_toks = curr_inst["src_tokens"], curr_inst["tgt_tokens"]
-                    curr_src_anns, curr_tgt_anns = curr_inst["src_ling_annotations"], curr_inst["tgt_ling_annotations"]
-                    curr_corrs = curr_inst["corrections"]
-
-                    num_added_src, num_added_tgt = 0, 0
-                    for idx_position, (id_tok, tok) in enumerate(zip(id_src_toks, curr_src_toks)):
-                        if id_tok not in seen_src_tokens:
-                            src_tokens.append(tok)
-                            src_ling_anns["lemma"].append(curr_src_anns["lemma"][idx_position])
-                            src_ling_anns["ana"].append(curr_src_anns["ana"][idx_position])
-                            src_ling_anns["msd"].append(curr_src_anns["msd"][idx_position])
-                            src_ling_anns["ne_tag"].append(curr_src_anns["ne_tag"][idx_position])
-                            src_ling_anns["space_after"].append(curr_src_anns["space_after"][idx_position])
-
-                            seen_src_tokens[id_tok] = tok
-                            num_added_src += 1
-
-                    for idx_position, (id_tok, tok) in enumerate(zip(id_tgt_toks, curr_tgt_toks)):
-                        if id_tok not in seen_tgt_tokens:
-                            tgt_tokens.append(tok)
-                            tgt_ling_anns["lemma"].append(curr_tgt_anns["lemma"][idx_position])
-                            tgt_ling_anns["ana"].append(curr_tgt_anns["ana"][idx_position])
-                            tgt_ling_anns["msd"].append(curr_tgt_anns["msd"][idx_position])
-                            tgt_ling_anns["ne_tag"].append(curr_tgt_anns["ne_tag"][idx_position])
-                            tgt_ling_anns["space_after"].append(curr_tgt_anns["space_after"][idx_position])
-
-                            seen_tgt_tokens[id_tok] = tok
-                            num_added_tgt += 1
-
-                    if num_added_src == 0:
-                        src_base, prev_src_base = prev_src_base, src_base
-
-                    if num_added_tgt == 0:
-                        tgt_base, prev_tgt_base = prev_tgt_base, tgt_base
-
-                    for corr in curr_corrs:
-                        mapped_corrections.append({
-                            "idx_src": list(map(lambda _i: src_base + _i, corr["idx_src"])),
-                            "idx_tgt": list(map(lambda _i: tgt_base + _i, corr["idx_tgt"])),
-                            "corr_types": corr["corr_types"]
-                        })
-
-                    src_base += num_added_src
-                    tgt_base += num_added_tgt
-
-                    if num_added_src == 0:
-                        src_base, prev_src_base = prev_src_base, src_base
-
-                    if num_added_tgt == 0:
-                        tgt_base, prev_tgt_base = prev_tgt_base, tgt_base
-
-                yield uniq_idx_par, {
-                    "id_doc": curr_id,
-                    "doc_title": doc_title,
-                    "is_manually_validated": is_validated,
-                    "src_tokens": src_tokens,
-                    "src_ling_annotations": src_ling_anns,
-                    "tgt_tokens": tgt_tokens,
-                    "tgt_ling_annotations": tgt_ling_anns,
-                    "corrections": mapped_corrections
-                }
-                uniq_idx_par += 1
412
-
-    @staticmethod
-    def aggregate_docs(sent_level_data):
-        # NOTE: assuming here that `sent_level_data` is pre-sorted by id_doc, which is done in the raw data
-        for idx_doc, (curr_id, curr_group) in enumerate(groupby(sent_level_data, key=lambda tup: tup[1]["id_doc"])):
-            curr_instances = map(lambda tup: tup[1], curr_group)  # remove the redundant index info from datasets
-
-            src_tokens, tgt_tokens, mapped_corrections = [], [], []
-            src_ling_anns = {"lemma": [], "ana": [], "msd": [], "ne_tag": [], "space_after": []}
-            tgt_ling_anns = {"lemma": [], "ana": [], "msd": [], "ne_tag": [], "space_after": []}
-            seen_src_tokens, seen_tgt_tokens = {}, {}
-            # Need to keep the current base position of source and target tokens AND previous base position:
-            # A source may map into multiple targets (or vice versa), but we do not want to write it twice in a doc.
-            # Therefore, when the same sentence is encountered twice, the base is shifted to the previous one to map
-            # the indices of corrected tokens correctly.
-            src_base, tgt_base = 0, 0
-            prev_src_base, prev_tgt_base = 0, 0
-
-            doc_title, is_validated = None, None
-            for curr_inst in curr_instances:
-                doc_title, is_validated = curr_inst["doc_title"], curr_inst["is_manually_validated"]
-
-                id_src_toks, id_tgt_toks = curr_inst["id_src_tokens"], curr_inst["id_tgt_tokens"]
-                curr_src_toks, curr_tgt_toks = curr_inst["src_tokens"], curr_inst["tgt_tokens"]
-                curr_src_anns, curr_tgt_anns = curr_inst["src_ling_annotations"], curr_inst["tgt_ling_annotations"]
-                curr_corrs = curr_inst["corrections"]
-
-                num_added_src, num_added_tgt = 0, 0
-                for idx_position, (id_tok, tok) in enumerate(zip(id_src_toks, curr_src_toks)):
-                    if id_tok not in seen_src_tokens:
-                        src_tokens.append(tok)
-                        src_ling_anns["lemma"].append(curr_src_anns["lemma"][idx_position])
-                        src_ling_anns["ana"].append(curr_src_anns["ana"][idx_position])
-                        src_ling_anns["msd"].append(curr_src_anns["msd"][idx_position])
-                        src_ling_anns["ne_tag"].append(curr_src_anns["ne_tag"][idx_position])
-                        src_ling_anns["space_after"].append(curr_src_anns["space_after"][idx_position])
-
-                        seen_src_tokens[id_tok] = tok
-                        num_added_src += 1
-
-                for idx_position, (id_tok, tok) in enumerate(zip(id_tgt_toks, curr_tgt_toks)):
-                    if id_tok not in seen_tgt_tokens:
-                        tgt_tokens.append(tok)
-                        tgt_ling_anns["lemma"].append(curr_tgt_anns["lemma"][idx_position])
-                        tgt_ling_anns["ana"].append(curr_tgt_anns["ana"][idx_position])
-                        tgt_ling_anns["msd"].append(curr_tgt_anns["msd"][idx_position])
-                        tgt_ling_anns["ne_tag"].append(curr_tgt_anns["ne_tag"][idx_position])
-                        tgt_ling_anns["space_after"].append(curr_tgt_anns["space_after"][idx_position])
-
-                        seen_tgt_tokens[id_tok] = tok
-                        num_added_tgt += 1
-
-                if num_added_src == 0:
-                    src_base, prev_src_base = prev_src_base, src_base
-
-                if num_added_tgt == 0:
-                    tgt_base, prev_tgt_base = prev_tgt_base, tgt_base
-
-                for corr in curr_corrs:
-                    mapped_corrections.append({
-                        "idx_src": list(map(lambda _i: src_base + _i, corr["idx_src"])),
-                        "idx_tgt": list(map(lambda _i: tgt_base + _i, corr["idx_tgt"])),
-                        "corr_types": corr["corr_types"]
-                    })
-
-                src_base += num_added_src
-                tgt_base += num_added_tgt
-
-                if num_added_src == 0:
-                    src_base, prev_src_base = prev_src_base, src_base
-
-                if num_added_tgt == 0:
-                    tgt_base, prev_tgt_base = prev_tgt_base, tgt_base
-
-            yield idx_doc, {
-                "id_doc": curr_id,
-                "doc_title": doc_title,
-                "is_manually_validated": is_validated,
-                "src_tokens": src_tokens,
-                "src_ling_annotations": src_ling_anns,
-                "tgt_tokens": tgt_tokens,
-                "tgt_ling_annotations": tgt_ling_anns,
-                "corrections": mapped_corrections
-            }
-
497
-    def _generate_examples(self, source_path, target_path, links_path):
-        sent_level_data = list(Solar3.generate_sentences(source_path, target_path, links_path))
-
-        if self.config.name == "sentence_level":
-            # Remove IDs and indices that are only useful for aggregating the document-level data
-            for i, instance in sent_level_data:
-                yield i, {_k: _v for _k, _v in instance.items() if _k not in {"id_src_tokens", "id_tgt_tokens",
-                                                                              "idx_src_par", "idx_tgt_par"}}
-        elif self.config.name == "paragraph_level":
-            yield from list(Solar3.aggregate_pars(sent_level_data))
-        else:
-            yield from list(Solar3.aggregate_docs(sent_level_data))
-
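The core idea of the deleted `aggregate_docs` method — concatenate sentence-level token lists per document and shift each correction's token indices by a running base — can be sketched in isolation. This is a simplified toy, not the removed implementation: it ignores the duplicate-token (`seen_src_tokens`/`prev_src_base`) bookkeeping and the linguistic annotations, and the example instances below are invented for illustration; only the field names mirror what `generate_sentences` yields.

```python
from itertools import groupby


def aggregate_docs_sketch(sent_instances):
    """Merge sentence-level instances into one instance per document.

    Assumes `sent_instances` is pre-sorted by "id_doc" (as in the raw data),
    since itertools.groupby only groups consecutive equal keys.
    """
    for id_doc, group in groupby(sent_instances, key=lambda inst: inst["id_doc"]):
        src_tokens, tgt_tokens, corrections = [], [], []
        src_base, tgt_base = 0, 0  # running offsets of already-added tokens
        for inst in group:
            # Remap sentence-local correction indices to document-level ones
            # by adding the number of tokens accumulated so far.
            for corr in inst["corrections"]:
                corrections.append({
                    "idx_src": [src_base + i for i in corr["idx_src"]],
                    "idx_tgt": [tgt_base + i for i in corr["idx_tgt"]],
                    "corr_types": corr["corr_types"],
                })
            src_tokens.extend(inst["src_tokens"])
            tgt_tokens.extend(inst["tgt_tokens"])
            src_base += len(inst["src_tokens"])
            tgt_base += len(inst["tgt_tokens"])
        yield {"id_doc": id_doc, "src_tokens": src_tokens,
               "tgt_tokens": tgt_tokens, "corrections": corrections}
```

The extra `prev_src_base`/`prev_tgt_base` swapping in the original handles the case where a sentence contributes no new tokens (it was already written out by an earlier one-to-many correction), so indices must be mapped against the previous base instead.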