parquet-converter committed on
Commit
c81f38c
1 Parent(s): 28f5425

Update parquet files

README.md DELETED
@@ -1,219 +0,0 @@
1
- ---
2
- pretty_name: XML-parsed PMC
3
- task_categories:
4
- - text-classification
5
- - summarization
6
- - other
7
- annotations_creators:
8
- - no-annotation
9
- language_creators:
10
- - expert-generated
11
- language:
12
- - en
13
- size_categories:
14
- - 1M<n<10M
15
- source_datasets:
16
- - original
17
- license:
18
- - cc0-1.0
19
- - cc-by-4.0
20
- - cc-by-sa-4.0
21
- - cc-by-nc-4.0
22
- - cc-by-nd-4.0
23
- - cc-by-nc-nd-4.0
24
- - cc-by-nc-sa-4.0
25
- - unknown
26
- - other
27
- multilinguality:
28
- - monolingual
29
- task_ids: []
30
- tags:
31
- - research papers
32
- - biology
33
- - medicine
34
- ---
35
-
36
- # Dataset Card for PMC Open Access XML
37
-
38
- ## Table of Contents
39
- - [Dataset Description](#dataset-description)
40
- - [Dataset Summary](#dataset-summary)
41
- - [Supported Tasks](#supported-tasks-and-leaderboards)
42
- - [Languages](#languages)
43
- - [Dataset Structure](#dataset-structure)
44
- - [Data Instances](#data-instances)
45
- - [Data Fields](#data-fields)
46
- - [Data Splits](#data-splits)
47
- - [Dataset Creation](#dataset-creation)
48
- - [Curation Rationale](#curation-rationale)
49
- - [Source Data](#source-data)
50
- - [Annotations](#annotations)
51
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
52
- - [Considerations for Using the Data](#considerations-for-using-the-data)
53
- - [Social Impact of Dataset](#social-impact-of-dataset)
54
- - [Discussion of Biases](#discussion-of-biases)
55
- - [Other Known Limitations](#other-known-limitations)
56
- - [Additional Information](#additional-information)
57
- - [Dataset Curators](#dataset-curators)
58
- - [Licensing Information](#licensing-information)
59
- - [Citation Information](#citation-information)
60
-
61
- ## Dataset Description
62
-
63
- - **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
64
- - **Repository:** [Needs More Information]
65
- - **Paper:** [Needs More Information]
66
- - **Leaderboard:** [Needs More Information]
67
- - **Point of Contact:** [Needs More Information]
68
-
69
- ### Dataset Summary
70
-
71
- The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under
72
- license terms that allow reuse.
73
- Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
74
- in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
75
- liberal redistribution and reuse than a traditional copyrighted work.
76
- The PMC Open Access Subset is one part of the PMC Article Datasets
77
-
78
- This version takes the XML version as source, benefiting from the structured text
79
- to split the articles into parts, namely the introduction, methods, results,
80
- discussion and conclusion, and to reference, with keywords in the text, external or internal
81
- resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, media).
82
-
83
- The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of those
84
- references (e.g. for a PMID, by joining the referred article's abstract from the pubmed dataset), but it aims more broadly to provide
85
- a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
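- 
- As a minimal usage sketch (the repository id below is a placeholder, not stated in this card; the configuration names "all", "commercial", "non_commercial" and "other" come from the loading script):
- 
- ```python
- from datasets import load_dataset
- 
- # "user/pmc-open-access-xml" is a hypothetical repository id; replace it with the actual one.
- dataset = load_dataset("user/pmc-open-access-xml", "commercial", split="train")
- print(dataset[0]["accession_id"], len(dataset[0]["introduction"]))
- ```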
86
-
87
- ### Supported Tasks and Leaderboards
88
-
89
- [Needs More Information]
90
-
91
- ### Languages
92
-
93
- [Needs More Information]
94
-
95
- ## Dataset Structure
96
-
97
- ### Data Fields
98
-
99
- - "accession_id": The PMC ID of the article
100
- - "pmid": The PubMed ID of the article
101
- - "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
102
- - "methods": Same as introduction with "method" keyword.
103
- - "results": Same as introduction with "result" keyword.
104
- - "discussion": Same as introduction with "discussion" keyword.
105
- - "conclusion": Same as introduction with "conclusion" keyword.
106
- - "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
107
- - "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
108
- - "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
109
- - "figure": List of \<fig\> elements of the article.
110
- - "table": List of \<table-wrap\> and \<array\> elements of the article.
111
- - "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
112
- - "box": List of \<boxed-text\> elements of the article.
113
- - "code": List of \<code\> elements of the article.
114
- - "quote": List of \<disp-quote\> and \<speech\> elements of the article.
115
- - "chemical": List of \<chem-struct-wrap\> elements of the article.
116
- - "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
117
- - "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
118
- - "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
119
- - "media": List of \<media\> and \<inline-media\> elements of the article.
120
- - "glossary": Glossary if found in the XML
121
- - "unknown_references": JSON of a dictionary mapping "tag" to "text" for each reference that did not indicate a PMID
122
- - "n_references": Total number of references and unknown references
123
- - "license": The license of the article
124
- - "retracted": Whether the article was retracted or not
125
- - "last_updated": Last update of the article
126
- - "citation": Citation of the article
127
- - "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
128
-
129
- In text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
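- 
- A minimal sketch for locating those markers, assuming only Python's standard `re` module (the example string is illustrative, not taken from the dataset):
- 
- ```python
- import re
- 
- # Matches ##KEYWORD##INDEX##ORIGINAL_TEXT## markers as described above.
- MARKER = re.compile(r"##([A-Z]+)##(\d+)##(.*?)##")
- 
- text = "As reported in ##REF##12345678##[3]## and shown in ##FIG##0##Figure 1##."
- for keyword, index, original in MARKER.findall(text):
-     print(keyword, index, original)
- 
- # To recover plain text, replace each marker by its original wording.
- plain = MARKER.sub(lambda m: m.group(3), text)
- ```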
130
- ### Data Splits
131
-
132
- [Needs More Information]
133
-
134
- ## Dataset Creation
135
-
136
- ### Curation Rationale
137
-
138
- Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation
139
- for the different kinds of possible usage.
140
- Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in titles were used. Because there are no rules
141
- in this XML to tag those sections, finding the keyword seemed like the most reliable approach to do so. A drawback is that many sections do not have those
142
- keywords in their titles but could be assimilated to them. However, the huge diversity in the titles makes it harder to label such sections. This could be the
143
- work of future versions of this dataset.
144
-
145
- ### Source Data
146
-
147
- #### Initial Data Collection and Normalization
148
-
149
- Data was obtained from:
150
- - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
151
- - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
152
- - ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
153
-
154
- Additional content for individual articles (graphics, media) can be obtained from:
155
- - ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
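- 
- A small sketch of how the full package location can be built (assuming one record of this dataset; only the "package_file" field is used):
- 
- ```python
- def package_url(record: dict) -> str:
-     """Return the full URL of the article's media/graphics package."""
-     base_url = "https://ftp.ncbi.nlm.nih.gov/pub/pmc/"
-     return base_url + record["package_file"]
- ```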
156
-
157
- #### Who are the source language producers?
158
-
159
- [Needs More Information]
160
-
161
- ### Annotations
162
-
163
- #### Annotation process
164
-
165
- [Needs More Information]
166
-
167
- #### Who are the annotators?
168
-
169
- [Needs More Information]
170
-
171
- ### Personal and Sensitive Information
172
-
173
- [Needs More Information]
174
-
175
- ## Considerations for Using the Data
176
-
177
- ### Social Impact of Dataset
178
-
179
- [Needs More Information]
180
-
181
- ### Discussion of Biases
182
-
183
- The article XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as
184
- well annotated as others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
185
- To illustrate that, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a
186
- future version.
187
-
188
- ### Other Known Limitations
189
-
190
- [Needs More Information]
191
-
192
- ### Preprocessing recommendations
193
-
194
- - Filter out empty contents.
195
- - Remove unwanted references from the text, and replace them either by the "references_text" or by the reference content itself.
196
- - Unescape HTML special characters: `import html; html.unescape(my_text)`
197
- - Remove superfluous line breaks in the text.
198
- - Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), or replace them by special tokens?
199
- - Join the items of the contents' lists.
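- 
- A minimal sketch combining the recommendations above (tag stripping and reference handling are deliberately simplistic; `paragraphs` stands for one of the list fields, e.g. the "introduction" of a record):
- 
- ```python
- import html
- import re
- 
- TAG = re.compile(r"</?[a-zA-Z][^>]*>")          # XML tags such as <italic>, <sup>, <sub>, ...
- MARKER = re.compile(r"##[A-Z]+##\d+##(.*?)##")  # reference markers, keep the original wording
- 
- def clean(paragraphs):
-     out = []
-     for p in paragraphs:
-         if not p.strip():
-             continue                        # filter out empty contents
-         p = html.unescape(p)                # unescape HTML special characters
-         p = MARKER.sub(r"\1", p)            # replace each reference marker by its original text
-         p = TAG.sub("", p)                  # remove the remaining XML tags
-         p = re.sub(r"\s*\n\s*", " ", p)     # remove superfluous line breaks
-         out.append(p.strip())
-     return "\n".join(out)                   # join the items of the content list
- ```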
200
-
201
- ## Additional Information
202
-
203
- ### Dataset Curators
204
-
205
- [Needs More Information]
206
-
207
- ### Licensing Information
208
-
209
- https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
210
-
211
- Within the PMC Open Access Subset, there are three groupings:
212
-
213
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
214
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
215
- Other - no machine-readable Creative Commons license, no license, or a custom license.
216
-
217
- ### Citation Information
218
-
219
- [Needs More Information]
 
non_commercial/partial-train/0000.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc9252ea998ed907ee5e2fc060ab55be88e32f2e78c9355c6c2e2aa3f7aef89a
3
+ size 212504220
non_commercial/partial-train/0001.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a24efca564e1b3573aa739bcba53f131c6a31e4683b2e38908d278c4cc85034c
3
+ size 224176755
non_commercial/partial-train/0002.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:af66b0afaf5fe278854c521bb794f98a1ba55487901e26fb83de0656792d8c1a
3
+ size 234243104
non_commercial/partial-train/0003.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d7c68c7bf75d74f53f3f79537842afd63dd185f7ecce4b3a6117d8db4e44e8a
3
+ size 229292455
non_commercial/partial-train/0004.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:115eb47159eb222b11a3867b9e618a420d4c9794b45e8fa6f5bbdf327209acf3
3
+ size 224353264
non_commercial/partial-train/0005.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:062fa332166cad5dfe72386ca18520affeeaac6a09e2d9e7f621f6362650757a
3
+ size 228341905
non_commercial/partial-train/0006.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38acb1fb3ac3948a9ba9641cf07485d50f6aebf33dc803197dd85f78ae0964b4
3
+ size 234984993
non_commercial/partial-train/0007.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e2626e95395462aa09cbfdc468e02979f56662cd4a503b6f68b5aac0c57bfd8
3
+ size 237959350
non_commercial/partial-train/0008.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:573ffcd4306e7b31b344512c3461f844dc4043934e023722c69fe23f1ed00dbc
3
+ size 230268880
non_commercial/partial-train/0009.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78627db101d0d58c01330aedf01c451933823bfe7df6e65d210f6e7696a93d77
3
+ size 173599458
pmc_open_access_xml.py DELETED
@@ -1,660 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- #
16
- # This dataset script is based on pmc/open_access.py loading script.
17
-
18
- """PMC Open Access Subset enriched from XML."""
19
-
20
- import datetime
21
- import pandas as pd
22
- import numpy as np
23
- from itertools import compress, chain
24
- from collections import defaultdict
25
- import re
26
- from lxml import etree
27
- import json
28
-
29
- import datasets
30
- from datasets.tasks import LanguageModeling
31
-
32
-
33
- # TODO: Add BibTeX citation
34
- # Find for instance the citation on arxiv or on the dataset repo/website
35
- _CITATION = ""
36
-
37
- _DESCRIPTION = """\
38
- The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
39
- license terms that allow reuse.
40
- Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
41
- in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
42
- liberal redistribution and reuse than a traditional copyrighted work.
43
- The PMC Open Access Subset is one part of the PMC Article Datasets
44
-
45
- This version takes XML version as source, benefiting from the structured text
46
- to split the articles in parts, naming the introduction, methods, results,
47
- discussion and conclusion, and refers with keywords in the text to external or internal
48
- resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
49
- """
50
-
51
- _HOMEPAGE = "https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/"
52
-
53
- # TODO: Add the licence for the dataset here if you can find it
54
- _LICENSE = """
55
- https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
56
-
57
- Within the PMC Open Access Subset, there are three groupings:
58
-
59
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
60
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
61
- Other - no machine-readable Creative Commons license, no license, or a custom license.
62
- """
63
-
64
- _URL_ROOT = "https://ftp.ncbi.nlm.nih.gov/pub/pmc/"
65
- _URL = _URL_ROOT+"oa_bulk/{subset}/xml/"
66
-
67
- _SUBSETS = {
68
- "commercial": "oa_comm",
69
- "non_commercial": "oa_noncomm",
70
- "other": "oa_other",
71
- }
72
- _BASELINE_DATE = "2022-11-18"
73
-
74
- REFS_KEYS = ["pmid_ref", "unknown_pub_ref", "figure_ref", "table_ref", "formula_ref", "box_ref", "code_ref",
75
- "quote_ref", "chemical_ref", "supplementary_ref", "footnote_ref", "graphic_ref", "media_ref"]
76
- CONTENT_KEYS = ["introduction", "methods", "results", "discussion", "conclusion",
77
- "front", "body", "back", "figure", "table", "formula", "box",
78
- "code", "quote", "chemical", "supplementary", "footnote"]
79
- begin_doc_rgx = re.compile("""<!DOCTYPE.*""")
80
- def clean_raw(xml_text):
81
- """
82
- Fixes the formatting of the XML files and returns it.
83
- Some have bad formatting but they can be fixed/improved
84
- """
85
- #Some XML can't be parsed because they are not starting with the DOCTYPE declaration
86
- # Could be disabled if we handle the parsing error (TBD, how many files would be trashed)
87
-
88
- begin_doc = begin_doc_rgx.search(xml_text)
89
- xml_text = xml_text[begin_doc.start():]
90
-
91
- #Some XML are poisoned with consecutive tabs and new lines
92
- # xml_text = re.sub('\s+',' ',xml_text) # Commented because <code> requires those spacing
93
- return xml_text
94
-
95
- # Tag name to "reference type" linking
96
- TAG_DIC = {"fig":("FIG","figure_ref"), "table-wrap":("TAB","table_ref"),
97
- "array":("TAB","table_ref"), "boxed-text":("BOX","box_ref"),
98
- "graphic":("GRAPH","graphic_ref"), "inline-graphic":("GRAPH","graphic_ref"),
99
- "media":("MEDIA","media_ref"), "inline-media":("MEDIA","media_ref"),
100
- "disp-formula":("FORMU","formula_ref"), "inline-formula":("FORMU","formula_ref"),
101
- "table-wrap-foot":("FOOTN","footnote_ref"), "fn-group":("FOOTN","footnote_ref"),
102
- "code":("CODE","code_ref"), "chem-struct-wrap":("CHEM","chemical_ref"),
103
- "disp-quote":("QUOTE","quote_ref"), "speech":("QUOTE","quote_ref"),
104
- "supplementary-material":("SUPPL","supplementary_ref"),
105
- "inline-supplementary-material":("SUPPL","supplementary_ref")}
106
-
107
- def get_ref_indexes(ref_el_l, refs_pmid, refs_nonpmid_keys):
108
- """
109
- For all the elements found as xref, give them an index to be later found in their corresponding section.
110
- Also sort them into the different types of references (eg <array> and <table-wrap> are both
111
- labeled as table_ref).
112
- """
113
- count_ref_d = defaultdict(lambda:0)
114
- reference_d = {}
115
- for k, v in refs_pmid.items():
116
- reference_d[k] = (v, "REF", "pmid_ref")
117
- for i, k in enumerate(refs_nonpmid_keys):
118
- reference_d[k] = (i, "UREF", "unknown_pub_ref")
119
-
120
- refs_key_l = []
121
- for el in ref_el_l:
122
- keyword, ref_name = TAG_DIC[el.tag]
123
- idx = count_ref_d[ref_name]
124
- key = el.attrib["id"] if "id" in el.attrib.keys() else f"{el.tag}{idx}"
125
- reference_d[key] = (idx, keyword, ref_name)
126
- refs_key_l.append(key)
127
- count_ref_d[ref_name]+=1
128
- return reference_d, refs_key_l
129
-
130
- def parseout_el_refs(el, rids):
131
- """
132
- Extract the text from the tag opening to its closing, discarding the tail's text.
133
- Removes xml namespace from the text for storage savings, such as:
134
- - xmlns:xlink="http://www.w3.org/1999/xlink"
135
- - xmlns:mml="http://www.w3.org/1998/Math/MathML"
136
-
137
- Then extract from the text all the references found in the rids dictionary,
138
- and replace them by keywords of the corresponding family (e.g. "##FIG##4##Doe 2022##" for a figure,
139
- "##TAB##0##Table 1##" for a table, or "##FORMU##1##(2)##" for mathematical formulas)
140
-
141
- Range references (e.g. 1-3 or 15-17) are expanded to the full range (1,2,3 or 15,16,17)
142
-
143
- Returns the parsed text
144
- """
145
- for xref in el.xpath(".//xref"):
146
- inner_text = "".join(xref.itertext())
147
- if inner_text == "": # Removing "empty" references
148
- tail = xref.tail if xref.tail else ""
149
- prev_el = xref.getprevious()
150
- parent = xref.getparent()
151
- if prev_el is None:
152
- parent.text = "".join([(parent.text if parent.text else ""), tail])
153
- else:
154
- prev_el.tail = "".join([(prev_el.tail if prev_el.tail else ""), tail])
155
- parent.remove(xref)
156
-
157
- res_rid = defaultdict(list)
158
- res_reftext = defaultdict(list)
159
- ref_rstart, ref_rstop = None, None
160
- has_ref_range = None
161
- for xref in el.xpath(".//xref[not(ancestor::xref)]"): #Keep only the outermost of nested references
162
- inner_text = "".join(xref.itertext())
163
- parent = xref.getparent()
164
- rid = xref.get("rid")
165
- if rid in rids.keys():
166
- ref_idx, ref_kword, ref_class = rids[rid]
167
- res_rid[ref_class].append(ref_idx)
168
- res_reftext[ref_class].append(inner_text)
169
-
170
- tail = xref.tail if xref.tail else ""
171
- #### START HANDLING REF RANGE ########
172
- try:
173
- if has_ref_range is None:
174
- if ref_kword in ["UREF", "REF"]: # Otherwise it's a year
175
- has_ref_range = res_reftext[ref_class][-1].isnumeric() and int(res_reftext[ref_class][-1]) < 500
176
-
177
- if has_ref_range and ref_kword in ["UREF", "REF"]:
178
- if tail=="-":
179
- ref_rstart = int(res_reftext[ref_class][-1])
180
- tail = ", "
181
- elif ref_rstart is not None:
182
- ref_rstop = int(res_reftext[ref_class][-1])
183
- new_ref_kwords = [f"##{ref_kword}##{ref_idx}##{inner_text}##"]
184
- for i in range(ref_rstart+1, ref_rstop):
185
- new_rid = re.sub(str(ref_rstop), str(i), rid, count=1)
186
- ref_idx_, ref_kword_, ref_class_ = rids[new_rid]
187
- res_rid[ref_class_].insert(-1, ref_idx_)
188
- res_reftext[ref_class_].insert(-1, str(i))
189
- new_ref_kwords.insert(-1, f"##{ref_kword_}##{ref_idx_}##{str(i)}##")
190
- ref_kword = ", ".join(new_ref_kwords)
191
- ref_rstart = None
192
- except (KeyError, ValueError):
193
- ref_rstart = None
194
- continue # The substitution failed; happens when the text doesn't match the rid
195
- #### END HANDLING REF RANGE ########
196
-
197
- prev_el = xref.getprevious()
198
- if prev_el is None:
199
- parent.text = "".join([(parent.text if parent.text else ""), f"##{ref_kword}##{ref_idx}##{inner_text}##", tail])
200
- else:
201
- prev_el.tail = "".join([(prev_el.tail if prev_el.tail else ""), f"##{ref_kword}##{ref_idx}##{inner_text}##", tail])
202
- parent.remove(xref)
203
-
204
- text = etree.tostring(el, with_tail=False, encoding='unicode', method='xml')
205
- #Removing the xml namespace, (otherwise they would be everywhere)
206
- tag_start = text.find(">")+1
207
- tag_txt = text[:tag_start]
208
-
209
- for k, v in el.nsmap.items():
210
- tag_txt = tag_txt.replace(f' xmlns:{k}="{v}"', "", 1)
211
-
212
- text = "".join([tag_txt, text[tag_start:]])
213
-
214
- return text
215
-
216
-
217
- def get_references(article_tree):
218
- """
219
- Retrieve the bibliographic references of that article, in two collections.
220
- The first is a dictionary of the references' PMIDs, for those having one.
221
- The second contains the <ref> tag fields, which could be used to identify and retrieve the
222
- referenced documents (some have a PMID that could be found from the title and authors of the document).
223
- """
224
- references_pmid = {}
225
- references_nonpmid = []
226
- references_nonpmid_keys = []
227
- refs = article_tree.find(".//ref-list")
228
- if refs is None: #Some don't have any references
229
- return {}, [], []
230
- refs = refs.findall("ref")
231
- for i, ref in enumerate(refs):
232
- pmid = None
233
- for pubid in ref.findall(".//pub-id"):
234
- if pubid.get("pub-id-type") == "pmid":
235
- pmid = int(pubid.text)
236
- break
237
- if pmid is not None and pmid<100000000:
238
- #In an article (oa_comm:PMC2679651), broken PMIDs were found (>10e9).
239
- #There may be several of those. Not sure what to do with them, nor what threshold to use
240
- #Keeping them would result in losing info about the reference (article title, authors, ...)
241
-
242
- #Only the PMID is kept, as it links to the documents in pubmed abstract dataset.
243
- references_pmid[ref.attrib["id"]] = str(pmid)
244
- else:
245
- ref_key = ref.attrib["id"] if "id" in ref.attrib.keys() else f"URef{i+1}"
246
- citation_d = defaultdict(list)
247
- #Authors are the only elements that can come in multiples (I could be wrong)
248
- for el in ref.iterdescendants():
249
- if isinstance(el.text, str) and isinstance(el.tag, str):
250
- citation_d[el.tag].append(el.text)
251
- references_nonpmid.append(dict(citation_d))
252
- references_nonpmid_keys.append(ref_key)
253
- return references_pmid, references_nonpmid, references_nonpmid_keys
254
-
255
- def construct_datadict(article_tree):
256
- """
257
- Where the magic happens. A long script that:
258
- - Gets the external references (from the PMID if present)
259
- - Gets the glossary and removes it from the document
260
- - Finds internal references (figures, tables, ...) and builds an xref dictionary
261
- - Extracts paragraphs and titles with their path in the document
262
- - Titles are used to identify ["introduction", "methods", "results" and "discussion"]
263
- - The paths are then used to group paragraphs and titles into the corresponding content.
264
- - Remaining p and title elements are put in three other sections: front, body, back
265
-
266
- Returns:
267
- - content_d: Dictionary with the content result (sections, internal-reference elements, glossary, unknown_pub)
268
- - reference_count: The count of unique external-document references.
271
-
272
- Useful information about the tags can be found here: https://jats.nlm.nih.gov/archiving/tag-library/1.3/
273
- """
274
- res_content_d = {}
275
-
276
- refs_pmid, refs_nonpmid, refs_nonpmid_keys = get_references(article_tree)
277
- reference_count = len(refs_pmid)+len(refs_nonpmid)
278
-
279
- res_content_d["unknown_pub"] = json.dumps(refs_nonpmid)
280
- refs_el = article_tree.find(".//ref-list")
281
- if refs_el is not None:
282
- refs_el.getparent().remove(refs_el)
283
-
284
- # Extracts the glossary if exists, and removes it from the tree
285
- glossary = {}
286
- def search_def(el):
287
- for item in el.findall(".//def-item"):
288
- abbrev = item.find(".//term")
289
- if abbrev is None:
290
- continue
291
- k = "".join(abbrev.itertext())
292
- definition = item.find(".//def")
293
- definition = "".join(definition.itertext()) if definition is not None else ""
294
- glossary[k] = definition
295
-
296
- for el in article_tree.findall(".//glossary"):
297
- search_def(el)
298
- el.getparent().remove(el)
299
- for el in article_tree.findall(".//def-list"):
300
- search_def(el) #There may be still more def-list outside of a glossary
301
- el.getparent().remove(el)
302
- res_content_d["glossary"] = glossary
303
-
304
- # After testing, no question were found in the dataset, so I commented that part
305
- # question_l = []
306
- # for el in article_tree.xpath(".//question-preamble|.//question|.//answer|.//explanation"):
307
- # text = parseout_el_refs(el, {})
308
- # question_l.append(text)
309
- # res_content_d["question"] = "\n".join(question_l)
310
- # for el in article_tree.xpath(".//question-wrap-group|.//question-wrap|.//answer-set|.//explanation"):
311
- # el.getparent().remove(el)
312
-
313
- # One big query is faster than multiple small ones
314
- ref_el_l = article_tree.xpath(".//fig|.//table-wrap|.//array|.//supplementary-material\
315
- |.//inline-supplementary-material|.//disp-formula\
316
- |.//inline-formula|.//graphic|.//inline-graphic\
317
- |.//media|.//inline-media|.//boxed-text\
318
- |.//table-wrap-foot|.//fn-group|.//chem-struct-wrap\
319
- |.//code|.//disp-quote|.//speech")
320
- rids, key_l = get_ref_indexes(ref_el_l, refs_pmid, refs_nonpmid_keys)
321
- text_l_d = defaultdict(list)
322
- for el, key in zip(ref_el_l[::-1], key_l[::-1]):
323
- #The iteration is done backward to always process the innermost reference first,
324
- # which makes the processing agnostic to differences in structure rules between articles
325
- new_text = parseout_el_refs(el, rids)
326
-
327
- ref_class = rids[key][2]
328
- text_l_d[ref_class].insert(0, new_text)
329
-
330
- repl_xref = etree.Element("xref", attrib={"rid":key})
331
- repl_xref.tail = el.tail
332
- el.addprevious(repl_xref)
333
- el.getparent().remove(el)
334
-
335
- # Finally, the discovered references and text are added to the result
336
- for ref_k in REFS_KEYS[2:]: #Slicing from 2, to not add pmid and unknown ref here
337
- res_content_d[ref_k[:-4]] = text_l_d[ref_k]#"\n".join(text_l_d[ref_k])
338
-
339
- path_l, text_l = [], []
340
- t_paths, t_texts_lowcase = [], []
341
- for part in ["front", "body", "back"]: #Iterate parts and insert first front and back
342
- tmp_path_l, tmp_text_l = [], []
343
- tmp_t_paths, tmp_t_texts_lowcase = [], []
344
- part_el = article_tree.find(".//"+part)
345
- if part_el is None:
346
- res_content_d[part] = []
347
- continue
348
- #Only the outermost p are kept, to prevent duplication.
349
- #Titles with a p inside have also been seen; not(ancestor::title) prevents duplication of that p
350
- for el in part_el.xpath(".//p[not(ancestor::p) and not(ancestor::title)]| .//title[not(ancestor::p) and not(ancestor::title)]"):
351
- new_text = parseout_el_refs(el, rids)
352
- tmp_path_l.append(article_tree.getelementpath(el))
353
- tmp_text_l.append(new_text)
354
- if el.tag=="title":
355
- tmp_t_paths.append(tmp_path_l[-1])
356
- tmp_t_texts_lowcase.append(new_text.lower())
357
- if part=="body": #We keep the body for processing right below.
358
- path_l, text_l = tmp_path_l, tmp_text_l
359
- t_paths, t_texts_lowcase = tmp_t_paths, tmp_t_texts_lowcase
360
- else:
361
- res_content_d[part] = tmp_text_l
362
-
363
- # Figuring from the titles which are the different categories
364
- mask_intro = np.array(["introduction" in t_text or "background" in t_text for t_text in t_texts_lowcase]).astype(bool)
365
- mask_metho = np.array(["method" in t_text for t_text in t_texts_lowcase]).astype(bool)
366
- mask_resul = np.array(["result" in t_text for t_text in t_texts_lowcase]).astype(bool)
367
- mask_discu = np.array(["discussion" in t_text for t_text in t_texts_lowcase]).astype(bool)
368
- mask_concl = np.array(["conclusion" in t_text for t_text in t_texts_lowcase]).astype(bool)
369
- processed_mask = np.zeros(len(text_l), dtype="bool")
370
- for mask, name_section in zip([mask_intro, mask_metho, mask_resul, mask_discu, mask_concl],
371
- ["introduction", "methods", "results", "discussion", "conclusion"]):
372
- if not np.any(mask):
373
- res_content_d[name_section] = []
374
- continue
375
-
376
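- # Take the shallowest title matching the keyword; its parent path is the section root, and every
- # paragraph or title whose path starts with that root is assigned to this section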
- filtered_path_l = list(compress(t_paths, mask))
377
- levels = np.array([len(path.split("/")) for path in filtered_path_l])
378
- root_path = filtered_path_l[np.argmin(levels)]
379
- root_path = root_path[:root_path.rindex("/")]
380
- mask_contents = np.array([path.startswith(root_path) for path in path_l]).astype(bool)
381
- processed_mask |= mask_contents
382
- res_content_d[name_section] = list(compress(text_l, mask_contents))
383
-
384
- processed_mask = ~processed_mask #Finally, add the body part as everything that doesn't belong to the previous categories
385
- res_content_d["body"] = list(compress(text_l, processed_mask))
386
-
387
- return (res_content_d, reference_count)
388
-
389
- class OpenAccessXMLConfig(datasets.BuilderConfig):
390
- """BuilderConfig for the PMC Open Access Subset."""
391
-
392
- def __init__(self, subsets=None, **kwargs):
393
- """BuilderConfig for the PMC Open Access Subset.
394
- Args:
395
- subsets (:obj:`List[str]`): List of subsets/groups to load.
396
- **kwargs: Keyword arguments forwarded to super.
397
- """
398
- subsets = [subsets] if isinstance(subsets, str) else subsets
399
- super().__init__(
400
- name="+".join(subsets), **kwargs,
401
- )
402
- self.subsets = subsets if self.name != "all" else list(_SUBSETS.keys())
403
-
404
-
405
- class OpenAccessXML(datasets.GeneratorBasedBuilder):
406
- """PMC Open Access Subset enriched from XML files."""
407
-
408
- VERSION = datasets.Version("1.0.0")
409
- BUILDER_CONFIG_CLASS = OpenAccessXMLConfig
410
- BUILDER_CONFIGS = [OpenAccessXMLConfig(subsets="all")] + [OpenAccessXMLConfig(subsets=subset) for subset in _SUBSETS]
411
- DEFAULT_CONFIG_NAME = "all"
412
-
413
- def _info(self):
414
- return datasets.DatasetInfo(
415
- description=_DESCRIPTION,
416
- features=datasets.Features(
417
- {
418
- "accession_id": datasets.Value("string"),
419
- "pmid": datasets.Value("string"),
420
-
421
- "introduction": datasets.features.Sequence(datasets.Value("string")),
422
- "methods": datasets.features.Sequence(datasets.Value("string")),
423
- "results": datasets.features.Sequence(datasets.Value("string")),
424
- "discussion": datasets.features.Sequence(datasets.Value("string")),
425
- "conclusion": datasets.features.Sequence(datasets.Value("string")),
426
-
427
- "front": datasets.features.Sequence(datasets.Value("string")),
428
- "body": datasets.features.Sequence(datasets.Value("string")),
429
- "back": datasets.features.Sequence(datasets.Value("string")),
430
-
431
- "figure": datasets.features.Sequence(datasets.Value("string")),
432
- "table": datasets.features.Sequence(datasets.Value("string")),
433
- "formula": datasets.features.Sequence(datasets.Value("string")),
434
- "box": datasets.features.Sequence(datasets.Value("string")),
435
- "code": datasets.features.Sequence(datasets.Value("string")),
436
- "quote": datasets.features.Sequence(datasets.Value("string")),
437
- "chemical": datasets.features.Sequence(datasets.Value("string")),
438
- "supplementary": datasets.features.Sequence(datasets.Value("string")),
439
- "footnote": datasets.features.Sequence(datasets.Value("string")),
440
- "graphic": datasets.features.Sequence(datasets.Value("string")),
441
- "media": datasets.features.Sequence(datasets.Value("string")),
442
-
443
- "unknown_pub": datasets.Value("string"),
444
- # "question": datasets.Value("string"),
445
- "glossary": datasets.features.Sequence(
446
- {"acronym": datasets.Value("string"), "definition": datasets.Value("string")}
447
- ),
448
- "n_references": datasets.Value("int32"),
449
- "license": datasets.Value("string"),
450
- "retracted": datasets.Value("string"),
451
- "last_updated": datasets.Value("string"),
452
- "citation": datasets.Value("string"),
453
- "package_file": datasets.Value("string"),
454
- }
455
- ),
456
- homepage=_HOMEPAGE,
457
- license=_LICENSE,
458
- citation=_CITATION,
459
- task_templates=[LanguageModeling(text_column="content")],
460
- )
461
-
462
- def _split_generators(self, dl_manager):
463
-
464
- incremental_paths = {
465
- "incremental_file_lists": [],
466
- "incremental_archives": []
467
- }
468
-
469
- baseline_package_list = dl_manager.download(f"{_URL_ROOT}oa_file_list.csv")
470
-
471
- baseline_file_lists = []
472
- baseline_archives = []
473
- for subset in self.config.subsets:
474
- url = _URL.format(subset=_SUBSETS[subset])
475
- basename = f"{_SUBSETS[subset]}_xml."
476
- # Baselines
477
- baselines = [f"PMC00{i}xxxxxx.baseline.{_BASELINE_DATE}" for i in range(9)]
478
-
479
- for baseline in baselines:
480
- baseline_file_list_url = f"{url}{basename}{baseline}.filelist.csv"
481
- baseline_archive_url = f"{url}{basename}{baseline}.tar.gz"
482
- try:
483
- baseline_file_list = dl_manager.download(baseline_file_list_url)
484
- baseline_archive = dl_manager.download(baseline_archive_url)
485
- except FileNotFoundError: # non-commercial PMC000xxxxxx baseline does not exist
486
- continue
487
-
488
- baseline_file_lists.append(baseline_file_list)
489
- baseline_archives.append(baseline_archive)
490
-
491
- baseline_file_list_url = f"{url}{basename}{baseline}.filelist.csv"
492
-
493
- # Incremental commented because some articles are already in the main parts (updates?)
494
- # Need to find a way to add them to the dataset without duplicating the articles.
495
- # Also adding them would mean that each new day the dataset is loaded, the whole dataset is recreated.
496
- date_delta = datetime.date.today() - datetime.date.fromisoformat(_BASELINE_DATE)
497
- incremental_dates = [
498
- (datetime.date.fromisoformat(_BASELINE_DATE) + datetime.timedelta(days=i + 1)).isoformat()
499
- for i in range(date_delta.days)
500
- ]
501
- incrementals = [f"incr.{date}" for date in incremental_dates]
502
- for incremental in incrementals:
503
- incremental_file_list_url = f"{url}{basename}{incremental}.filelist.csv"
504
- incremental_archive_url = f"{url}{basename}{incremental}.tar.gz"
505
- try:
506
- incremental_file_list = dl_manager.download(incremental_file_list_url)
507
- incremental_archive = dl_manager.download(incremental_archive_url)
508
- except FileNotFoundError: # Some increment might not exist
509
- continue
510
- incremental_paths["incremental_file_lists"].append(incremental_file_list)
511
- incremental_paths["incremental_archives"].append(incremental_archive)
512
-
513
- return [
514
- datasets.SplitGenerator(
515
- name=datasets.Split.TRAIN,
516
- gen_kwargs={
517
- "baseline_file_lists": baseline_file_lists,
518
- "baseline_archives": [dl_manager.iter_archive(archive) for archive in baseline_archives],
519
- "baseline_package_list": baseline_package_list,
520
- "incremental_file_lists": incremental_paths["incremental_file_lists"],
521
- "incremental_archives": [dl_manager.iter_archive(archive) for archive in incremental_paths["incremental_archives"]],
522
- },
523
- ),
524
- ]
525
-
526
- def _generate_examples(self, baseline_file_lists, baseline_archives, baseline_package_list, incremental_file_lists, incremental_archives):
527
- #Load the file listing the folders of individual PMC article packages (with media and graphics)
528
- oa_package_list = pd.read_csv(baseline_package_list, index_col="Accession ID")
529
- oa_package_list = oa_package_list[["File"]]
530
- oa_package_list.sort_index(inplace=True)
531
- processed_ids = set()
532
-
533
- # Incrementals
534
- if incremental_file_lists:
535
- for incremental_file_list, incremental_archive in zip(incremental_file_lists[::-1], incremental_archives[::-1]):
536
- try:
537
- incrementals = pd.read_csv(incremental_file_list, index_col="AccessionID")
538
- except FileNotFoundError: # File not found can happen here in stream mode
539
- continue
540
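- # Join with the package list so each archive member path ("Article File") maps to its metadata
- # row (PMID, license, retraction status, package folder, ...)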
- incrementals = incrementals.join(oa_package_list).reset_index().set_index("Article File")
541
- incrementals.File = incrementals.File.fillna('')
542
- incrementals = incrementals.to_dict(orient="index")
543
-
544
- for path, file in incremental_archive:
545
- data = incrementals.pop(path)
546
- pmcid = data["AccessionID"]
547
- if pmcid in processed_ids: #oa_package_list.loc[pmcid, "yet_processed"]:
548
- continue
549
- content = file.read()
550
- try:
551
- text = content.decode("utf-8").strip()
552
- except UnicodeDecodeError as e:
553
- text = content.decode("latin-1").strip()
554
- text = clean_raw(text)
555
- try:
556
- article_tree = etree.ElementTree(etree.fromstring(text))
557
- except etree.XMLSyntaxError: #In some files, xml is broken
558
- continue
559
-
560
- content_d, n_ref = construct_datadict(article_tree)
561
- glossary = np.array([[k,v] for k,v in content_d["glossary"].items()])
562
- data = {
563
- "introduction": content_d["introduction"],
564
- "methods": content_d["methods"],
565
- "results": content_d["results"],
566
- "discussion": content_d["discussion"],
567
- "conclusion": content_d["conclusion"],
568
- "front": content_d["front"],
569
- "body": content_d["body"],
570
- "back": content_d["back"],
571
- "figure": content_d["figure"],
572
- "table": content_d["table"],
573
- "formula": content_d["formula"],
574
- "box": content_d["box"],
575
- "code": content_d["code"],
576
- "quote": content_d["quote"],
577
- "chemical": content_d["chemical"],
578
- "supplementary": content_d["supplementary"],
579
- "footnote": content_d["footnote"],
580
- "graphic": content_d["graphic"],
581
- "media": content_d["media"],
582
- # "question": content_d["question"],
583
- "unknown_pub": content_d["unknown_pub"],
584
- "glossary": {"acronym":glossary[:,0], "definition":glossary[:,1]} if len(glossary)>0 else {"acronym":[], "definition":[]},
585
- "n_references": n_ref,
586
- "pmid": data["PMID"],
587
- "accession_id": pmcid,
588
- "license": data["License"],
589
- "last_updated": data["LastUpdated (YYYY-MM-DD HH:MM:SS)"],
590
- "retracted": data["Retracted"],
591
- "citation": data["Article Citation"],
592
- "package_file": data["File"],
593
- }
594
- processed_ids.add(pmcid)
595
- yield pmcid, data
596
-
597
- # Baselines
598
- for baseline_file_list, baseline_archive in zip(baseline_file_lists, baseline_archives):
599
-
600
- #try:
601
- baselines = pd.read_csv(baseline_file_list, index_col="AccessionID")
602
- baselines = baselines.join(oa_package_list).reset_index().set_index("Article File")
603
- baselines.File = baselines.File.fillna('')
604
- baselines = baselines.to_dict(orient="index")
605
-
606
- for path, file in baseline_archive:
607
- data = baselines.pop(path)
608
- pmcid = data["AccessionID"]
609
- if pmcid in processed_ids:
610
- continue
611
- content = file.read()
612
- try:
613
- text = content.decode("utf-8").strip()
614
- except UnicodeDecodeError as e:
615
- text = content.decode("latin-1").strip()
616
- text = clean_raw(text)
617
- try:
618
- article_tree = etree.ElementTree(etree.fromstring(text))
619
- except etree.XMLSyntaxError: #In some files, xml is broken
620
- continue
621
-
622
- content_d, n_ref = construct_datadict(article_tree)
623
- glossary = np.array([[k,v] for k,v in content_d["glossary"].items()])
624
- data = {
625
- "introduction": content_d["introduction"],
626
- "methods": content_d["methods"],
627
- "results": content_d["results"],
628
- "discussion": content_d["discussion"],
629
- "conclusion": content_d["conclusion"],
630
- "front": content_d["front"],
631
- "body": content_d["body"],
632
- "back": content_d["back"],
633
- "figure": content_d["figure"],
634
- "table": content_d["table"],
635
- "formula": content_d["formula"],
636
- "box": content_d["box"],
637
- "code": content_d["code"],
638
- "quote": content_d["quote"],
639
- "chemical": content_d["chemical"],
640
- "supplementary": content_d["supplementary"],
641
- "footnote": content_d["footnote"],
642
- "graphic": content_d["graphic"],
643
- "media": content_d["media"],
644
- # "question": content_d["question"],
645
- "unknown_pub": content_d["unknown_pub"],
646
- "glossary": {"acronym":glossary[:,0], "definition":glossary[:,1]} if len(glossary)>0 else {"acronym":[], "definition":[]},
647
- "n_references": n_ref,
648
- "pmid": data["PMID"],
649
- "accession_id": pmcid,
650
- "license": data["License"],
651
- "last_updated": data["LastUpdated (YYYY-MM-DD HH:MM:SS)"],
652
- "retracted": data["Retracted"],
653
- "citation": data["Article Citation"],
654
- "package_file": data["File"],
655
- }
656
- processed_ids.add(pmcid)
657
- yield pmcid, data
658
-
659
- #except FileNotFoundError: # non-commercial PMC000xxxxxx baseline does not exist
660
- # continue