maier-s committed on
Commit
d86d5c0
1 Parent(s): ade7110

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +79 -94
  2. SETH-test.iob +0 -0
  3. SETH-train.iob +0 -0
  4. explorationFile.ipynb +0 -0
  5. seth.py +158 -0
README.md CHANGED
@@ -1,96 +1,81 @@
  ---
- license: apache-2.0
- task_categories:
- - token-classification
- language:
- - en
  ---
- # Dataset Card for SETH Dataset
-
- <!-- Provide a quick summary of the dataset. -->
-
- This dataset is used to apply DistilBERT to an NER task in the Advanced Machine Learning and XAI course of the DHBW CAS in Heilbronn.
-
- ## Dataset Details
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
- The dataset is based on the data provided in the [GitHub repository](https://github.com/Erechtheus/mutationCorpora).
-
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [Source of the dataset](https://github.com/Erechtheus/mutationCorpora/tree/master/corpora/IOB)
- - **Information about the dataset:** [Dataset information](https://rockt.github.io/SETH/)
-
- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->
- Used for the Advanced Machine Learning and XAI course of the DHBW CAS in Heilbronn.
-
- ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
- tbd
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- tbd
- [More Information Needed]
-
- ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- tbd
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
- Easy loading for further use when executing the training.
-
- [More Information Needed]
-
- ### Source Data
-
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
- tbd
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
- tbd
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
- The source data is produced as described in the [GitHub repository](https://github.com/Erechtheus/mutationCorpora).
- [More Information Needed]
-
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- tbd
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
- tbd
-
- **BibTeX:**
-
  ---
+ dataset_info:
+ - config_name: Seth
+   features:
+   - name: id
+     dtype: int32
+   - name: tokens
+     sequence: string
+   - name: labels
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-Gene
+           '2': B-SNP
+           '3': I-SNP
+           '4': I-Gene
+           '5': B-RS
+           '6': I-RS
+   splits:
+   - name: train
+     num_bytes: 1812838
+     num_examples: 504
+   - name: test
+     num_bytes: 438476
+     num_examples: 126
+   download_size: 0
+   dataset_size: 2251314
+ - config_name: Seth2003
+   features:
+   - name: id
+     dtype: int32
+   - name: tokens
+     sequence: string
+   - name: labels
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-Gene
+           '2': B-SNP
+           '3': I-SNP
+           '4': I-Gene
+           '5': B-RS
+           '6': I-RS
+   splits:
+   - name: train
+     num_bytes: 1812838
+     num_examples: 504
+   - name: test
+     num_bytes: 438476
+     num_examples: 126
+   download_size: 0
+   dataset_size: 2251314
+ - config_name: conll2003
+   features:
+   - name: id
+     dtype: int32
+   - name: tokens
+     sequence: string
+   - name: labels
+     sequence:
+       class_label:
+         names:
+           '0': O
+           '1': B-Gene
+           '2': B-SNP
+           '3': I-SNP
+           '4': I-Gene
+           '5': B-RS
+           '6': I-RS
+   splits:
+   - name: train
+     num_bytes: 1812838
+     num_examples: 504
+   - name: test
+     num_bytes: 438476
+     num_examples: 126
+   download_size: 0
+   dataset_size: 2251314
  ---
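
For reference, a minimal sketch of loading one of these configs and inspecting the label scheme. The repository id `maier-s/SETH` is an assumption based on the committer name, and recent versions of `datasets` additionally require `trust_remote_code=True` for script-backed datasets:

```python
from datasets import load_dataset

# Hypothetical repository id -- substitute the actual "<user>/<dataset>" path.
ds = load_dataset("maier-s/SETH", name="Seth", trust_remote_code=True)

print(ds)  # DatasetDict with train (504 examples) and test (126 examples)

# The ClassLabel feature carries the 7 label names declared above.
labels = ds["train"].features["labels"].feature
print(labels.names)       # ['O', 'B-Gene', 'B-SNP', 'I-SNP', 'I-Gene', 'B-RS', 'I-RS']
print(labels.int2str(2))  # 'B-SNP'
```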
SETH-test.iob ADDED
The diff for this file is too large to render. See raw diff
 
SETH-train.iob ADDED
The diff for this file is too large to render. See raw diff
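
The raw diffs of the two .iob files are not rendered, but the format they must follow can be read off the parser in seth.py below: a header line (skipped), document markers of the form `#<id>`, one comma-separated `token,label` pair per line, a comma token written as `,,<label>`, and a bare ` , ` line (note the surrounding spaces) as sentence boundary. A hypothetical snippet consistent with that logic, not taken from the actual files:

```text
token,label
#12345678
The,O
BRCA1,B-Gene
mutation,O
 , 
```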
 
explorationFile.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
seth.py ADDED
@@ -0,0 +1,158 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """SETH corpus: named entity recognition of genes and genetic variants (SNPs, RS identifiers)."""
+
+ import re
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @Article{SETH2016,
+     Title = {SETH detects and normalizes genetic variants in text.},
+     Author = {Thomas, Philippe and Rockt{\"{a}}schel, Tim and Hakenberg, J{\"{o}}rg and Lichtblau, Yvonne and Leser, Ulf},
+     Journal = {Bioinformatics},
+     Year = {2016},
+     Month = {Jun},
+     Doi = {10.1093/bioinformatics/btw234},
+     Language = {eng},
+     Medline-pst = {aheadofprint},
+     Pmid = {27256315},
+     Url = {http://dx.doi.org/10.1093/bioinformatics/btw234}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset is used for the Advanced Machine Learning and XAI course of the DHBW CAS in Heilbronn.
+ """
+
+
+ class SethConfig(datasets.BuilderConfig):
+     """BuilderConfig for the Seth dataset."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for Seth.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(SethConfig, self).__init__(**kwargs)
+
+
+ class Seth(datasets.GeneratorBasedBuilder):
+     """Seth dataset."""
+
+     BUILDER_CONFIGS = [
+         SethConfig(name="Seth", version=datasets.Version("1.0.0"), description="Seth dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("int32"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "labels": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-Gene",
+                                 "B-SNP",
+                                 "I-SNP",
+                                 "I-Gene",
+                                 "B-RS",
+                                 "I-RS",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://rockt.github.io/SETH/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # downloaded_file = dl_manager.download_and_extract(_URL)
+         data_files = {
+             "train": "./SETH-train.iob",
+             "test": "./SETH-test.iob",
+         }
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             guid = 0
+             document = {"id": None,
+                         "tokens": [],
+                         "labels": []
+                         }
+             documents = []  # Collected documents from the file, each with "tokens" and "labels" keys (kept but not otherwise used)
+             pattern = r"#\d+"  # Regular expression to detect a document ID
+             for idx, line in enumerate(f):
+                 match = re.match(pattern, line)
+                 # Skip the first line, which is a header
+                 if idx == 0:
+                     continue
+                 # This line carries the document ID
+                 if match:
+                     if document["id"] is not None:
+                         # Save the previous document before starting a new one
+                         documents.append(document)
+                         yield guid, document
+                         guid += 1
+                         document = {"id": int(line[1:]),  # Store only the number, without the leading "#"
+                                     "tokens": [],
+                                     "labels": []
+                                     }
+                     else:
+                         # Initialize a new document
+                         document = {"id": int(line[1:]),  # Store only the number, without the leading "#"
+                                     "tokens": [],
+                                     "labels": []
+                                     }
+                 # Handle the special cases
+                 elif len(line.split(",")) > 2:
+                     # Special case 1: ",,Label" -> the token itself is a comma
+                     if line.split(",")[0] == "" and line.split(",")[1] == "":
+                         document["tokens"].append(",")
+                         document["labels"].append(line.split(",")[2].split("\n")[0])
+                     # Special case 2: "Text,Text,Text,Label" -> the label always comes last
+                     else:
+                         document["tokens"].append(",".join(line.split(",")[0:-1]))  # Rejoin the splits without the label
+                         document["labels"].append(line.split(",")[-1].split("\n")[0])
+                 # Otherwise assume the default case: word and tag
+                 else:
+                     word_tag = line.split(",")
+                     # Detect the end of a sentence, marked by a " , " line
+                     if word_tag[0] == " " and word_tag[1] == " \n":
+                         continue
+                     document["tokens"].append(word_tag[0])
+                     document["labels"].append(word_tag[1].split("\n")[0])
+             documents.append(document)
+             yield guid, document
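
Note that `_generate_examples` yields the labels as strings; the `ClassLabel` feature declared in `_info()` is what encodes them to integer ids when the dataset is built. A quick sketch of that mapping:

```python
import datasets

# The same 7 names declared in _info(); str2int/int2str is the encoding
# applied to the yielded label strings when examples are written.
labels = datasets.features.ClassLabel(
    names=["O", "B-Gene", "B-SNP", "I-SNP", "I-Gene", "B-RS", "I-RS"]
)
print(labels.str2int("B-SNP"))  # 2
print(labels.int2str(2))        # 'B-SNP'
```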