Upload 6 files
wikipedia_datasets/README.md
ADDED
@@ -0,0 +1,32 @@
Link: https://dbpedia.org/sparql/

## Query en

Diseases in en: 16643

```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT (COUNT(?disease) AS ?count)
WHERE {
  ?disease rdf:type dbo:Disease ;
           rdfs:label ?diseaseLabel ;
           dbo:abstract ?abstract .
  FILTER(LANG(?diseaseLabel) = 'en' && LANG(?abstract) = 'en')
}
```

## Query es

Diseases in es: 136183

```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT (COUNT(?disease) AS ?count)
WHERE {
  ?disease rdf:type dbo:Disease ;
           rdfs:label ?diseaseLabel ;
           dbo:abstract ?abstract .
  FILTER(LANG(?diseaseLabel) = 'es' && LANG(?abstract) = 'es')
}
```
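Either query can be run programmatically; a minimal sketch mirroring what `db_pedia.ipynb` in this folder does with SPARQLWrapper (the printed value is the `?count` binding):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia endpoint, as linked above
sparql = SPARQLWrapper("https://dbpedia.org/sparql")

COUNT_QUERY = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT (COUNT(?disease) AS ?count)
WHERE {
  ?disease rdf:type dbo:Disease ;
           rdfs:label ?diseaseLabel ;
           dbo:abstract ?abstract .
  FILTER(LANG(?diseaseLabel) = 'es' && LANG(?abstract) = 'es')
}
"""

sparql.setQuery(COUNT_QUERY)
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
print(result["results"]["bindings"][0]["count"]["value"])  # e.g. 136183
```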
wikipedia_datasets/alvaro-analysis.md
ADDED
@@ -0,0 +1,180 @@
# Spanish Medical LLM

## Similar Spanish LLMs

“Sequence-to-Sequence Spanish Pre-trained Language Models”

**Pretraining data:**

- OSCAR 21.09 corpus, which includes a deduplicated Spanish dataset of approximately 160GB of text
- mC4-es corpus (Xue et al., 2021), an extensive 500GB of text
- Spanish Wikipedia dump, text from diverse sources, 10GB
- The corpora from BSC, but those models are BERT-style.

**Fine-tuning data:**

- MLQA (Lewis et al., 2019) and SQAC (Gutiérrez-Fandiño et al., 2021) datasets for this evaluation.
- MLQA presents a collection of parallel multilingual articles extracted from Wikipedia and offers a development set and test set professionally translated into Spanish.
- SQAC was created exclusively for Spanish evaluation and contains articles extracted purely from Spanish sources.

## Meditron

Meditron is an open-source suite of medical Large Language Models (LLMs). It includes Meditron-7B and Meditron-70B, both adapted to the medical domain through pretraining on a curated medical corpus. Meditron-70B, when finetuned on relevant data, surpasses Llama-2-70B, GPT-3.5, and Flan-PaLM in various medical reasoning tasks.

The code is designed to operate in a distributed environment, with certain sections serving as a starting point. The evaluation specifically focuses on English language capabilities.

# Evaluation

Fine-tuning a Large Language Model (LLM) in the medical context is a sophisticated task that requires careful consideration of model architecture, data, and ethical considerations. The approach you choose will depend on the specific tasks you wish to perform with your model, such as information extraction, patient triage, generating medical reports, or answering medical questions. Downstream evaluation metrics differ based on the transformer architecture.

### 1. Encoder-Only (BERT-like Models)

### Use Cases:

- **Named Entity Recognition (NER):** Identifying medical terms, medication names, and other specific entities in text.
- **Sentiment Analysis:** Classification of text.

### Approach:

- **Data Preparation:** Collect a diverse dataset of medical texts.
- **Preprocessing:** Normalize the medical texts (e.g., lowercasing, removing special characters) and annotate your data for the specific task, if necessary.
- **Fine-Tuning:** Use a pre-trained BERT model and fine-tune it on your medical dataset. You may need to adjust the model's architecture slightly depending on your task, such as adding a classification layer for NER or classification tasks (see the sketch below).
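A minimal fine-tuning sketch for the encoder/NER case with Hugging Face `transformers`. It assumes the `PlanTL-GOB-ES/pharmaconer` dataset (already used by this repo's scripts) exposes `tokens`/`ner_tags` columns and a `validation` split; the Spanish BERT checkpoint and hyperparameters are illustrative choices, not a prescription:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

# PharmaCoNER: Spanish medical NER corpus referenced elsewhere in this repo
dataset = load_dataset("PlanTL-GOB-ES/pharmaconer")
label_list = dataset["train"].features["ner_tags"].feature.names

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # assumption: any Spanish BERT fits here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(label_list))

def tokenize_and_align(batch):
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        # Special tokens get -100 (ignored by the loss); sub-tokens inherit their word's tag
        labels.append([-100 if w is None else tags[w] for w in word_ids])
    tokenized["labels"] = labels
    return tokenized

tokenized_ds = dataset.map(tokenize_and_align, batched=True,
                           remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("ner-es-medico", num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=tokenized_ds["train"],
    eval_dataset=tokenized_ds["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```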
### 2. Decoder-Only (GPT-like Models)

### Use Cases:

- **Generating Medical Text:** Generating discharge summaries, patient instructions, or creating medical content.
- **Question Answering:** Providing answers to medical questions based on a large corpus of medical knowledge.
- **Dialogue Systems:** Powering conversational agents for patient engagement or support.

### Approach:

- **Data Preparation:** Assemble a large corpus of medical texts, including dialogues (if available), Q&A pairs, and general medical information.
- **Preprocessing:** Similar to the BERT approach, but ensure the texts are suitable for generative tasks.
- **Fine-Tuning:** Use a pre-trained GPT model and fine-tune it on your dataset. You may experiment with different prompts and fine-tuning strategies to improve performance on generative tasks (see the sketch below).
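For the decoder case, a minimal causal-LM fine-tuning sketch. `DeepESP/gpt2-spanish-medium` is the checkpoint this repo's scripts already tokenize with, and the JSONL path matches the corpus that `process_corpus.py` writes; hyperparameters are illustrative:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "DeepESP/gpt2-spanish-medium"  # same checkpoint the repo scripts tokenize with
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# JSONL corpus produced by process_corpus.py
corpus = load_dataset("json", data_files="dataset/spanish_medical_llms.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["raw_text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=corpus.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("gpt2-es-medico", num_train_epochs=1,
                           per_device_train_batch_size=4, gradient_accumulation_steps=8),
    train_dataset=tokenized,
    # mlm=False -> plain next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```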
### 3. Encoder-Decoder (T5, BART-like Models)

### Use Cases:

- **Translation:** Translating medical documents between languages.
- **Summarization:** Generating concise summaries of lengthy medical texts or patient histories.
- **Question Answering:** Especially for complex queries that require understanding and synthesizing information from multiple sources.

### Approach:

- **Data Preparation:** Collect a dataset that suits your specific task, such as parallel corpora for translation or long texts with summaries.
- **Preprocessing:** Prepare and clean your data, ensuring that it is in a format suitable for both encoding and decoding tasks.
- **Fine-Tuning:** Use a pre-trained model like T5 or BART and fine-tune it on your specific dataset. Tailor the input and output formats to match your task, such as prefixing inputs with "translate English to French" for translation tasks (see the sketch below).
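The task-prefix idea in one inference sketch; `t5-small` was pre-trained with exactly this kind of prefix, while a Spanish medical summarizer would instead be fine-tuned on text/summary pairs using the same input/output pattern:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# t5-small understands prefixes such as "translate English to French: ..."
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "translate English to French: The patient was discharged with oral antibiotics."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# The same pattern applies to fine-tuning: feed "summarize: <long text>" as input
# and the reference summary as the label, then train with Seq2SeqTrainer.
```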
### Ethical and Fair Approach Considerations:

- **Bias and Fairness:** Be aware of and actively mitigate biases in your dataset and model. This includes biases related to gender, ethnicity, and age.
- **Data Privacy:** Ensure that the data used for training and fine-tuning respects patient confidentiality and complies with regulations.
- **Model Transparency:** Document the data sources, model decisions, and any limitations of your model.

## Tools

### Medplexity

Medplexity is a framework designed to explore the capabilities of LLMs in the medical domain. It achieves this by providing interfaces and collections of common benchmarks, LLMs, and prompts.

Medplexity automatically creates the prompts for QA evaluation. The prompts are in English, and the samples use the OpenAI API. For the Spanish case, I believe we should create our own framework.

### OLMo

Used inside Catwalk and Tango.

[https://github.com/allenai/OLMo-Eval](https://github.com/allenai/OLMo-Eval)

### Catwalk

[https://github.com/allenai/catwalk](https://github.com/allenai/catwalk)

### Tango

AI2 Tango replaces messy directories and spreadsheets full of file versions by organizing experiments into discrete steps that can be cached and reused throughout the lifetime of a research project.

[https://github.com/allenai/tango](https://github.com/allenai/tango)

### DeepEval

**DeepEval** is a simple-to-use, open-source LLM evaluation framework. It is similar to Pytest but specialized for unit testing LLM outputs. DeepEval incorporates the latest research to evaluate LLM outputs based on metrics such as hallucination, answer relevancy, RAGAS, etc., using LLMs and various other NLP models that run **locally on your machine** for evaluation.

It can evaluate ***locally deployed*** models with these metrics (a usage sketch follows the list):

- **G-Eval**: General performance evaluation across multiple tasks or criteria.
- **Summarization**: Assessing the ability to create concise and relevant summaries.
- **Answer Relevancy**: Measuring the relevance and accuracy of model responses to prompts.
- **Faithfulness**: Ensuring output accuracy and fidelity to the source material or input data.
- **Contextual Recall**: Evaluating the model's use of relevant information from the given context.
- **Contextual Precision**: Assessing the specificity and accuracy of the model's output in relation to the task context.
- **RAGAS**: Evaluating retrieval-augmented generation models for effective information use and text generation.
- **Hallucination**: Identifying instances where the model generates unsupported or false information.
- **Toxicity**: Measuring the propensity to produce harmful or inappropriate content.
- **Bias**: Assessing the presence of unfair prejudices or stereotypes in model outputs.

[https://github.com/confident-ai/deepeval](https://github.com/confident-ai/deepeval)
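A minimal pytest-style test in the shape DeepEval's documentation shows; the threshold and example strings are placeholders, and exact class names may vary across DeepEval versions:

```python
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_medical_answer_relevancy():
    # actual_output would come from the model under test
    test_case = LLMTestCase(
        input="¿Cuáles son los síntomas de la candidiasis?",
        actual_output="Los síntomas incluyen picazón, enrojecimiento y flujo anormal.",
    )
    # Fails the test if relevancy falls below the threshold
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

A file like this is typically executed with DeepEval's test runner (`deepeval test run <file>.py`).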
## Based on the task

Language is very complex, and the best evaluation is always a manual human one. However, other options exist:

**Question Answering (QA):**

- **Accuracy:** The percentage of questions answered correctly. This is a good starting point, but doesn't capture nuances.
- **F1 Score:** Combines precision (ratio of correct answers to retrieved answers) and recall (ratio of correct answers to all possible answers) into a single metric.
- **Mean Reciprocal Rank (MRR):** The average inverse of the rank of the first correct answer (a small computation sketch follows).
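A small self-contained sketch of the two metrics above, in the token-overlap style SQuAD-like QA evaluation uses; the example data is made up:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, as in SQuAD-style QA evaluation."""
    pred, ref = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def mrr(ranks_of_first_correct: list) -> float:
    """Mean Reciprocal Rank: average of 1/rank of the first correct answer."""
    return sum(1.0 / r for r in ranks_of_first_correct) / len(ranks_of_first_correct)

print(token_f1("infección por hongos", "infección micótica por hongos"))  # ~0.86
print(mrr([1, 3, 2]))  # (1 + 1/3 + 1/2) / 3 ~ 0.61
```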
**Sentiment Analysis:**

- **Accuracy:** The percentage of correctly classified sentiment (positive, negative, or neutral).
- **Precision, Recall, and F1 Score:** Similar to QA, but for each sentiment class.
- **Error Analysis:** Examining misclassified examples to identify areas for improvement.

**Text Summarization:**

- **ROUGE Score:** Measures overlap in n-grams (sequences of n words) between the generated summary and reference summaries.
- **BLEU Score:** Similar to ROUGE, but with additional factors like brevity penalty (both are computable with the `evaluate` library, as sketched below).
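A short sketch with Hugging Face's `evaluate` package (`pip install evaluate rouge_score sacrebleu`); the summary strings are placeholders:

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("sacrebleu")

predictions = ["El paciente fue dado de alta con antibióticos orales."]
references = [["El paciente recibió el alta con antibióticos por vía oral."]]

# ROUGE takes one reference string per prediction; sacreBLEU takes a list of references
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))
print(bleu.compute(predictions=predictions, references=references))
```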
**Text Generation (Depends on the specific task):**

- **Grammatical Correctness and Fluency:** How well-formed and natural the generated text reads.
- **Creativity and Coherence:** Does the generated text make sense and flow logically?

## Benchmarks

Each of these tasks and datasets targets specific capabilities of language models, from understanding and generating natural language to performing specialized reasoning and knowledge application across various domains, including general knowledge, science, mathematics, and ethics.

### General Tasks and Datasets

- **WikiText**: A dataset for language modeling tasks, consisting of articles from Wikipedia. It's used for training models on a wide range of topics for better text generation and understanding.
- **PIQA (Physical Interaction Question Answering)**: Focuses on reasoning about physical interactions with objects, requiring models to predict the outcome of physical actions in a given scenario.
- **SQuAD (Stanford Question Answering Dataset)**: A benchmark dataset for machine reading comprehension, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text from the corresponding reading passage.

### SQuAD Shifts

These are variants of the original SQuAD (Stanford Question Answering Dataset) dataset adapted to different domains to test a model's generalization abilities:

- **SQuADShifts-Reddit, Amazon, NYT, New-Wiki**: Adaptations of the SQuAD dataset using content from Reddit, Amazon reviews, The New York Times articles, and newer Wikipedia articles, respectively.

### MRQA (Machine Reading for Question Answering) Datasets

A collection of datasets compiled for the MRQA shared task, aimed at evaluating models across a diverse set of reading comprehension tasks:

- **RACE, NewsQA, TriviaQA, SearchQA, HotpotQA, NaturalQuestions, BioASQ, DROP, RelationExtraction, TextbookQA, Duorc.ParaphraseRC**: Each focuses on different aspects of reading comprehension, such as multiple-choice questions (RACE), news articles (NewsQA), trivia knowledge (TriviaQA), web search results (SearchQA), multi-hop reasoning (HotpotQA), real user queries (NaturalQuestions), biomedical questions (BioASQ), discrete reasoning over paragraphs (DROP), etc.

### Other Tasks and Datasets

- **SQuAD2**: An extension of SQuAD that includes unanswerable questions, making the task more challenging.
- **RTE (Recognizing Textual Entailment)**: Involves determining whether a given text logically follows from another text.
- **SuperGLUE::RTE, CoLA, MNLI, MRPC, QNLI, QQP, SST, WNLI, BoolQ, etc.**: Part of the SuperGLUE benchmark, these tasks involve various aspects of language understanding, such as entailment, grammaticality, natural language inference, paraphrase detection, question answering, and sentiment analysis.
- **LAMBADA**: A dataset for evaluating the capabilities of models in predicting the final word of a text passage, designed to test the understanding of context.
- **PubMedQA**: A dataset for biomedical question answering (see the loading sketch after this list).
- **SciQ**: Focused on science exam questions.
- **QA4MRE**: A series of tasks from the Question Answering for Machine Reading Evaluation challenge, spanning several years.
- **ANLI (Adversarial NLI)**: A series of datasets for testing natural language understanding in an adversarial setting.
- **Ethics**: Tasks related to evaluating models on ethical reasoning, including deontology, justice, utilitarianism, and virtue ethics.
- **MathQA, Arithmetic, Anagrams, etc.**: Datasets focusing on mathematical reasoning, arithmetic operations, and word manipulation tasks.
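Most of these load directly with the `datasets` library; a minimal sketch for the biomedical one, assuming PubMedQA's expert-annotated `pqa_labeled` configuration:

```python
from datasets import load_dataset

# PubMedQA's expert-labeled subset: questions with yes/no/maybe answers
pubmedqa = load_dataset("pubmed_qa", "pqa_labeled", split="train")

example = pubmedqa[0]
print(example["question"])
print(example["final_decision"])  # "yes" / "no" / "maybe"
```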
wikipedia_datasets/db_pedia.ipynb
ADDED
@@ -0,0 +1,245 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: sparqlwrapper in /home/alvaro/Desktop/ME/SpanishMedicaLLM/.venv/lib/python3.10/site-packages (2.0.0)\n",
      "Requirement already satisfied: rdflib>=6.1.1 in /home/alvaro/Desktop/ME/SpanishMedicaLLM/.venv/lib/python3.10/site-packages (from sparqlwrapper) (7.0.0)\n",
      "Requirement already satisfied: isodate<0.7.0,>=0.6.0 in /home/alvaro/Desktop/ME/SpanishMedicaLLM/.venv/lib/python3.10/site-packages (from rdflib>=6.1.1->sparqlwrapper) (0.6.1)\n",
      "Requirement already satisfied: pyparsing<4,>=2.1.0 in /home/alvaro/Desktop/ME/SpanishMedicaLLM/.venv/lib/python3.10/site-packages (from rdflib>=6.1.1->sparqlwrapper) (3.1.2)\n",
      "Requirement already satisfied: six in /home/alvaro/Desktop/ME/SpanishMedicaLLM/.venv/lib/python3.10/site-packages (from isodate<0.7.0,>=0.6.0->rdflib>=6.1.1->sparqlwrapper) (1.16.0)\n"
     ]
    }
   ],
   "source": [
    "!pip install sparqlwrapper\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from SPARQLWrapper import SPARQLWrapper, JSON"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set the DBpedia SPARQL endpoint\n",
    "sparql = SPARQLWrapper(\"http://dbpedia.org/sparql\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the SPARQL query\n",
    "query = \"\"\"\n",
    "PREFIX dbo: <http://dbpedia.org/ontology/>\n",
    "\n",
    "SELECT ?disease ?diseaseLabel ?abstract\n",
    "WHERE {\n",
    "  ?disease rdf:type dbo:Disease ;\n",
    "           rdfs:label ?diseaseLabel ;\n",
    "           dbo:abstract ?abstract .\n",
    "  FILTER(LANG(?diseaseLabel) = 'es' && LANG(?abstract) = 'es')\n",
    "}\n",
    "\"\"\"\n",
    "\n",
    "# Set the query and response format\n",
    "sparql.setQuery(query)\n",
    "sparql.setReturnFormat(JSON)\n",
    "\n",
    "# Execute the query\n",
    "results = sparql.query().convert()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "10000"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "results_blind = results[\"results\"][\"bindings\"]\n",
    "len(results_blind)  # the public endpoint caps SELECT result sets at 10,000 rows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Disease: Acoplamiento de Cadiot-Chodkiewicz\n",
      "Abstract: El acoplamiento de Cadiot-Chodkiewicz es una reacción orgánica que consiste en el acoplamiento entre un alquino terminal y un haloalquino catalizada por una sal cuprosa Cu(I) como por ejemplo el bromuro de cobre (I) y una amina como base. El producto de la reacción es un dialaquino.\n",
      "\n",
      "Disease: Cesárea\n",
      "Abstract: Una cesárea es un tipo de intervención quirúrgica el cual se realiza una incisión quirúrgica en el abdomen (laparotomía) y el útero de la madre para extraer uno o más bebés. La OMS recomienda su uso cuando sea necesaria para salvar la vida de las madres y los neonatos por razones médicas, pero puede aumentar el riesgo de complicaciones por ser un procedimiento de cirugía mayor, y estima que el porcentaje de cesáreas en una región no debería superar el 15 %. Actualmente las tasas de cesárea están aumentando en la mayoría de países muy por encima de las tasas recomendadas por la OMS en todos los rangos de edad, sobre todo en los países más ricos. No se debe confundir con la episiotomía, que es una incisión en el periné para facilitar el parto. La cesárea se hace por encima de la pelvis. Contrariamente a lo sostenido por algunos autores, la palabra «cesárea» no tiene nada que ver con Julio César, ni este nació por medio de esa cirugía.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for result in results_blind[:2]:\n",
    "    disease_label = result[\"diseaseLabel\"][\"value\"]\n",
    "    abstract = result[\"abstract\"][\"value\"]\n",
    "    print(f\"Disease: {disease_label}\\nAbstract: {abstract}\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_disease_names():\n",
    "    # Define the SPARQL query to retrieve disease names\n",
    "    query = \"\"\"\n",
    "    PREFIX dbo: <http://dbpedia.org/ontology/>\n",
    "\n",
    "    SELECT DISTINCT ?diseaseLabel\n",
    "    WHERE {\n",
    "        ?disease rdf:type dbo:Disease ;\n",
    "                 rdfs:label ?diseaseLabel .\n",
    "        FILTER(LANG(?diseaseLabel) = 'es')\n",
    "    }\n",
    "    \"\"\"\n",
    "\n",
    "    # Set the query and response format\n",
    "    sparql.setQuery(query)\n",
    "    sparql.setReturnFormat(JSON)\n",
    "\n",
    "    # Execute the query\n",
    "    results = sparql.query().convert()\n",
    "\n",
    "    # Extract and return the disease names\n",
    "    disease_names = [result[\"diseaseLabel\"][\"value\"] for result in results[\"results\"][\"bindings\"]]\n",
    "    return disease_names\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example usage\n",
    "disease_names = get_disease_names()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "7223\n"
     ]
    }
   ],
   "source": [
    "print(len(disease_names))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "List of Disease Names:\n",
      "Acoplamiento de Cadiot-Chodkiewicz\n",
      "Cesárea\n",
      "Enfermedad por depósito de pirofosfato de calcio\n",
      "Campilobacteriosis\n",
      "Neoplasia\n",
      "Cáncer\n",
      "Candidiasis\n",
      "Moquillo\n",
      "Reacción de Cannizzaro\n",
      "Síndrome de Capgras\n"
     ]
    }
   ],
   "source": [
    "# Print the retrieved disease names\n",
    "print(\"List of Disease Names:\")\n",
    "for name in disease_names[:10]:\n",
    "    print(name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "file_path = 'enfermedades.json'\n",
    "\n",
    "dic_names = {\"enfermedades\": disease_names}\n",
    "with open(file_path, 'w', encoding=\"utf-8\") as file:\n",
    "    json.dump(dic_names, file, ensure_ascii=False, indent=2)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
wikipedia_datasets/enfermedades_long.csv
ADDED
The diff for this file is too large to render.
wikipedia_datasets/process_corpus.py
ADDED
@@ -0,0 +1,146 @@
from pathlib import Path
import os

import numpy as np
import pandas as pd
from huggingface_hub import login
from datasets import load_dataset, concatenate_datasets

# Load tokenizer directly
from transformers import AutoTokenizer

HF_TOKEN = ''  # set your Hugging Face token here
DATASET_TO_LOAD = 'PlanTL-GOB-ES/pharmaconer'
DATASET_TO_UPDATE = 'somosnlp/spanish_medica_llm'

CSV_FILE_NAME = "enfermedades_long.csv"

# Log in to Hugging Face
login(token = HF_TOKEN)

dataset_CODING = load_dataset(DATASET_TO_LOAD)
royalListOfCode = {}
issues_path = 'dataset'
tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish-medium")
DATASET_SOURCE_ID = '7'
# Read current path
path = Path(__file__).parent.absolute()

def readCsvFIle():
    """
    Build corpus rows from the disease CSV, write them to a local JSONL
    file, and push the combined dataset to the Hub.
    """
    cantemistDstDict = {
        'raw_text': '',
        'topic': '',
        'speciallity': '',
        'raw_text_type': 'question',
        'topic_type': '',
        'source': DATASET_SOURCE_ID,
        'country': '',
        'document_id': ''
    }

    totalOfTokens = 0
    corpusToLoad = []
    countCopySeveralDocument = 0
    counteOriginalDocument = 0
    idFile = 0
    path = Path(__file__).parent.absolute()
    open_text = type_tratamient = type_diagnostic = both_diagnostic_tratamient = 0
    df = pd.read_csv(f"{str(path) + os.sep + CSV_FILE_NAME}", encoding='utf8')
    df = df.replace({np.nan: None})
    print(df.columns)

    for i in range(len(df)):
        counteOriginalDocument += 1
        newCorpusRow = cantemistDstDict.copy()
        idFile += 1
        text = df.loc[i, 'Abstract'] or ''  # guard against empty abstracts

        newCorpusRow['speciallity'] = df.loc[i, 'Enfermedad'] if df.loc[i, 'Enfermedad'] is not None else ''

        listOfTokens = tokenizer.tokenize(text)
        currentSizeOfTokens = len(listOfTokens)
        totalOfTokens += currentSizeOfTokens

        newCorpusRow['raw_text'] = text
        newCorpusRow['document_id'] = str(idFile)

        if df.loc[i, 'Tratamiento'] is None and df.loc[i, 'Diagnostico'] is None:
            open_text += 1
            newCorpusRow['topic_type'] = 'open_text'
            newCorpusRow['raw_text_type'] = 'open_text'
        elif df.loc[i, 'Tratamiento'] is not None and df.loc[i, 'Diagnostico'] is None:
            type_tratamient += 1
            newCorpusRow['topic_type'] = 'medical_diagnostic'
            newCorpusRow['topic'] = df.loc[i, 'Tratamiento']
        elif df.loc[i, 'Tratamiento'] is None and df.loc[i, 'Diagnostico'] is not None:
            type_diagnostic += 1
            newCorpusRow['topic_type'] = 'medical_topic'
            newCorpusRow['topic'] = df.loc[i, 'Diagnostico']
        elif df.loc[i, 'Tratamiento'] is not None and df.loc[i, 'Diagnostico'] is not None:
            # Rows with both fields become two corpus rows
            both_diagnostic_tratamient += 1
            tratmentCorpusRow = newCorpusRow.copy()

            newCorpusRow['topic_type'] = 'medical_diagnostic'
            newCorpusRow['topic'] = df.loc[i, 'Diagnostico']

            tratmentCorpusRow['topic_type'] = 'medical_topic'
            tratmentCorpusRow['topic'] = df.loc[i, 'Tratamiento']
            corpusToLoad.append(tratmentCorpusRow)

        corpusToLoad.append(newCorpusRow)
        #print(df.loc[i, "Abstract"], df.loc[i, "Diagnostico"])
    print(" Rows with open text ", open_text)
    print(" Rows with only treatment ", type_tratamient)
    print(" Rows with only diagnostic ", type_diagnostic)
    print(" Rows with both treatment and diagnostic ", both_diagnostic_tratamient)

    dfToHub = pd.DataFrame.from_records(corpusToLoad)

    if os.path.exists(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl"):
        os.remove(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")

    dfToHub.to_json(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", orient="records", lines=True)
    print(
        f"Processed {CSV_FILE_NAME}! Dataset stored at {issues_path}/spanish_medical_llms.jsonl"
    )

    print(' Documents in dataset: ', counteOriginalDocument)
    print(' Copied documents in dataset: ', countCopySeveralDocument)
    print(' Total tokens in dataset: ', totalOfTokens)
    file = Path(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")
    size = file.stat().st_size
    print('File size in Kilobytes (kB)', size >> 10)
    print('File size in Megabytes (MB)', size >> 20)
    print('File size in Gigabytes (GB)', size >> 30)

    ## Update local dataset with cloud dataset
    local_spanish_dataset = load_dataset("json", data_files=f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", split="train")

    print('<== Local Dataset ==> ')
    print(local_spanish_dataset)

    try:
        spanish_dataset = load_dataset(DATASET_TO_UPDATE, split="train")
        spanish_dataset = concatenate_datasets([spanish_dataset, local_spanish_dataset])
        print('<--- Copy files --->')
    except Exception:
        spanish_dataset = local_spanish_dataset

    spanish_dataset.push_to_hub(DATASET_TO_UPDATE)

    print(spanish_dataset)

readCsvFIle()
wikipedia_datasets/using_dataset_hugginface.py
ADDED
@@ -0,0 +1,183 @@
# -*- coding: utf-8 -*-
"""using_dataset_hugginface.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1soGxkZu4antYbYG23GioJ6zoSt_GhSNT
"""

"""**Hugging Face login for pushing to the Hub**"""
###
#
# Used bibliography:
# https://huggingface.co/learn/nlp-course/chapter5/5
#
###

import os
from pathlib import Path

from huggingface_hub import login
from datasets import load_dataset, concatenate_datasets
import pandas as pd
import mysql.connector

# Load tokenizer directly
from transformers import AutoTokenizer

HF_TOKEN = ''  # set your Hugging Face token here
DATASET_TO_LOAD = 'PlanTL-GOB-ES/pharmaconer'
DATASET_TO_UPDATE = 'somosnlp/spanish_medica_llm'

# Log in to Hugging Face
login(token = HF_TOKEN)

dataset_CODING = load_dataset(DATASET_TO_LOAD)
royalListOfCode = {}
issues_path = 'dataset'
tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish-medium")
DATASET_SOURCE_ID = '3'
# Read current path
path = Path(__file__).parent.absolute()

'''
Bibliography:
https://www.w3schools.com/python/python_mysql_getstarted.asp
https://www.w3schools.com/python/python_mysql_select.as
'''
mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    database="icd10_dx_hackatonnlp"
)


def getCodeDescription(labels_of_type):
    """
    Look up the long description for each ICD-10 code
    in the local `icd10_dx_order_code` table.
    """
    icd10CodeDict = {}
    mycursor = mydb.cursor()

    for codeIcd10 in labels_of_type:
        if codeIcd10.find('.') == -1:
            codeIcd10 += '.0'

        # Parameterized query avoids SQL injection
        mycursor.execute(
            "SELECT dx_code, long_desc FROM `icd10_dx_order_code` WHERE dx_code = %s LIMIT 1;",
            (codeIcd10,)
        )

        for code, description in mycursor.fetchall():
            icd10CodeDict[code] = description

    return icd10CodeDict


# raw_text: text associated with the document, question, clinical case, or other information.
# topic: (may be healthcare_treatment, healthcare_diagnosis, a topic, an answer to a question, or empty, e.g. for open text)
# speciality: (medical specialty the raw_text relates to, e.g. cardiology, surgery, others)
# raw_text_type: (may be clinic_case, open_text, question)
# topic_type: (may be medical_topic, medical_diagnostic, answer, natural_medicine_topic, other, or empty)
# source: identifier of the source associated with the document, as listed in the README and dataset description.
# country: identifier of the source's country of origin (e.g. ch, es) using the ISO 3166-1 alpha-2 standard (two-letter country codes).
cantemistDstDict = {
    'raw_text': '',
    'topic': '',
    'speciallity': '',
    'raw_text_type': 'clinic_case',
    'topic_type': '',
    'source': DATASET_SOURCE_ID,
    'country': 'es',
    'document_id': ''
}

totalOfTokens = 0
corpusToLoad = []
countCopySeveralDocument = 0
counteOriginalDocument = 0

for iDataset in dataset_CODING:
    if iDataset == 'train':
        for item in dataset_CODING[iDataset]:
            idFile = str(item['id'])
            text = " ".join(item['tokens'])

            # Find topic or diagnostic classification for the text
            counteOriginalDocument += 1
            newCorpusRow = cantemistDstDict.copy()

            listOfTokens = tokenizer.tokenize(text)
            currentSizeOfTokens = len(listOfTokens)
            totalOfTokens += currentSizeOfTokens

            newCorpusRow['raw_text'] = text
            newCorpusRow['document_id'] = idFile
            corpusToLoad.append(newCorpusRow)

df = pd.DataFrame.from_records(corpusToLoad)

if os.path.exists(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl"):
    os.remove(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")

df.to_json(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", orient="records", lines=True)
print(
    f"Processed {DATASET_TO_LOAD}! Dataset stored at {issues_path}/spanish_medical_llms.jsonl"
)

print(' Documents in dataset: ', counteOriginalDocument)
print(' Copied documents in dataset: ', countCopySeveralDocument)
print(' Total tokens in dataset: ', totalOfTokens)
file = Path(f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl")
size = file.stat().st_size
print('File size in Kilobytes (kB)', size >> 10)
print('File size in Megabytes (MB)', size >> 20)
print('File size in Gigabytes (GB)', size >> 30)

## Update local dataset with cloud dataset
local_spanish_dataset = load_dataset("json", data_files=f"{str(path)}/{issues_path}/spanish_medical_llms.jsonl", split="train")

print(' Local Dataset ==> ')
print(local_spanish_dataset)

try:
    spanish_dataset = load_dataset(DATASET_TO_UPDATE, split="train")
    spanish_dataset = concatenate_datasets([spanish_dataset, local_spanish_dataset])
except Exception:
    spanish_dataset = local_spanish_dataset

spanish_dataset.push_to_hub(DATASET_TO_UPDATE)

print(spanish_dataset)

# Augmenting the dataset

# Important: if elements already exist on DATASET_TO_UPDATE we must update
# them in the list and check for repeated elements.

#spanish_dataset.push_to_hub(DATASET_TO_UPDATE)