
---
license: cc-by-4.0
language:
- es
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- extractive question answering
- squad
- multilinguality
- LLMs
- LLM
pretty_name: mdeberta-expl-extraction-multi
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---



mdeberta-v3-base finetuned for Explanatory Argument Extraction

We fine-tuned mdeberta-v3-base on a novel extractive task: identifying, in commented medical exams, the explanation of the correct answer written by medical doctors.

The training data is based on Antidote CasiMedicos for the EN, ES, FR, and IT languages.

The data source consists of Resident Medical Intern (Médico Interno Residente, MIR) exams, originally created by CasiMedicos, a Spanish community of medical professionals that collaboratively, voluntarily, and free of charge publishes written explanations about the possible answers included in the MIR exams. The aim is to produce a resource that helps future medical doctors study for the MIR examinations. The commented MIR exams, including the explanations, are published on the CasiMedicos Project MIR 2.0 website.

We have extracted, cleaned, structured, and annotated the available data so that each document in casimedicos-squad includes the clinical case, the correct answer, the multiple-choice questions, and the commented exam written by native Spanish medical doctors. The comments have been annotated with the span in the text that corresponds to the explanation of the correct answer (see the example below).

casimedicos-squad splits:

  • train: 404
  • validation: 56
  • test: 119

Example

The example above shows a document in CasiMedicos containing the textual content, including Clinical Case (C), Question (Q), Possible Answers (P), and Explanation (E). Furthermore, for casimedicos-squad we annotated the span in the explanation (E) that corresponds to the correct answer (A).

The process of manually annotating the corpus consisted of specifying where the explanations of the correct answers begin and end. In order to obtain grammatically complete correct answer explanations, annotating full sentences or subordinate clauses was preferred over shorter spans.

Data Explanation

The dataset is structured as a list of documents ("paragraphs"), each of which includes:

  • context: the explanation (E) in the document
  • qas: a list of question–answer pairs. Each element contains:
    • answers: the answer, which corresponds to the explanation of the correct answer (A)
    • question: the clinical case (C) and question (Q)
    • id: a unique identifier for the document
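The structure above follows the SQuAD layout. The sketch below shows a minimal, hypothetical document with that shape; the field names match the description above, while all text content and the identifier are invented for illustration. As in SQuAD, `answer_start` is the character offset of the annotated span inside the context.

```python
# A minimal, hypothetical document in the casimedicos-squad (SQuAD-style) layout.
# All text content and the id are invented examples.
context = "The correct answer is 2 because beta-blockers reduce mortality in heart failure."
answer_text = "beta-blockers reduce mortality in heart failure"

doc = {
    "paragraphs": [
        {
            "context": context,  # the explanation (E)
            "qas": [
                {
                    "id": "casimedicos-0001",  # unique document identifier
                    "question": "Clinical case (C) ... Question (Q) ...",
                    "answers": [
                        {
                            "text": answer_text,  # span explaining the correct answer (A)
                            "answer_start": context.index(answer_text),  # char offset into context
                        }
                    ],
                }
            ],
        }
    ]
}

# The span recovered via answer_start must match the annotated answer text.
ans = doc["paragraphs"][0]["qas"][0]["answers"][0]
assert context[ans["answer_start"]:ans["answer_start"] + len(ans["text"])] == ans["text"]
```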

Citation

If you use this data please cite the following paper:

@misc{goenaga2023explanatory,
      title={Explanatory Argument Extraction of Correct Answers in Resident Medical Exams}, 
      author={Iakes Goenaga and Aitziber Atutxa and Koldo Gojenola and Maite Oronoz and Rodrigo Agerri},
      year={2023},
      eprint={2312.00567},
      archivePrefix={arXiv}
}

Contact: Iakes Goenaga and Rodrigo Agerri, HiTZ Center - Ixa, University of the Basque Country (UPV/EHU)

Model Description

Uses

Direct Use

[More Information Needed]

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
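As a starting point, the model can be loaded with the Hugging Face transformers question-answering pipeline, passing the concatenated clinical case (C) and question (Q) as the question and the explanation (E) as the context, as described in the dataset section. Note that the model id below is an assumption based on the card's pretty_name; check the HiTZ organization on the Hub for the actual repository name.

```python
def build_question(clinical_case: str, question: str) -> str:
    """Concatenate the clinical case (C) and question (Q), as in casimedicos-squad."""
    return f"{clinical_case} {question}"

if __name__ == "__main__":
    # Requires `pip install transformers`. The model id is an assumption;
    # verify it on the Hugging Face Hub before use.
    from transformers import pipeline

    qa = pipeline("question-answering", model="HiTZ/mdeberta-expl-extraction-multi")
    prediction = qa(
        question=build_question(
            "A 65-year-old patient presents with ...",
            "Which of the following is the correct management?",
        ),
        context="Explanation (E) text from the commented exam goes here ...",
    )
    # prediction contains the extracted explanation span and a confidence score
    print(prediction["answer"], prediction["score"])
```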

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]

Model size: 559M parameters (F32 tensors, Safetensors format)