---
language:
- multilingual
- en
- zh
license: apache-2.0
pipeline_tag: fill-mask
tags:
- medical
---

# KBioXLM

KBioXLM is obtained by continuing to train XLM-R on an aligned corpus constructed with a knowledge-anchored method, combined with a multi-task training strategy. To our knowledge, it is the first multilingual biomedical pre-trained language model with cross-lingual understanding capabilities in the medical domain. It was introduced in the paper [KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model](http://arxiv.org/abs/2311.11564) and released in [this repository](https://github.com/ngwlh-gl/KBioXLM/tree/main).

## Model description
The KBioXLM model can be fine-tuned on downstream tasks. The downstream tasks here are biomedical cross-lingual understanding tasks, such as biomedical named entity recognition, biomedical relation extraction, and biomedical text classification; a hedged fine-tuning sketch follows.
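
For example, fine-tuning for a token-classification task such as biomedical NER could be wired up as in the minimal sketch below. This is not the paper's training recipe: the 3-label scheme, the dummy labels, and the `AutoTokenizer` choice are placeholders, and it assumes the checkpoint's config is compatible with `RobertaForTokenClassification`.

```python
import torch
from transformers import AutoTokenizer, RobertaForTokenClassification

# Hypothetical 3-label BIO scheme (e.g. O, B-Disease, I-Disease).
model = RobertaForTokenClassification.from_pretrained('ngwlh/KBioXLM', num_labels=3)
# Assumption: the Hub repo ships tokenizer files loadable via AutoTokenizer.
tokenizer = AutoTokenizer.from_pretrained('ngwlh/KBioXLM')

inputs = tokenizer("Aspirin reduces the risk of myocardial infarction.",
                   return_tensors="pt")
# Dummy per-token labels, only to demonstrate the loss computation.
labels = torch.zeros(inputs["input_ids"].shape, dtype=torch.long)
outputs = model(**inputs, labels=labels)
print(outputs.loss)  # fine-tune by backpropagating this loss on real labels
```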

## Usage

You can load our model parameters as follows:

```python
from transformers import RobertaModel

# Load the pre-trained KBioXLM encoder weights from the Hugging Face Hub.
model = RobertaModel.from_pretrained('ngwlh/KBioXLM')
```
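
Once loaded, the encoder can produce contextual representations for downstream use. A minimal sketch, again assuming the checkpoint ships a tokenizer loadable via `AutoTokenizer` (our assumption, not part of the official instructions):

```python
from transformers import AutoTokenizer, RobertaModel

# Assumption: tokenizer files are available alongside the model weights.
tokenizer = AutoTokenizer.from_pretrained('ngwlh/KBioXLM')
model = RobertaModel.from_pretrained('ngwlh/KBioXLM')

# Encode an English biomedical sentence; the model also covers Chinese.
inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```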

### BibTeX entry and citation info

Coming soon.