KBioXLM

The aligned corpus constructed with the knowledge-anchored method is combined with a multi-task training strategy to continue training XLM-R, yielding KBioXLM. To our knowledge, it is the first multilingual biomedical pre-trained language model with cross-lingual understanding capabilities in the medical domain. It was introduced in the paper KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model and released in this repository.

Model description

The KBioXLM model can be fine-tuned on downstream tasks. The downstream tasks here are biomedical cross-lingual understanding tasks, such as biomedical named entity recognition, biomedical relation extraction, and biomedical text classification. A hedged fine-tuning sketch follows below.
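
As a minimal sketch of such fine-tuning, the snippet below attaches a token-classification head to the released checkpoint for a biomedical NER task. The label count and the example sentence are placeholders for illustration, and loading the tokenizer via AutoTokenizer is an assumption not stated in this card.

from transformers import AutoTokenizer, RobertaForTokenClassification

# Hypothetical setup: 5 BIO tags for a biomedical NER task (label count not specified in this card)
tokenizer = AutoTokenizer.from_pretrained('ngwlh/KBioXLM')
model = RobertaForTokenClassification.from_pretrained('ngwlh/KBioXLM', num_labels=5)

# Tokenize one example sentence; real fine-tuning would iterate over a labeled dataset
inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels)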

Usage

You can load our model parameters with the code below:

from transformers import RobertaModel

model = RobertaModel.from_pretrained('ngwlh/KBioXLM')
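
A minimal forward pass through the loaded encoder, assuming the repository also ships a tokenizer loadable through AutoTokenizer (an assumption, not stated above):

from transformers import AutoTokenizer, RobertaModel

tokenizer = AutoTokenizer.from_pretrained('ngwlh/KBioXLM')
model = RobertaModel.from_pretrained('ngwlh/KBioXLM')

# Encode a short biomedical sentence and inspect the contextual embeddings
inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)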

BibTeX entry and citation info

Coming soon.
