This model is a fine-tuned version of the T5-small model specifically tailored for question answering tasks in the biomedical domain. It has been trained to understand and generate responses based on biomedical literature, making it particularly useful for researchers and practitioners in the field.

## Getting started with the model

First install the dependencies (`sentencepiece` is needed by the T5 tokenizer, and `torch` by the inference code below):

```bash
pip install transformers sentencepiece torch
```

Then load the model and ask it a question:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

# Load model and tokenizer
tokenizer = T5Tokenizer.from_pretrained("starman76/t5_500")
model = T5ForConditionalGeneration.from_pretrained("starman76/t5_500")

# Prepare the question and context
context = "Aspirin is a medication used to reduce pain, fever, or inflammation."
question = "What is Aspirin used for?"
inputs = tokenizer(
    question,
    context,
    add_special_tokens=True,
    return_tensors="pt",
    max_length=512,
    truncation=True,
)

# Generate an answer
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=50,
    )
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
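
To answer several questions in one forward pass, the single-example flow above can be batched by passing lists to the tokenizer with `padding=True`. The `answer_batch` helper below is not part of this repository, just a minimal sketch; it relies on `generate` disabling gradient tracking internally (true in recent `transformers` versions), so the explicit `torch.no_grad()` block is omitted:

```python
def answer_batch(model, tokenizer, questions, contexts, max_answer_length=50):
    """Answer a batch of questions, each paired with its own context.

    questions and contexts are parallel lists of strings.
    Returns a list of decoded answer strings, one per question.
    """
    # Tokenize all (question, context) pairs at once; padding aligns lengths
    inputs = tokenizer(
        questions,
        contexts,
        add_special_tokens=True,
        return_tensors="pt",
        padding=True,
        max_length=512,
        truncation=True,
    )
    # generate() runs without gradient tracking internally
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=max_answer_length,
    )
    # Decode every generated sequence back to text
    return [tokenizer.decode(seq, skip_special_tokens=True) for seq in outputs]
```

For example, `answer_batch(model, tokenizer, [q1, q2], [c1, c2])` returns the two answers in the same order as the questions.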