---
license: apache-2.0
datasets:
- allenai/s2orc
tags:
- medical
---

This repo contains the latest version of PMC_LLaMA_7B, a LLaMA-7B model fine-tuned on the PubMed Central (PMC) papers in the S2ORC dataset.

Note that, unlike `chaoyi-wu/PMC_LLAMA_7B`, this model is trained for 10 epochs.

The model was trained with the following hyperparameters:

* Epochs: **10** 
* Batch size: 128 
* Cutoff length: 512
* Learning rate: 2e-5

In each epoch, we sample 512 tokens from each paper for training.
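
The released training pipeline is not reproduced here, but as a rough sketch of the per-epoch sampling described above, it could look like the following. The `papers` list, the `sample_epoch` helper, and the chunking details are assumptions for illustration only, not the actual training code.

```
import random
import transformers

tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
cutoff_len = 512  # cutoff length from the hyperparameters above

def sample_epoch(papers):
    """papers: a list of raw paper texts (assumed structure, not the real S2ORC loader)."""
    examples = []
    for text in papers:
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        if len(ids) <= cutoff_len:
            # Short papers are used in full
            examples.append(ids)
        else:
            # Longer papers contribute one random 512-token window per epoch
            start = random.randint(0, len(ids) - cutoff_len)
            examples.append(ids[start:start + cutoff_len])
    return examples
```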

The model can be loaded as follows:

```
import torch
import transformers

# Load the tokenizer and model weights from the Hub
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')

sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False
)

# Generate a continuation of the prompt with top-k sampling
with torch.no_grad():
    generated = model.generate(batch["input_ids"], max_length=200, do_sample=True, top_k=50)
    print('model predict: ', tokenizer.decode(generated[0]))
```
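
Generation above uses top-k sampling (`do_sample=True, top_k=50`); greedy decoding or beam search can be used instead by changing the arguments passed to `generate`.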