---
library_name: transformers
tags:
- protein
license: bsd-3-clause
---

ProGen2-small finetuned on 7 protein families:

- PF00002 - GPCRs
- PF00042 - Globins
- PF00125 - Core histones
- PF00127 - Copper binding proteins
- PF00257 - Dehydrins
- PF00262 - Calreticulins
- PF03668 - P-loop ATPase

Bidirectional model trained on protein sequences in both the N -> C and C -> N directions, specified by the prefix tokens "1" and "2", respectively.
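For example, a prompt is built as family token + direction token + (partial) sequence, and a completion sampled from a "2" prompt comes out in C -> N order, so it can simply be reversed to recover the usual orientation. A minimal, model-free sketch (the family token and sequences are illustrative, and it assumes a "2" prompt expects its seed written C-terminus first):

```python
family = "<|pf00125|>"           # Pfam family control token
seq = "FDDDVSAVKSTGVSK"          # illustrative seed fragment, written N -> C

prompt_fwd = family + "1" + seq          # continue towards the C-terminus
prompt_rev = family + "2" + seq[::-1]    # continue the reversed seed, C-terminus first

# a completion sampled from the "2" prompt is in C -> N order;
# flip it to recover the conventional N -> C orientation
sampled_rev = "KSVGTSKVASVDDDF"          # stand-in for a sampled C -> N sequence
print(sampled_rev[::-1])
```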
See my [GitHub repo](https://github.com/hugohrban/ProGen2-finetuning/tree/main) for more information.

Example usage:
```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch
import torch.nn.functional as F

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-small-mix7-bidi", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-small-mix7-bidi")
tokenizer.no_padding()

# prepare input
prompt = "<|pf00125|>2FDDDVSAVKSTGVSK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass
logits = model(input_ids).logits

# print output probabilities
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
    print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")
```
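To sample whole sequences instead of just inspecting next-token probabilities, generation along these lines should work. This is a minimal sketch: it assumes the remote ProGen2 code supports the standard Hugging Face `generate()` API (as the original ProGen2 release does), and the sampling parameters are only illustrative.

```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch

model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-small-mix7-bidi", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-small-mix7-bidi")
tokenizer.no_padding()

# family token + direction token + seed sequence, as in the example above
prompt = "<|pf00125|>2FDDDVSAVKSTGVSK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).unsqueeze(0).to(model.device)  # add batch dimension

model.eval()
with torch.no_grad():
    output = model.generate(
        input_ids,
        do_sample=True,     # sample instead of greedy decoding
        temperature=0.8,    # illustrative sampling parameters
        top_p=0.95,
        max_length=256,
    )

# map ids back to tokens; the output still contains the family and direction control tokens
tokens = [tokenizer.id_to_token(i) for i in output[0].tolist()]
print("".join(tokens))
```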