---
license: apache-2.0
tags:
- pretrained
- mistral
- chemistry
---
# Model Card for Mistral-Chem-v1-15M (Mistral for chemistry)
The Mistral-Chem-v1-15M Large Language Model (LLM) is a pretrained generative model for chemical molecules, with 1.9M parameters per expert x 8 experts = 15.2M parameters in total.
It is derived from the Mixtral-8x7B-v0.1 model, simplified for molecules by reducing the number of layers and the hidden size.
The model was pretrained on 10M molecule SMILES strings from the ZINC 15 database.
## Model Architecture
Like Mixtral-8x7B-v0.1, it is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
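You can verify these choices by inspecting the model configuration. This is a minimal sketch, assuming the checkpoint exposes the standard `transformers` Mixtral-style config attributes (`num_key_value_heads`, `sliding_window`, `num_local_experts`, `num_experts_per_tok`):
```
from transformers import AutoConfig

config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Chem-v1-15M", trust_remote_code=True)
# Grouped-Query Attention: fewer key/value heads than query heads
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-Window Attention: attention window size
print(config.sliding_window)
# Mixture of Experts: number of experts, and experts activated per token
print(config.num_local_experts, config.num_experts_per_tok)
```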
## Load the model from Hugging Face
```
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the pretrained model
tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Chem-v1-15M", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Chem-v1-15M", trust_remote_code=True)
```
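Since Mistral-Chem-v1-15M is a generative model, you can also sample new SMILES strings from it. A minimal sketch, assuming the checkpoint can be loaded with a causal language-modeling head via `AutoModelForCausalLM`; the short SMILES prefix used as the prompt is a hypothetical example:
```
from transformers import AutoModelForCausalLM

# Load the model with a language-modeling head for generation (assumes LM head weights are available)
lm_model = AutoModelForCausalLM.from_pretrained("RaphaelMourad/Mistral-Chem-v1-15M", trust_remote_code=True)

# Start generation from a short SMILES prefix (hypothetical example prompt)
prompt_ids = tokenizer("CC", return_tensors="pt")["input_ids"]
generated = lm_model.generate(prompt_ids, max_new_tokens=50, do_sample=True, top_k=50)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```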
## Calculate the embedding of a molecule
```
chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0]  # shape: [1, sequence_length, 256]

# Embedding with max pooling over the sequence dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([256])
```
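Max pooling is only one option; mean pooling over token positions is a common alternative. A minimal sketch reusing the `hidden_states` computed above:
```
# Embedding with mean pooling over the sequence dimension
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # expected: torch.Size([256])
```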
## Troubleshooting
Ensure you are using a stable release of Transformers, version 4.34.0 or newer.
## Notice
Mistral-Chem-v1-15M is a pretrained base model for chemistry.
## Contact
Raphaël Mourad. [email protected]