---
license: apache-2.0
tags:
- pretrained
- mistral
- chemistry
---
# Model Card for Mistral-Chem-v1-1.6B (Mistral for chemistry)
The Mistral-Chem-v1-1.6B Large Language Model (LLM) is a pretrained generative chemical molecule model with 1.6B parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for molecules by reducing the number of layers and the hidden size. The model was pretrained on 10M molecule SMILES strings from the ZINC 15 database.
## Model Architecture
Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (see the configuration check after this list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
- Mixture of Experts
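As a quick sanity check, the reduced layer count and hidden size can be read from the model configuration. This is only a minimal sketch assuming the standard Hugging Face `AutoConfig` API; the exact field names depend on the configuration class shipped with the checkpoint.

```python
from transformers import AutoConfig

# Load the configuration (no weights) and print the architecture
# hyperparameters: hidden size, number of layers, attention heads, experts, ...
config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Chem-v1-1.6B", trust_remote_code=True)
print(config)
```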
## Load the model from huggingface:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Chem-v1-1.6B", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Chem-v1-1.6B", trust_remote_code=True)
```
## Calculate the embedding of a molecule (SMILES)
```python
chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 256]

# Embedding with max pooling over the sequence dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expect torch.Size([256])
```
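As an illustration not taken from the original card, the same pooled embeddings can be reused to compare molecules, for example with cosine similarity. The second SMILES string below is only a made-up example.

```python
import torch.nn.functional as F

def embed(smiles: str) -> torch.Tensor:
    # Tokenize the SMILES string, run the model, and max-pool over tokens.
    ids = tokenizer(smiles, return_tensors="pt")["input_ids"]
    hidden = model(ids)[0]  # [1, sequence_length, 256]
    return torch.max(hidden[0], dim=0)[0]

# Cosine similarity between two pooled molecule embeddings (second SMILES is hypothetical).
similarity = F.cosine_similarity(embed("CCCCC[C@H](Br)CC"), embed("CCCCCC(Br)CC"), dim=0)
print(similarity.item())
```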
## Troubleshooting
Ensure you are using a stable version of Transformers, 4.34.0 or newer.
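If in doubt, the installed version can be checked directly (a trivial check, nothing model-specific):

```python
import transformers
print(transformers.__version__)  # should print 4.34.0 or newer
```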
## Notice
Mistral-Chem-v1-1.6B is a pretrained base model for chemistry.
## Contact
Raphaël Mourad. [email protected]