---
tags:
- molecular language model
- SELFIES
- molecule generation
---
|
# MolGen |
|
MolGen was introduced in the paper ["Molecular Language Model as Multi-task Generator"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen). It is a pre-trained molecular generative model built on SELFIES, a 100% robust molecular language representation.
|
|
|
## Model description |
|
MolGen is the first pre-trained molecular language model that produces only chemically valid molecules.
|
With a training corpus of over 100 million molecules in SELFIES representation, MolGen learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms. |
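The validity guarantee comes from SELFIES itself: any well-formed SELFIES token string decodes to a valid molecule. As a minimal illustration, using the separate [`selfies`](https://github.com/aspuru-guzik-group/selfies) Python package (an extra dependency, not part of MolGen itself):

```python
import selfies as sf

# Encode a SMILES string (benzene) into its SELFIES form
benzene = sf.encoder("c1ccccc1")
print(benzene)                    # '[C][=C][C][=C][C][=C][Ring1][=Branch1]'

# Decoding always yields a chemically valid SMILES string
print(sf.decoder(benzene))        # 'C1=CC=CC=C1'

# Even a truncated ("corrupted") fragment still decodes to a valid molecule
print(sf.decoder("[C][=C][C]"))   # 'C=CC'
```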
|
Specifically, MolGen employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder. |
|
Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen can generate molecules with desired properties, making it a valuable tool for molecular optimization. |
|
|
|
## Intended uses |
|
You can use the raw model for molecular generation or fine-tune it on a downstream task. See the [repository](https://github.com/zjunlp/MolGen) for fine-tuning details on a task that interests you. A minimal sketch of one fine-tuning step follows below.
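Since MolGen is loaded as a standard `BartForConditionalGeneration`, fine-tuning can follow the usual `transformers` sequence-to-sequence pattern. The sketch below shows a single training step with a toy source/target SELFIES pair; it is purely illustrative and not the repository's exact recipe:

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen")
model = BartForConditionalGeneration.from_pretrained("zjunlp/MolGen")

# Toy pair for illustration: map a source SELFIES to a target SELFIES
src = tokenizer("[C][=C][C][=C][C][=C][Ring1][=Branch1]", return_tensors="pt")
tgt = tokenizer("[C][=C][C][=C][C][=C][Ring1][=Branch1]", return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
outputs = model(input_ids=src["input_ids"],
                attention_mask=src["attention_mask"],
                labels=tgt["input_ids"])  # cross-entropy loss over decoder tokens
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```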
|
|
|
### How to use |
|
Molecule generation example: |
|
```python
>>> from transformers import AutoTokenizer, BartForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen")
>>> model = BartForConditionalGeneration.from_pretrained("zjunlp/MolGen")

>>> sf_input = tokenizer("[C][=C][C][=C][C][=C][Ring1][=Branch1]", return_tensors="pt")
>>> # beam search
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
...                            attention_mask=sf_input["attention_mask"],
...                            max_length=20,
...                            min_length=5,
...                            num_return_sequences=5,
...                            num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ", "") for g in molecules]
>>> sf_output
['[C][=C][C][=C][C][=C][Ring1][=Branch1]', '[C][=C][C][=C][C][=C][C][=C][Ring1][=Branch1]', '[C][=C][C][=C][C][=C][Ring1][=Branch1][C@H1][C][=C][C][=C][C][=C][Ring1][=Branch1]', '[C][=C][C][=C][C][=C][Ring1][=Branch1][C][=C][C][=C][C][=C][Ring1][=Branch1]', '[C][=C][C][=C][C][=C][Ring1][=Branch1][C@H1][=C][C][=C][Ring1][=Branch1]']
```
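
The generated sequences are SELFIES strings. If downstream tools expect SMILES, they can be converted with the `selfies` package (an extra dependency assumed here, not shipped with MolGen):

```python
import selfies as sf

# Convert each generated SELFIES string back to SMILES
smiles_out = [sf.decoder(s) for s in sf_output]
```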
|
|
|
|
|
### BibTeX entry and citation info |
|
```bibtex
@article{fang2023molecular,
  title={Molecular Language Model as Multi-task Generator},
  author={Fang, Yin and Zhang, Ningyu and Chen, Zhuo and Fan, Xiaohui and Chen, Huajun},
  journal={arXiv preprint arXiv:2301.11259},
  year={2023}
}
```