---
tags:
- molecular language model
- SELFIES
- molecule optimization
inference: false
---

# MolGen-large-opt
MolGen-large-opt was introduced in the paper ["Domain-Agnostic Molecular Generation with Self-feedback"](https://arxiv.org/pdf/2301.11259.pdf) and first released in [this repository](https://github.com/zjunlp/MolGen). 

## Model description
MolGen-large-opt is the fine-tuned version of [MolGen-large](https://huggingface.co/zjunlp/MolGen-large). MolGen-large is the first pre-trained model that only produces chemically valid molecules. 
With a training corpus of over 100 million molecules in SELFIES representation, MolGen-large learns the intrinsic structural patterns of molecules by mapping corrupted SELFIES to their original forms.
Specifically, MolGen-large employs a bidirectional Transformer as its encoder and an autoregressive Transformer as its decoder.
Through its carefully designed multi-task molecular prefix tuning (MPT), MolGen-large-opt can generate molecules with desired properties, making it a valuable tool for molecular optimization.

![image.png](./molgen.png)
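
The model consumes and produces molecules as SELFIES strings. If your data is in SMILES, a minimal conversion sketch using the third-party [`selfies`](https://github.com/aspuru-guzik-group/selfies) package (an assumption here, not something shipped with this model card) could look like:

```python
# Minimal sketch, assuming the third-party `selfies` package is installed (pip install selfies).
import selfies as sf

smiles = "CC(=O)Oc1ccccc1C(=O)O"            # hypothetical example molecule (aspirin)
selfies_str = sf.encoder(smiles)            # SMILES -> SELFIES, suitable as model input
recovered_smiles = sf.decoder(selfies_str)  # SELFIES -> SMILES round trip
print(selfies_str)
print(recovered_smiles)
```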

## Intended uses
You can use the fine-tuned model for molecule optimization in downstream tasks. See the [repository](https://github.com/zjunlp/MolGen) for fine-tuning details on the task that interests you.

### How to use
Molecule optimization example:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/MolGen-large-opt")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("zjunlp/MolGen-large-opt")

>>> # the input is a SELFIES string describing the molecule to optimize
>>> sf_input = tokenizer("[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]", return_tensors="pt")
>>> # beam search
>>> molecules = model.generate(input_ids=sf_input["input_ids"],
...                            attention_mask=sf_input["attention_mask"],
...                            max_length=35,
...                            min_length=5,
...                            num_return_sequences=5,
...                            num_beams=5)
>>> sf_output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).replace(" ", "") for g in molecules]
>>> sf_output
['[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]',
 '[N][#C][C][C][C@@H1][C][C][C][C][C][C][C][C][C][C][C][C][C][C][Ring1][N][=O]']
```
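
The generated candidates are SELFIES strings; to feed them into downstream tools that expect SMILES (e.g. RDKit), you can convert them back. A minimal sketch, again assuming the third-party `selfies` package and the `sf_output` list from the example above:

```python
# Minimal sketch, assuming `selfies` is installed and `sf_output` is the list produced above.
import selfies as sf

smiles_candidates = [sf.decoder(s) for s in sf_output]  # SELFIES -> SMILES
for selfies_str, smiles in zip(sf_output, smiles_candidates):
    print(selfies_str, "->", smiles)
```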


### BibTeX entry and citation info
```bibtex
@inproceedings{fang2023domain,
  author       = {Yin Fang and
                  Ningyu Zhang and
                  Zhuo Chen and
                  Xiaohui Fan and
                  Huajun Chen},
  title        = {Domain-Agnostic Molecular Generation with Chemical Feedback},
  booktitle    = {{ICLR}},
  publisher    = {OpenReview.net},
  year         = {2024},
  url          = {https://openreview.net/pdf?id=9rPyHyjfwP}
}
```