---
metrics:
- accuracy
---

# ⚠️ This is an old version of [ReactionT5-forward-v2](https://huggingface.co/sagawa/ReactionT5-forward-v2); its prediction accuracy is lower. ⚠️

# Model Card for ReactionT5-forward-v1

This is a ReactionT5 model pre-trained to predict the products of chemical reactions.

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/sagawatatsuya/ReactionT5
- **Paper:** https://arxiv.org/abs/2311.06708

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use this model for forward reaction prediction, or fine-tune it on your own dataset.
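
For forward prediction, the model consumes a single text prompt built from reactant and reagent SMILES. A minimal illustrative helper, assuming a `REACTANT:`/`REAGENT:` tagging scheme like the one used in the ReactionT5 repository (the tag names and layout here are an assumption; verify the exact format against the preprocessing code in the repository):

```python
def format_reaction(reactants, reagents=()):
    """Join reactant/reagent SMILES into a single model input string.

    The "REACTANT:"/"REAGENT:" tag scheme is an assumption; check the
    preprocessing code in the ReactionT5 repository for the exact format.
    Multiple species within a field are joined with '.', the SMILES
    disconnection separator.
    """
    return "REACTANT:" + ".".join(reactants) + "REAGENT:" + ".".join(reagents)

print(format_reaction(["CCO", "CC(=O)O"], ["[H+]"]))
# REACTANT:CCO.CC(=O)OREAGENT:[H+]
```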

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ...

output # 'O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].[Cr+3].[Cr+3]'
```
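
The decoded prediction above is a single SMILES string in which disconnected species are separated by `.`. A small standard-library sketch for breaking such an output into its component species (using the example prediction shown above):

```python
from collections import Counter

# Example product string as predicted in the snippet above:
prediction = "O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].O=S(=O)([O-])[O-].[Cr+3].[Cr+3]"

# In SMILES, '.' separates disconnected species within one string,
# so a plain split recovers the individual components.
species = prediction.split(".")
counts = Counter(species)

print(len(species))        # 5
print(counts["[Cr+3]"])    # 2
```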

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We used the Open Reaction Database (ORD) dataset for model training.
The command used for training is shown below; for more information, please refer to the paper and the GitHub repository.

```sh
python train.py \
    ...
```

Performance comparison of Compound T5, ReactionT5, and other models on product prediction. The values enclosed in ‘<>’ in the table are the scores of the model after fine-tuning on 200 reactions from the USPTO dataset. The score enclosed in ‘()’ is the one reported in the original paper.
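
A cell in such a table can therefore carry up to three numbers. A small sketch (the cell value below is a made-up example, not a real score) for separating them according to the notation just described:

```python
import re

def parse_cell(cell):
    """Split an evaluation cell such as '90.0 <92.3> (91.5)' into parts.

    Returns (base, finetuned, reported); parts that are absent come back
    as None. Per the caption above, <> marks the score after fine-tuning
    on 200 USPTO reactions and () the score from the original paper.
    """
    def grab(match):
        return float(match.group(1)) if match else None

    return (
        grab(re.match(r"\s*([\d.]+)", cell)),
        grab(re.search(r"<([\d.]+)>", cell)),
        grab(re.search(r"\(([\d.]+)\)", cell)),
    )

print(parse_cell("90.0 <92.3> (91.5)"))  # (90.0, 92.3, 91.5)
```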

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
arXiv link: https://arxiv.org/abs/2311.06708
```
@misc{sagawa2023reactiont5,
      title={ReactionT5: a large-scale pre-trained model towards application of limited reaction data},
      author={Tatsuya Sagawa and Ryosuke Kojima},
      year={2023},
      eprint={2311.06708},
      archivePrefix={arXiv},
      primaryClass={physics.chem-ph}
}
```