---
license: cc-by-4.0
tags:
- prokbert
- bioinformatics
- genomics
- sequence embedding
- genomic language models
- nucleotide
- dna-sequence
- promoter
- microbiome
---

## ProkBERT-mini-c-promoter Model

This finetuned model is specifically designed for promoter identification and is based on the [ProkBERT-mini-c model](https://huggingface.co/neuralbioinfo/prokbert-mini-c).

For more details, refer to the [promoter dataset description](https://huggingface.co/datasets/neuralbioinfo/bacterial_promoters) used for training and evaluating this model.

### Example Usage

For practical examples of how to use this model, see the following Jupyter notebooks:

- [Training Notebook](https://colab.research.google.com/github/nbrg-ppcu/prokbert/blob/main/examples/Finetuning.ipynb): a guide to fine-tuning the ProkBERT-mini model for promoter identification tasks.
- [Evaluation Notebook](https://colab.research.google.com/github/nbrg-ppcu/prokbert/blob/main/examples/Inference.ipynb): demonstrates how to evaluate the finetuned ProkBERT-mini-promoter model on test datasets.

### Model Application

The model was trained for binary classification to distinguish between promoter and non-promoter sequences. The length and composition of the promoter sequences were standardized to ensure compatibility with alternative methods and to facilitate direct comparison of model performance.

## Simple Usage Example

The following example demonstrates how to use the ProkBERT-mini-c-promoter model to process a DNA sequence:

```python
from prokbert.prokbert_tokenizer import ProkBERTTokenizer
from prokbert.models import BertForBinaryClassificationWithPooling

finetuned_model = "neuralbioinfo/prokbert-mini-c-promoter"
kmer = 1
shift = 1

tok_params = {'kmer': kmer, 'shift': shift}
tokenizer = ProkBERTTokenizer(tokenization_params=tok_params)
model = BertForBinaryClassificationWithPooling.from_pretrained(finetuned_model)
sequence = 'TAGCGCATAATGATTTCCTTATAAGCGATCGCTCTGAAAGCGTTCTACGATAATAATGATATCCTTTCAATAATAGCGTAT'
inputs = tokenizer(sequence, return_tensors="pt")
# Ensure that inputs have a batch dimension
inputs = {key: value.unsqueeze(0) for key, value in inputs.items()}
# Generate outputs from the model
outputs = model(**inputs)
print(outputs)
```
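
The printed object carries the classification scores. As a minimal, hypothetical post-processing step (assuming the output exposes a `logits` tensor of shape `(batch, 2)` with class index 1 corresponding to "promoter" — inspect `print(outputs)` to confirm), the scores can be turned into a probability:

```python
import torch

# Hypothetical post-processing: assumes (batch, 2) logits with
# index 1 = promoter; verify against the printed output above.
logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
probs = torch.softmax(logits, dim=-1)
print(f"P(promoter) = {probs[0, 1].item():.3f}")
```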

### Model Details

**Developed by:** Neural Bioinformatics Research Group

**Architecture:**

Traditionally, models like `...SequenceClassification` classify sequences based on the hidden representation of the `[CLS]` (starting) token. In our approach, however, the base model is extended with a pooling layer that integrates information across all nucleotides in the sequence, as sketched below. The input is expected to be 80 bp long, matching the training dataset.
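
As a rough illustration of this design (not the actual `BertForBinaryClassificationWithPooling` implementation — the class and layer names below are made up for the sketch), a classification head that mean-pools all token states of a Hugging Face `BertModel` could look like this:

```python
import torch.nn as nn
from transformers import BertModel

class PooledBinaryClassifier(nn.Module):
    """Illustrative sketch only; NOT the actual
    BertForBinaryClassificationWithPooling implementation."""

    def __init__(self, base_model_name: str):
        super().__init__()
        self.bert = BertModel.from_pretrained(base_model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask=None):
        # All token representations, shape (batch, seq_len, hidden)
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        if attention_mask is not None:
            # Average only over real (non-padding) positions
            mask = attention_mask.unsqueeze(-1).float()
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        else:
            pooled = hidden.mean(dim=1)  # pool across the whole sequence
        return self.classifier(pooled)   # (batch, 2) logits
```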

**Tokenizer:** The model uses a 1-mer tokenizer with a shift of 1 (k1s1), i.e. each nucleotide is its own token; the k-mer/shift scheme is illustrated below.
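
To make the k-mer/shift terminology concrete, the following illustrative helper (not part of the `prokbert` API) enumerates k-mers with a sliding window of stride `shift`; with `kmer=1, shift=1`, every nucleotide becomes a separate token:

```python
def kmers(sequence: str, kmer: int = 1, shift: int = 1) -> list[str]:
    """Illustrative only: enumerate k-mers with a window stride of `shift`."""
    return [sequence[i:i + kmer] for i in range(0, len(sequence) - kmer + 1, shift)]

print(kmers("ATGCGT", kmer=1, shift=1))  # ['A', 'T', 'G', 'C', 'G', 'T']
print(kmers("ATGCGT", kmer=3, shift=1))  # ['ATG', 'TGC', 'GCG', 'CGT']
```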

**Parameters:**

| Parameter         | Value                      |
|-------------------|----------------------------|
| Model Size        | 24.9 million parameters    |
| Max. Context Size | 1022 bp                    |
| Training Data     | 206.65 billion nucleotides |
| Layers            | 6                          |
| Attention Heads   | 6                          |
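
If needed, the parameter count can be sanity-checked directly on the model loaded in the usage example above (the 24.9M figure comes from the table):

```python
# `model` is the BertForBinaryClassificationWithPooling loaded earlier
n_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {n_params / 1e6:.1f}M")  # expected ~24.9M
```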

### Intended Use

**Intended Use Cases:** ProkBERT-mini-c-promoter is intended for promoter identification: binary classification of standardized 80 bp prokaryotic sequences as promoter or non-promoter (see Model Application above). See the limitations below before applying it outside this setting.
### Installation of ProkBERT (if needed)

To set up ProkBERT in your environment, install it with the following command (if not already installed):

```python
try:
    import prokbert
    print("ProkBERT is already installed.")
except ImportError:
    # The `!` shell escape works in IPython/Jupyter/Colab;
    # in a plain shell, run `pip install prokbert` instead.
    !pip install prokbert
    print("Installed ProkBERT.")
```

### Training Data and Process

**Overview:** The model was pretrained on a comprehensive dataset of genomic sequences to ensure broad coverage and robust learning.

*Masking performance of the ProkBERT family.*

### Evaluation of Promoter Prediction Tools on the *E. coli* Sigma70 Dataset

| Tool               | Accuracy | MCC      | Sensitivity | Specificity |
|--------------------|----------|----------|-------------|-------------|
| ProkBERT-mini      | **0.87** | **0.74** | 0.90        | 0.85        |
| ProkBERT-mini-c    | **0.87** | 0.73     | 0.88        | 0.85        |
| ProkBERT-mini-long | **0.87** | **0.74** | 0.89        | 0.85        |
| CNNProm            | 0.72     | 0.50     | 0.95        | 0.51        |
| iPro70-FMWin       | 0.76     | 0.53     | 0.84        | 0.69        |
| 70ProPred          | 0.74     | 0.51     | 0.90        | 0.60        |
| iPromoter-2L       | 0.64     | 0.37     | 0.94        | 0.37        |
| Multiply           | 0.50     | 0.05     | 0.81        | 0.23        |
| bTSSfinder         | 0.46     | -0.07    | 0.48        | 0.45        |
| BPROM              | 0.56     | 0.10     | 0.20        | 0.87        |
| IBPP               | 0.50     | -0.03    | 0.26        | 0.71        |
| Promotech          | 0.71     | 0.43     | 0.49        | **0.90**    |
| Sigma70Pred        | 0.66     | 0.42     | 0.95        | 0.41        |
| iPromoter-BnCNN    | 0.55     | 0.27     | **0.99**    | 0.18        |
| MULTiPly           | 0.54     | 0.19     | 0.92        | 0.22        |

*The ProkBERT family models exhibit remarkably consistent performance across the metrics assessed; with respect to accuracy, all three achieve an impressive 0.87.*

| Metric      | ProkBERT-mini | ProkBERT-mini-c | ProkBERT-mini-long | Promotech | Sigma70Pred | iPromoter-BnCNN | MULTiPly |
|-------------|---------------|-----------------|--------------------|-----------|-------------|-----------------|----------|
| Accuracy    | 0.81          | 0.79            | 0.81               | 0.61      | 0.62        | 0.61            | 0.58     |
| F1          | 0.81          | 0.78            | 0.81               | 0.43      | 0.58        | 0.65            | 0.58     |
| MCC         | 0.63          | 0.57            | 0.62               | 0.29      | 0.24        | 0.21            | 0.16     |
| Sensitivity | 0.81          | 0.75            | 0.79               | 0.29      | 0.52        | 0.66            | 0.57     |
| Specificity | 0.82          | 0.82            | 0.83               | 0.93      | 0.71        | 0.55            | 0.59     |

*Promoter prediction performance metrics on a diverse test set. A comparative analysis of various promoter prediction tools, showcasing their performance across key metrics including accuracy, F1 score, MCC, sensitivity, and specificity.*
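
For reference, all of the reported metrics can be computed from binary predictions with standard tooling; a minimal sketch using scikit-learn (with toy labels, purely for illustration) is:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef, confusion_matrix

# Toy labels/predictions for illustration (1 = promoter, 0 = non-promoter)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("F1:         ", f1_score(y_true, y_pred))
print("MCC:        ", matthews_corrcoef(y_true, y_pred))
print("Sensitivity:", tp / (tp + fn))  # true positive rate
print("Specificity:", tn / (tn + fp))  # true negative rate
```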

### Ethical Considerations and Limitations

As with all models in the bioinformatics domain, ProkBERT-mini-c-promoter should be used responsibly. Testing and evaluation have been conducted within specific genomic contexts, and the model's outputs in other scenarios are not guaranteed. Users should exercise caution and perform additional testing as necessary for their specific use cases.

### Reporting Issues

Please report any issues with the model or its outputs to the Neural Bioinformatics Research Group through the following channels:

- **Model issues:** [GitHub repository](https://github.com/nbrg-ppcu/prokbert)
- **Feedback and inquiries:** [[email protected]](mailto:[email protected])

## Reference

If you use ProkBERT in your research, please cite the following paper:

```bibtex
@ARTICLE{10.3389/fmicb.2023.1331233,
    AUTHOR={Ligeti, Balázs and Szepesi-Nagy, István and Bodnár, Babett and Ligeti-Nagy, Noémi and Juhász, János},
    TITLE={ProkBERT family: genomic language models for microbiome applications},
    JOURNAL={Frontiers in Microbiology},
    VOLUME={14},
    YEAR={2024},
    URL={https://www.frontiersin.org/articles/10.3389/fmicb.2023.1331233},
    DOI={10.3389/fmicb.2023.1331233},
    ISSN={1664-302X},
    ABSTRACT={...}
}
```