# gLM2 LoRA adapter for TATA promoter recognition

This model demonstrates the use of gLM2_150M embeddings for downstream classification. It is fine-tuned with LoRA and achieves an F1 score of 98.11% on the TATA promoter task from the Nucleotide Transformer benchmarks.
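For context, a PEFT LoRA fine-tuning setup for this kind of sequence-classification head typically looks like the sketch below. The rank, alpha, dropout, and target module names are illustrative assumptions, not the values used to train this adapter (those are recorded in the adapter's `adapter_config.json`); `base_model` refers to the gLM2 classification model built as in the loading snippet further down.

```python
from peft import LoraConfig, get_peft_model

# Hypothetical LoRA hyperparameters; the actual values for this adapter are
# stored in adapter_config.json in this repository.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,                                  # low-rank update dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    lora_dropout=0.1,                     # (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed names)
)

# `base_model` is the gLM2 sequence-classification model built as in the
# loading snippet below; get_peft_model freezes it and trains only the
# low-rank adapter matrices.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```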
## How to Get Started with the Model

Use the code below to load the model for inference:
```python
import torch
from peft import PeftModel
from transformers import AutoConfig, AutoModel, AutoModelForSequenceClassification

glm2 = "tattabio/gLM2_150M"
adapter = "alejandralopezsosa/gLM2_150M-promoter_tata-lora"

load_kwargs = {
    "trust_remote_code": True,
    "torch_dtype": torch.bfloat16,
}

# Build the classification model from the adapter's config, load the
# pretrained gLM2 backbone, then attach the LoRA weights.
config = AutoConfig.from_pretrained(adapter, **load_kwargs)
base_model = AutoModelForSequenceClassification.from_config(config, **load_kwargs)
base_model.glm2 = AutoModel.from_pretrained(glm2, **load_kwargs)
model = PeftModel.from_pretrained(base_model, adapter)
```
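Once the adapter is loaded, a forward pass might look like the sketch below. The tokenizer call, the example sequence, and the label mapping (1 = TATA promoter) are assumptions rather than documented behaviour of this adapter, and the input formatting should match whatever was used during fine-tuning.

```python
import torch
from transformers import AutoTokenizer

# Hypothetical usage: score a single DNA sequence with the loaded model.
tokenizer = AutoTokenizer.from_pretrained(glm2, trust_remote_code=True)
inputs = tokenizer("ACGTTATAAAAGGCAGGCTA", return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # assumed mapping: 1 = TATA promoter, 0 = not
```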
## Base model

- tattabio/gLM2_150M

## Dataset

- nucleotide_transformer_downstream_tasks_revised

## Evaluation results

- F1 on the nucleotide_transformer_downstream_tasks_revised test set (self-reported): 0.981