Model Name: My BERT Text Classification Model
Model Overview
This model is BERT fine-tuned for sentence-pair classification on the MRPC (Microsoft Research Paraphrase Corpus) dataset: given two sentences, it predicts whether they are paraphrases of each other.
Model Details
- Model Type: BERT (Bidirectional Encoder Representations from Transformers)
- Training Dataset: MRPC
- Number of Labels: 2 (Paraphrase, Not Paraphrase)
- Fine-tuning Epochs: 3
- Training Parameters:
  - Learning rate: 2e-5
  - Batch size: 16
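
The hyperparameters above can be wired into a standard transformers fine-tuning setup. The sketch below is illustrative, not the author's exact training script: the base checkpoint (`bert-base-uncased`) and the output directory are assumptions, and only a single forward pass is shown in place of a full training run.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments)

# Assumed base checkpoint; the card does not state which BERT variant was used.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

# Training parameters as listed in Model Details
training_args = TrainingArguments(
    output_dir="bert-mrpc",          # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

# MRPC examples are sentence pairs; the tokenizer encodes them jointly,
# separated by a [SEP] token.
enc = tokenizer("He said hello.", "He greeted everyone.",
                truncation=True, return_tensors="pt")
logits = model(**enc).logits  # one score per label, shape (1, 2)
```

In a full run these pieces would be handed to `Trainer` together with the tokenized MRPC split (e.g. from the `datasets` library).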
Intended Use
This model is intended for research and educational purposes. It can be used to assess whether two sentences convey the same meaning, for example in paraphrase detection or semantic-similarity filtering.
How to Use
You can use this model with the transformers library:

```python
from transformers import pipeline

# Load the fine-tuned model
model_pipeline = pipeline("text-classification",
                          model="your_org_name/my_model_repo")

# MRPC is a sentence-pair task, so pass both sentences together
result = model_pipeline({"text": "The company posted record profits.",
                         "text_pair": "Profits at the company hit an all-time high."})
print(result)
```
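
If you need the raw scores rather than the pipeline's top label, you can run the model directly and convert its logits to probabilities. This is a minimal sketch: `bert-base-uncased` stands in for the fine-tuned repository, so its freshly initialized classification head produces arbitrary probabilities until you substitute your own checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in checkpoint; substitute your fine-tuned model
# (e.g. "your_org_name/my_model_repo") for meaningful predictions.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

# Encode the two sentences jointly, as done during MRPC fine-tuning
inputs = tokenizer("The cat sat on the mat.",
                   "A cat was sitting on the mat.",
                   truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, 2)

probs = torch.softmax(logits, dim=-1)    # probabilities over the 2 labels
print(probs)
```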