---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---
# LogicSpine/address-large-text-classifier

## Model Description

`LogicSpine/address-large-text-classifier` is a fine-tuned version of the `cross-encoder/nli-roberta-base` model, designed for address classification using zero-shot learning. It lets you classify address- and location-related text against arbitrary candidate labels, without training the model on every possible label.
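Under the hood, an NLI cross-encoder scores each candidate label by turning it into a hypothesis sentence (by default along the lines of "This example is {label}.") and measuring how strongly the input text entails it. The zero-shot pipeline exposes this through its `hypothesis_template` argument; the sketch below is illustrative only (the template wording is an assumption, and the required dependencies are covered under Installation below).

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="LogicSpine/address-large-text-classifier")

# Each candidate label is inserted into the hypothesis template and scored
# against the input text as an entailment problem. The template wording here
# is an illustrative choice, not something prescribed by this model card.
result = classifier(
    "Delhi, India",
    candidate_labels=["Country", "District"],
    hypothesis_template="This text refers to a {}.",
)
print(result["labels"][0], result["scores"][0])
```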
## Model Usage

### Installation

To use this model, install the `transformers` library (and `torch`):

```bash
pip install transformers torch
```
### Loading the Model

You can load and use this model for zero-shot classification with Hugging Face's `pipeline` API:
```python
from transformers import pipeline

# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
                      model="LogicSpine/address-large-text-classifier")

# Define the input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]

# Perform classification
result = classifier(text, candidate_labels)

# Print the classification result
print(result)
```
### Example Output

```
{'labels': ['Country',
            'District',
            'Academy',
            'College',
            'Department',
            'Laboratory'],
 'scores': [0.19237062335014343,
            0.1802321970462799,
            0.16583585739135742,
            0.16354037821292877,
            0.1526614874601364,
            0.14535939693450928],
 'sequence': 'Delhi, India'}
```
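The returned labels are sorted by score, so the top prediction can be read off directly. When the candidate categories are not mutually exclusive, the pipeline also accepts `multi_label=True`, which scores each label independently (the scores then no longer sum to 1). A short continuation of the example above (variable names carry over from that snippet):

```python
# Top prediction from the single-label result above
top_label, top_score = result["labels"][0], result["scores"][0]
print(f"{top_label}: {top_score:.3f}")

# Score each candidate label independently of the others
multi = classifier(text, candidate_labels, multi_label=True)
print(dict(zip(multi["labels"], multi["scores"])))
```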
## Validation Metrics

- loss: 1.3794080018997192
- f1_macro: 0.21842933805832918
- f1_micro: 0.4551574223406493
- f1_weighted: 0.306703002026862
- precision_macro: 0.19546905037281545
- precision_micro: 0.4551574223406493
- precision_weighted: 0.2510467302490216
- recall_macro: 0.2811753463927377
- recall_micro: 0.4551574223406493
- recall_weighted: 0.4551574223406493
- accuracy: 0.4551574223406493
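The evaluation split behind these numbers is not described here. For reference, metrics of this form (macro/micro/weighted F1 and accuracy) can be computed with scikit-learn from lists of reference and predicted labels; the arrays below are placeholders to illustrate the definitions, not the actual evaluation data.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder labels purely to show how the metrics above are defined;
# this is not the data behind the reported numbers.
y_true = ["Country", "District", "College", "Country"]
y_pred = ["Country", "Academy", "College", "District"]

print("accuracy:   ", accuracy_score(y_true, y_pred))
print("f1_macro:   ", f1_score(y_true, y_pred, average="macro"))
print("f1_micro:   ", f1_score(y_true, y_pred, average="micro"))
print("f1_weighted:", f1_score(y_true, y_pred, average="weighted"))
```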
## Colab Notebook

Check out the example notebook on Google Colab.