---
tags:
  - text-classification
base_model: cross-encoder/nli-roberta-base
widget:
  - text: I love AutoTrain
license: mit
language:
  - en
metrics:
  - accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---

# LogicSpine/address-base-text-classifier

## Model Description

`LogicSpine/address-base-text-classifier` is a fine-tuned version of [`cross-encoder/nli-roberta-base`](https://huggingface.co/cross-encoder/nli-roberta-base), designed for address classification tasks in a zero-shot setting. It lets you classify text related to addresses and locations against arbitrary candidate labels, without training the model on every possible label.

## Model Usage

### Installation

To use this model, install the `transformers` library along with `torch`:

```bash
pip install transformers torch
```

### Loading the Model

You can load and use this model for zero-shot classification with Hugging Face's `pipeline` API:

```python
from transformers import pipeline

# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
                      model="LogicSpine/address-base-text-classifier")

# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]

# Perform classification
result = classifier(text, candidate_labels)

# Print the classification result
print(result)
```

### Example Output

```python
{'labels': ['Country',
            'District',
            'Academy',
            'College',
            'Department',
            'Laboratory'],
 'scores': [0.19237062335014343,
            0.1802321970462799,
            0.16583585739135742,
            0.16354037821292877,
            0.1526614874601364,
            0.14535939693450928],
 'sequence': 'Delhi, India'}
```
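
The `labels` list is sorted by descending score, so the top prediction is always the first entry. The sketch below also shows `multi_label=True` and `hypothesis_template`, which are standard zero-shot pipeline arguments rather than anything specific to this model; the template string is a hypothetical example you should tune for your own labels:

```python
# Top prediction: labels come back sorted by descending score
top_label, top_score = result["labels"][0], result["scores"][0]
print(f"Predicted: {top_label} ({top_score:.3f})")

# Score each label independently instead of normalizing across all
# candidates (useful when more than one label may apply)
multi = classifier(
    text,
    candidate_labels,
    multi_label=True,
    hypothesis_template="This text is about a {}.",  # hypothetical template, not prescribed by this model
)
print(multi["labels"][0], multi["scores"][0])
```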

## Validation Metrics

| Metric             | Value               |
|--------------------|---------------------|
| loss               | 0.28241145610809326 |
| f1_macro           | 0.8093855588593053  |
| f1_micro           | 0.9515418502202643  |
| f1_weighted        | 0.949198754683482   |
| precision_macro    | 0.8090277777777778  |
| precision_micro    | 0.9515418502202643  |
| precision_weighted | 0.9473201174743024  |
| recall_macro       | 0.8100845864661653  |
| recall_micro       | 0.9515418502202643  |
| recall_weighted    | 0.9515418502202643  |
| accuracy           | 0.9515418502202643  |
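
The macro/micro/weighted variants correspond to scikit-learn's `average` argument. A minimal sketch of how such numbers are computed, using hypothetical predictions rather than this model's actual validation data:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical ground-truth and predicted class indices, for illustration only
y_true = [0, 1, 2, 2, 0, 1]
y_pred = [0, 1, 2, 0, 0, 1]

for avg in ("macro", "micro", "weighted"):
    print(avg,
          f1_score(y_true, y_pred, average=avg),
          precision_score(y_true, y_pred, average=avg),
          recall_score(y_true, y_pred, average=avg))
print("accuracy", accuracy_score(y_true, y_pred))
```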