---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---
# LogicSpine/address-large-text-classifier
## Model Description
`LogicSpine/address-large-text-classifier` is a fine-tuned version of the `cross-encoder/nli-roberta-base` model, specifically designed for address classification tasks using zero-shot learning. It allows you to classify text related to addresses and locations without the need for direct training on every possible label.
## Model Usage
### Installation
To use this model, you need to install the `transformers` library:
```bash
pip install transformers torch
```
### Loading the Model
You can easily load and use this model for zero-shot classification using Hugging Face's pipeline API.
```python
from transformers import pipeline

# Load the zero-shot classification pipeline with the custom model
classifier = pipeline(
    "zero-shot-classification",
    model="LogicSpine/address-large-text-classifier",
)
# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]
# Perform classification
result = classifier(text, candidate_labels)
# Print the classification result
print(result)
```
## Example Output
```
{'labels': ['Country',
'District',
'Academy',
'College',
'Department',
'Laboratory'],
'scores': [0.19237062335014343,
0.1802321970462799,
0.16583585739135742,
0.16354037821292877,
0.1526614874601364,
0.14535939693450928],
'sequence': 'Delhi, India'}
```
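The pipeline returns the candidate labels sorted by descending score, so the first entry is the model's best guess. A minimal sketch of reading the top prediction from a result dict (the values below mirror the example output above and are used here as a stand-in for a live pipeline call):

```python
# Stand-in for the dict returned by the classifier; zero-shot pipelines
# return labels and scores sorted by descending score.
result = {
    "sequence": "Delhi, India",
    "labels": ["Country", "District", "Academy",
               "College", "Department", "Laboratory"],
    "scores": [0.1924, 0.1802, 0.1658, 0.1635, 0.1527, 0.1454],
}

# The first label is the highest-scoring prediction
top_label = result["labels"][0]
top_score = result["scores"][0]
print(f"Predicted: {top_label} ({top_score:.2%})")  # Predicted: Country (19.24%)
```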
## Validation Metrics

| Metric             | Value              |
|--------------------|--------------------|
| loss               | 1.3794080018997192 |
| f1_macro           | 0.21842933805832918 |
| f1_micro           | 0.4551574223406493 |
| f1_weighted        | 0.306703002026862  |
| precision_macro    | 0.19546905037281545 |
| precision_micro    | 0.4551574223406493 |
| precision_weighted | 0.2510467302490216 |
| recall_macro       | 0.2811753463927377 |
| recall_micro       | 0.4551574223406493 |
| recall_weighted    | 0.4551574223406493 |
| accuracy           | 0.4551574223406493 |
## Colab Notebook
Check out [this Google Colab notebook](https://colab.research.google.com/drive/1-I9fm3FsfRaEoMsufLXHKmsxMPJSnpTc?usp=sharing) for a runnable example.