---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: I love AutoTrain 🤗
datasets:
- onevholy/autotrain-data-bert-base-cased-correct-test4format
co2_eq_emissions:
  emissions: 0.2755241883081992
---

# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 50449120549
- CO2 Emissions (in grams): 0.2755

## Validation Metrics

- Loss: 0.014
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- F1: 1.000

## Usage

You can use cURL to access this model:
```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/onevholy/autotrain-bert-base-cased-correct-test4format-50449120549
```
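
The same Inference API endpoint can also be called from Python with the `requests` library. This is a minimal sketch of the equivalent request; `YOUR_API_KEY` is a placeholder for your Hugging Face access token, as in the cURL example:

```python
import requests

# Same endpoint and payload as the cURL example above.
API_URL = "https://api-inference.huggingface.co/models/onevholy/autotrain-bert-base-cased-correct-test4format-50449120549"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder access token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())  # expected: a list of detected entities with labels and scores
```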
Or load the model directly with the 🤗 Transformers Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
# (use_auth_token=True authenticates with your Hugging Face access token).
model = AutoModelForTokenClassification.from_pretrained("onevholy/autotrain-bert-base-cased-correct-test4format-50449120549", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("onevholy/autotrain-bert-base-cased-correct-test4format-50449120549", use_auth_token=True)

# Tokenize an example sentence and run it through the model.
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
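
To turn the raw logits into per-token predictions, take the highest-scoring class for each token and look up its name in the model config. A minimal sketch, assuming the label names AutoTrain stored in `model.config.id2label`:

```python
# Pick the highest-scoring label id for each token and map it to its name.
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[int(i)] for i in predicted_ids]

for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```

Note that this output includes special tokens such as `[CLS]` and `[SEP]`, which you would typically filter out before further processing.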