ReviewBERT
BERT (post-)trained on review corpora to understand sentiment, opinions, and various e-commerce aspects.
BERT-DK_laptop is trained on a 100MB laptop-review corpus from the Electronics/Computers & Accessories/Laptops category.
BERT-PT_* additionally uses SQuAD 1.1.
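For example, the domain-knowledge variant for laptops can be loaded by checkpoint name. A minimal sketch; it assumes the checkpoint is published as activebus/BERT-DK_laptop, following the card's naming scheme:

from transformers import AutoModel, AutoTokenizer

# Hypothetical checkpoint name inferred from the BERT-DK_* naming scheme.
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_laptop")
model = AutoModel.from_pretrained("activebus/BERT-DK_laptop")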
Model Description
The original model is BERT-base-uncased, trained on Wikipedia + BookCorpus. The models here are post-trained on the Amazon Dataset and the Yelp Dataset.
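Post-training here means continuing BERT's self-supervised objectives on in-domain review text. Below is a minimal sketch of the masked-language-model part, assuming a toy in-memory corpus; the paper's full recipe also uses the next-sentence-prediction objective and SQuAD 1.1, which this sketch omits:

from transformers import (AutoTokenizer, BertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Toy sentences standing in for the Amazon/Yelp review corpora.
reviews = ["The battery life of this laptop is amazing.",
           "The keyboard feels mushy and the screen is too dim."]
encodings = tokenizer(reviews, truncation=True, max_length=128)
train_dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

# Randomly mask 15% of tokens, as in standard BERT pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-dk-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()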
Instructions
Loading the post-trained weights is as simple as, e.g.:
import torch
from transformers import AutoModel, AutoTokenizer

# Load the laptop post-trained checkpoint and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_laptop")
model = AutoModel.from_pretrained("activebus/BERT-PT_laptop")
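As a quick usage check (not part of the original card), the loaded encoder can featurize a review sentence; the [CLS] hidden state serves as a sentence vector:

inputs = tokenizer("The screen is sharp but the battery drains fast.",
                   return_tensors="pt")
with torch.no_grad():                          # inference only, no gradients
    outputs = model(**inputs)
cls_vector = outputs.last_hidden_state[:, 0]   # shape: (1, 768) for BERT-base
print(cls_vector.shape)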
Evaluation Results
See our NAACL 2019 paper (cited below) for evaluation results.
Citation
If you find this work useful, please cite as follows:
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}