
Fearao/RoBERTa_based_on_eastmoney_guba_comments

Model description

This model is based on uer/roberta-base-finetuned-dianping-chinese and was fine-tuned on comment data from Eastmoney guba (a Chinese stock discussion forum), using the original tokenizer. Many thanks to the authors of the base model for their help.

How to use

You can use this model directly with a pipeline for text classification:

>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('Fearao/RoBERTa_based_on_eastmoney_guba_comments')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("又跳水了")  # roughly: "It's plunging again"
[{'label': 'negative (stars 1, 2 and 3)', 'score': 0.9989427924156189}]
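The pipeline's label/score output is a softmax over the classifier's two logits, with the most probable class reported. A minimal sketch of that mapping, using made-up logit values (the logits here are illustrative only, not produced by the model; the label names are assumed to follow the base model's convention):

```python
import math

# Hypothetical logits from the 2-class sentiment head: [negative, positive]
logits = [2.5, -1.3]

# Softmax turns raw logits into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Assumed label mapping, matching the pipeline output shown above
id2label = {0: 'negative (stars 1, 2 and 3)', 1: 'positive (stars 4 and 5)'}
label_id = max(range(len(probs)), key=probs.__getitem__)
print({'label': id2label[label_id], 'score': probs[label_id]})
```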

Training data

The model was fine-tuned on a dataset of Eastmoney guba comments: Fearao/guba_eastmoney.

Training procedure

Num examples = 7087
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 2658
Number of trainable parameters = 102269186
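The reported step count is consistent with the other numbers: with a total batch size of 8 and no gradient accumulation, each epoch takes ceil(7087 / 8) = 886 optimizer steps, and 3 epochs give 2658. A quick arithmetic check:

```python
import math

num_examples = 7087
per_device_batch_size = 8
gradient_accumulation_steps = 1
num_epochs = 3

# Effective batch size per optimizer step (single device, no accumulation)
effective_batch = per_device_batch_size * gradient_accumulation_steps
steps_per_epoch = math.ceil(num_examples / effective_batch)
total_steps = steps_per_epoch * num_epochs
print(steps_per_epoch, total_steps)  # 886 2658
```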
