---
license: unknown
datasets:
- anilguven/turkish_product_reviews_sentiment
language:
- tr
metrics:
- accuracy
- f1
- recall
- precision
tags:
- turkish
- product
- electra
- bert
- review
---

### Model Info

This model was fine-tuned for sentiment classification of Turkish product reviews. It was fine-tuned on a product review dataset collected from hepsiburada.com.

### Model Sources

<!-- Provide the basic links for the model. -->

- **Dataset:** https://huggingface.co/datasets/anilguven/turkish_product_reviews_sentiment
- **Paper:** https://ieeexplore.ieee.org/document/9559007
- **Demo-Coding [optional]:** https://github.com/anil1055/Turkish_Product_Review_Analysis_with_Language_Models
- **Finetuned from model [optional]:** https://huggingface.co/dbmdz/electra-base-turkish-cased-discriminator


## How to Get Started with the Model

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="anilguven/electra_tr_turkish_product_reviews")
```

or

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anilguven/electra_tr_turkish_product_reviews")
model = AutoModelForSequenceClassification.from_pretrained("anilguven/electra_tr_turkish_product_reviews")
```
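
A minimal usage sketch built on the pipeline above; the sample review and the printed labels are illustrative assumptions, since the exact label mapping (e.g. `LABEL_0`/`LABEL_1`) depends on the fine-tuning configuration:

```python
from transformers import pipeline

# Load the fine-tuned ELECTRA classifier (weights are downloaded on first use)
pipe = pipeline("text-classification", model="anilguven/electra_tr_turkish_product_reviews")

# Classify a sample Turkish review ("The product arrived quickly, I am very satisfied")
result = pipe("Ürün çok hızlı geldi, çok memnun kaldım.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.98}] - label names depend on the model config
```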

#### Preprocessing 

For Turkish text, apply preprocessing such as stopword removal, stemming, or lemmatization before passing reviews to the model; a minimal example is sketched below.
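
A minimal preprocessing sketch covering only tokenization and stopword removal; the stopword set here is a small illustrative sample, and stemming or lemmatization would require an external Turkish NLP tool:

```python
import re

# Small illustrative Turkish stopword set - replace with a complete list (e.g. from an NLP library)
TURKISH_STOPWORDS = {"ve", "bir", "bu", "da", "de", "için", "ile", "çok", "ama", "ben"}

def preprocess(text: str) -> str:
    """Lowercase, tokenize, and drop stopwords from a Turkish review."""
    # Note: str.lower() does not handle the Turkish dotted/dotless I (İ/ı) specially
    tokens = re.findall(r"\w+", text.lower())
    return " ".join(t for t in tokens if t not in TURKISH_STOPWORDS)

print(preprocess("Ürün çok hızlı geldi ve ben çok memnun kaldım."))
# -> "ürün hızlı geldi memnun kaldım"
```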

### Results

Accuracy: 92.54%

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@INPROCEEDINGS{9559007,
  author={Guven, Zekeriya Anil},
  booktitle={2021 6th International Conference on Computer Science and Engineering (UBMK)},
  title={The Effect of BERT, ELECTRA and ALBERT Language Models on Sentiment Analysis for Turkish Product Reviews},
  year={2021},
  volume={},
  number={},
  pages={629-632},
  keywords={Computer science;Sentiment analysis;Analytical models;Computational modeling;Bit error rate;Time factors;Random forests;Sentiment Analysis;Language Model;Product Review;Machine Learning;E-commerce},
  doi={10.1109/UBMK52708.2021.9559007}}
```


**APA:**

Guven, Z. A. (2021, September). The effect of BERT, ELECTRA and ALBERT language models on sentiment analysis for Turkish product reviews. In 2021 6th International Conference on Computer Science and Engineering (UBMK) (pp. 629-632). IEEE.