merge
README.md CHANGED
@@ -21,13 +21,12 @@ We open source this fine-tuned BERT model to identify critical aspects within us
 
 We have used the [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) as the base model and fine-tuned it on a manually annotated dataset of webstore reviews.
 
-Further details can be found in our AsiaCCS paper - [From User Insights to Actionable Metrics: A User-Focused Evaluation of Privacy-Preserving Browser Extensions](https://doi.org/10.1145/3634737.3657028).
+Further details can be found in our AsiaCCS paper - [`From User Insights to Actionable Metrics: A User-Focused Evaluation of Privacy-Preserving Browser Extensions`](https://doi.org/10.1145/3634737.3657028).
 
-
-We haven't tested its accuracy on user reviews from other categories but are open to discuss the possibility of extrapolating it to other product categories. Feel free to raise issues in the repo or contact the author directly.
+**Note:** We haven't tested its accuracy on user reviews from other categories but are open to discussing the possibility of extrapolating it to other product categories. Feel free to raise issues in the repo or contact the author directly.
 
 ## Intended uses & limitations
-The model has been released
+The model has been released for free use. It has not been trained on any private user data. Please cite the above paper in your work.
 
 ## Evaluation data
 
@@ -38,7 +37,7 @@ It achieves the following results on the evaluation set:
 
 ## Training procedure
 
-The training dataset comprised
+The training dataset comprised 620 reviews and the test dataset had 150 reviews. The following hyperparameters were used during training:
 
 - learning_rate: 2e-05
 - train_batch_size: 16
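For context, the training setup described by the added lines above can be approximated with the Hugging Face `transformers` Trainer. The sketch below is illustrative only, not the authors' script: the CSV file names, column names (`text`, `label`), label count, and epoch count are assumptions, while the base model, learning rate, and train batch size follow the values listed in the diff.

```python
# Minimal fine-tuning sketch for the setup described in the README diff.
# Assumptions: the annotated webstore reviews are in CSV files with "text"
# and "label" columns; num_labels and num_train_epochs are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # assumption; set to the number of annotated aspect classes
)

# Hypothetical file names for the 620-review train / 150-review test split.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    # Tokenize review text; max_length is an illustrative choice.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

# Hyperparameters taken from the model card: learning_rate 2e-05, train_batch_size 16.
args = TrainingArguments(
    output_dir="distilbert-review-aspects",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,  # epoch count not stated in this excerpt; assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```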