---
license: cc-by-nc-4.0
---
[MentalBERT](https://arxiv.org/abs/2110.15621) is a model initialized with RoBERTa-large (`uncased_L-24_H-1024_A-16`) and trained on mental health-related posts collected from Reddit.
We follow the standard pretraining protocols of BERT and RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
We use four NVIDIA Tesla V100 GPUs to train the two language models (MentalBERT and MentalRoBERTa). We set the batch size to 8 per GPU, evaluate every 1,000 steps, and train for 312,000 iterations.
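The pretraining scripts themselves are not included in this repository; the snippet below is only a minimal sketch of a comparable continued-pretraining setup (masked language modeling on Reddit posts) using the Transformers `Trainer`. The corpus file `reddit_posts.txt`, the evaluation split, and every hyperparameter other than the per-GPU batch size, evaluation interval, and step count stated above are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Initialize from the public RoBERTa-large checkpoint.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

# Hypothetical corpus of mental health-related Reddit posts, one post per line.
dataset = load_dataset("text", data_files={"train": "reddit_posts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)
split = tokenized.train_test_split(test_size=0.01)  # held-out split is an assumption

# Standard BERT/RoBERTa-style dynamic masking with 15% mask probability.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mental-roberta-large",
    per_device_train_batch_size=8,    # batch size 8 per GPU, as stated above
    max_steps=312_000,                # 312,000 training iterations
    evaluation_strategy="steps",      # `eval_strategy` in newer Transformers releases
    eval_steps=1_000,                 # evaluate every 1,000 steps
    save_steps=10_000,                # checkpointing interval is an assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    data_collator=collator,
)
trainer.train()
```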
## Usage
Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("AIMH/mental-roberta-large")
model = AutoModel.from_pretrained("AIMH/mental-roberta-large")
```
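For example, the loaded encoder can be used to extract sentence-level features; the input text below is only a placeholder:

```python
import torch

# Encode an example post and take the final-layer hidden states as features.
inputs = tokenizer("I have been feeling down lately.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape: (batch_size, sequence_length, hidden_size); hidden_size is 1024 for the large model.
features = outputs.last_hidden_state
```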
To minimize the risk of distressing or harmful mask predictions, access to this model is gated. To download a gated model, you need to be authenticated.
Learn more about [gated models](https://huggingface.co/docs/hub/models-gated).
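A minimal way to authenticate from Python, assuming you have a Hugging Face user access token with read permission (the `hf_...` value below is a placeholder); running `huggingface-cli login` in a terminal works as well:

```python
from huggingface_hub import login
from transformers import AutoModel

# Log in with a user access token; the string below is a placeholder, not a real credential.
login(token="hf_xxxxxxxxxxxxxxxx")

# Once authenticated, the gated checkpoint can be downloaded as usual.
model = AutoModel.from_pretrained("AIMH/mental-roberta-large")
```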
## Paper
[MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare](https://arxiv.org/abs/2110.15621).
```bibtex
@inproceedings{ji2022mentalbert,
  title     = {{MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare}},
  author    = {Shaoxiong Ji and Tianlin Zhang and Luna Ansari and Jie Fu and Prayag Tiwari and Erik Cambria},
  year      = {2022},
  booktitle = {Proceedings of LREC}
}
```