---
language:
- ru
tags:
- PyTorch
- Transformers
---
# rugpt3large\_mailqa
The model was fine-tuned with a sequence length of 1024 for 516,000 steps on a dataset of otvet.mail.ru questions and answers. The raw dataset can be found [here](https://www.kaggle.com/datasets/atleast6characterss/otvetmailru-full). Beware that the data contains a substantial amount of toxic language, so the answers can be unpredictable.
A Jupyter notebook with an example of how to run inference with this model can be found in the [repository](https://github.com/NeuralPushkin/MailRu_Q-A).
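
For a quick start without the notebook, inference with the Hugging Face `transformers` API might look like the sketch below. The hub id used here is a placeholder (this card does not state the exact repository id), and the generation parameters are illustrative, not the authors' recommended settings:

```python
def strip_prompt(decoded: str, prompt: str) -> str:
    # GPT-style models echo the prompt in their output; keep only the continuation.
    return decoded[len(prompt):].lstrip() if decoded.startswith(prompt) else decoded


def generate_answer(question: str,
                    model_id: str = "your-namespace/rugpt3large_mailqa",  # placeholder hub id
                    max_new_tokens: int = 60) -> str:
    # transformers is assumed installed; imported here so strip_prompt
    # above remains usable without it.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(question, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,          # sampling: answers vary between runs
        top_p=0.95,              # illustrative nucleus-sampling value
        pad_token_id=tokenizer.eos_token_id,
    )
    decoded = tokenizer.decode(output[0], skip_special_tokens=True)
    return strip_prompt(decoded, question)
```

Given the toxic content noted above, consider filtering or moderating generated answers before showing them to users.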