
bert-base-cased-conversational

Conversational BERT (English, cased, 12‑layer, 768‑hidden, 12‑heads, 110M parameters) was trained on the English portions of Twitter, Reddit, DailyDialog[1], OpenSubtitles[2], Debates[3], Blogs[4], and Facebook News Comments. We used this training data to build the vocabulary of English subtokens and took the English cased version of BERT‑base as the initialization for English Conversational BERT.

08.11.2021: uploaded the model with MLM and NSP heads.
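A minimal usage sketch, assuming the Hugging Face transformers library; this snippet is illustrative and not part of the original card. It loads the checkpoint with its MLM and NSP heads and runs a fill-mask example (the input sentence is an arbitrary illustration):

```python
from transformers import AutoTokenizer, BertForPreTraining, pipeline

model_name = "DeepPavlov/bert-base-cased-conversational"

# Load the tokenizer and the model with both MLM and NSP heads.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForPreTraining.from_pretrained(model_name)

# Use the MLM head through the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model=model_name, tokenizer=model_name)
print(fill_mask("I can't believe you [MASK] that!"))
```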

[1]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017.

[2]: P. Lison and J. Tiedemann. OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), 2016.

[3]: Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. Conversational Flow in Oxford-style Debates. In Proceedings of NAACL, 2016.

[4]: J. Schler, M. Koppel, S. Argamon, and J. Pennebaker. Effects of Age and Gender on Blogging. In Proceedings of the 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs, 2006.
