---
language: de
license: mit
datasets:
- wikipedia
- OPUS
- OpenLegalData
---

# German ELECTRA base

Released in October 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (a.k.a. "bert-base-german-cased") and the dbmdz BERT (a.k.a. "bert-base-german-dbmdz-cased"). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base-sized model, we recommend deepset/gbert-base instead.

## Overview

**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA base (discriminator)
**Language:** German

## Performance

```
GermEval18 Coarse: 76.02
GermEval18 Fine:   42.22
GermEval14:        86.02
```
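As a minimal loading sketch (not part of the original card), the discriminator can be used as a plain text encoder through the Hugging Face `transformers` Auto classes; the example sentence and variable names are illustrative.

```python
from transformers import AutoTokenizer, AutoModel

# Both calls fetch the checkpoint from the Hugging Face Hub on first use.
tokenizer = AutoTokenizer.from_pretrained("deepset/gelectra-base")
model = AutoModel.from_pretrained("deepset/gelectra-base")

# Illustrative German input ("The capital of Germany is Berlin."); any text works.
inputs = tokenizer("Die Hauptstadt von Deutschland ist Berlin.", return_tensors="pt")
outputs = model(**inputs)

# The ELECTRA base discriminator yields 768-dimensional contextual token embeddings.
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```

For downstream tasks such as the GermEval benchmarks above, the checkpoint would typically be fine-tuned, e.g. via `AutoModelForSequenceClassification` or `AutoModelForTokenClassification`.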
See also:
- deepset/gbert-base
- deepset/gbert-large
- deepset/gelectra-base
- deepset/gelectra-large
- deepset/gelectra-base-generator
- deepset/gelectra-large-generator

## Authors

- Branden Chan: `branden.chan [at] deepset.ai`
- Stefan Schweter: `stefan [at] schweter.eu`
- Timo Möller: `timo.moeller [at] deepset.ai`

## About us

For more info on Haystack, visit our [GitHub](https://github.com/deepset-ai/haystack) repo and Documentation. We also have a Discord community open to everyone!
[Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)