LeonardPuettmann committed on
Commit c95b3cd
1 Parent(s): 24cdc59

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -13,6 +13,9 @@ widget:
13
 
14
 This DistilBERT model was fine-tuned on 50,000 stock news articles using the Hugging Face adapter from Kern AI refinery. Each article consisted of the headline plus the abstract.
15
 For the fine-tuning, a single NVIDIA K80 GPU was used for about four hours.
16
+
17
+ Join our Discord if you have questions about this model: https://discord.gg/MdZyqSxKbe
18
+
19
 DistilBERT is a smaller, faster, and lighter version of BERT. It was trained by distilling BERT base and has 40% fewer parameters than bert-base-uncased.
20
 It runs 60% faster while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark.
21
 DistilBERT has no token-type embeddings or pooler, and it retains only half of the layers of Google’s BERT.
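
For readers of the updated model card, here is a minimal usage sketch showing how a fine-tuned DistilBERT text classifier like this one could be loaded with the transformers pipeline API. The repository id below is a placeholder assumption and is not taken from this commit.

```python
# Minimal sketch: load a fine-tuned DistilBERT stock-news classifier with the
# transformers pipeline API. The model id is a placeholder, not confirmed here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="LeonardPuettmann/stock-news-distilbert",  # hypothetical repo id
)

# Classify a headline, mirroring the headline-plus-abstract fine-tuning input.
text = "Tech stocks rally as chipmaker beats quarterly earnings estimates."
print(classifier(text))
```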