LeonardPuettmann committed
Commit 24cdc59
1 Parent(s): 21faeae

Update README.md

Files changed (1)
  README.md +2 -2
README.md CHANGED
@@ -14,8 +14,8 @@ widget:
  This destilbert model was fine-tuned on 50.000 stock news articles using the HuggingFace adapter from Kern AI refinery. The articles consisted of the headlines plus abstract of the article.
  For the finetuning, a single NVidia K80 was used for about four hours.
  DistilBERT is a smaller, faster and lighter version of BERT. It was trained by distilling BERT base and has 40% less parameters than bert-base-uncased.
- It runs 60% faster while preserving over 95% of BERT’s performances as measured on the GLUE language understanding benchmark 1.
- DistilBERT does not have token-type embeddings, pooler and retains only half of the layers from Google’s BERT 2
+ It runs 60% faster while preserving over 95% of BERT’s performances as measured on the GLUE language understanding benchmark.
+ DistilBERT does not have token-type embeddings, pooler and retains only half of the layers from Google’s BERT.
  ## Features

  - The model can handle various text classification tasks, especially when it comes to stock and finance news sentiment classification.
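
Not part of the commit above, but as a quick illustration of the use case the README describes: a minimal sketch of loading the fine-tuned model for stock news sentiment classification with the Hugging Face transformers pipeline. The model ID below is a placeholder for this repository's actual Hub ID, and the example label names depend on the model's configuration.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sentiment model from the Hugging Face Hub.
# "LeonardPuettmann/stock-news-distilbert" is a placeholder -- substitute the
# actual model ID of this repository.
classifier = pipeline("text-classification", model="LeonardPuettmann/stock-news-distilbert")

# Score a headline plus abstract, mirroring the training data format.
news = (
    "Tech stocks rally as chipmaker beats earnings estimates. "
    "The company reported record quarterly revenue, lifting the broader index."
)
print(classifier(news))  # e.g. [{'label': 'positive', 'score': 0.98}] -- labels depend on the model config
```

The pipeline wraps tokenization and model inference in a single call, so the headline-plus-abstract string can be passed in directly.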