Tags: Zero-Shot Classification · Transformers · PyTorch · Safetensors · bert · text-classification · Inference Endpoints
saattrupdan committed
Commit e016ed8 (1 parent: 35f28f8)

Update README.md

Files changed (1): README.md (+4, -0)
README.md CHANGED
@@ -38,6 +38,8 @@ We have released three models for Scandinavian NLI, of different sizes:
 - alexandrainst/scandi-nli-base (this)
 - [alexandrainst/scandi-nli-small](https://huggingface.co/alexandrainst/scandi-nli-small)
 
+A demo of the large model can be found in [this Hugging Face Space](https://huggingface.co/spaces/alexandrainst/zero-shot-classification) - check it out!
+
 The performance and model size of each of them can be found in the Performance section below.
 
 
@@ -146,6 +148,8 @@ The training split of DanFEVER is generated using [this gist](https://gist.githu
 
 The three languages are sampled equally during training, and they're validated on validation splits of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf#page=439) and machine translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) for Swedish and Norwegian Bokmål, sampled equally.
 
+The full training logs can be found in [this Weights and Biases report](https://wandb.ai/saattrupdan/huggingface/reports/ScandiNLI--VmlldzozMDQyOTk1?accessToken=r9crgxqvvigy2hatdjeobzwipz7f3id5vqg8ooksljhfw6wl0hv1b05asypsfj9v).
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
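The model card this commit updates describes an NLI model used for zero-shot classification: each candidate label is rewritten as a hypothesis sentence and scored for entailment against the input text. The sketch below illustrates that recipe in plain Python under stated assumptions; the word-overlap scorer is a toy stand-in for the entailment probabilities a model such as alexandrainst/scandi-nli-base would produce, and the function names and hypothesis template are illustrative, not from the model card.

```python
def zero_shot_classify(text, candidate_labels, entailment_score,
                       template="this text is about {}"):
    """Rank candidate labels by how strongly the text entails each
    label's hypothesis sentence (the core zero-shot NLI recipe)."""
    scores = {
        label: entailment_score(text, template.format(label))
        for label in candidate_labels
    }
    # The label whose hypothesis is most strongly entailed wins.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked, scores


def toy_score(premise, hypothesis):
    """Toy stand-in for an NLI model: fraction of hypothesis words
    that also appear in the premise (illustration only)."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)


ranked, scores = zero_shot_classify(
    "A dramatic sport match ended in a penalty shootout",
    ["sport", "politics", "weather"],
    toy_score,
)
print(ranked[0])  # top-ranked label
```

In a real setup, `entailment_score` would be replaced by a call to the NLI model's entailment probability, which is what the demo Space linked in the diff does under the hood.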