Update README.md
README.md CHANGED

@@ -4,7 +4,7 @@ emoji: π
 colorFrom: purple
 colorTo: pink
 sdk: gradio
-sdk_version: 4.
+sdk_version: 4.41.0
 app_file: app.py
 pinned: false
 license: apache-2.0
@@ -12,11 +12,13 @@ tags:
 - evaluate
 - metric
 description: >-
-  Perplexity metric implemented by d-Matrix.
-
-
-  Note that this metric is intended for Causal Language
-
+  Perplexity metric implemented by d-Matrix. Perplexity (PPL) is one of the most
+  common metrics for evaluating language models. It is defined as the
+  exponentiated average negative log-likelihood of a sequence, calculated with
+  exponent base `e`. Note that this metric is intended for Causal Language
+  Models; the perplexity calculation is only correct if the model uses Cross-Entropy
+  Loss. For more information, see
+  https://huggingface.co/docs/transformers/perplexity
 ---
 
 # Metric Card for Perplexity
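The definition added in this commit's description (perplexity as the exponentiated average negative log-likelihood of a sequence, base `e`) can be sketched in a few lines. This is a plain-Python illustration of the formula only, not the d-Matrix implementation; the function name and example values are made up for demonstration:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood (base e).

    token_log_probs: per-token natural-log probabilities assigned by
    the model to the observed sequence.
    """
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to each of 4 tokens is,
# on average, as uncertain as a uniform choice over 4 outcomes,
# so its perplexity is 4 (up to float rounding).
log_probs = [math.log(0.25)] * 4
print(perplexity(log_probs))
```

This also shows why the description ties perplexity to cross-entropy loss: the mean negative log-likelihood computed above is exactly the cross-entropy loss on the sequence, so `perplexity = exp(loss)` only holds when the model's loss is cross-entropy.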