reflect updates in readme
README.md
CHANGED
## Metric Description

This metric evaluates how good a generated log (file) is, given a reference.

The metric checks whether the predicted log contains the correct number of timestamps, whether those timestamps are monotonically increasing, and whether they are consistent in their format.
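The three checks above can be sketched roughly as follows. This is a hypothetical illustration only: the regex pattern, the `timestamp_checks` helper, and the returned fields are assumptions for the sketch, not the metric's actual implementation.

```python
import re

# Assumed timestamp shape: ISO-like date with an optional HH:MM time part.
# The real metric may recognize other formats.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}(?: \d{2}:\d{2})?")

def timestamp_checks(prediction: str, reference: str) -> dict:
    """Illustrative version of the metric's three timestamp checks."""
    pred_ts = TIMESTAMP.findall(prediction)
    ref_ts = TIMESTAMP.findall(reference)
    return {
        # 1. correct number of timestamps (compared against the reference)
        "count_ok": len(pred_ts) == len(ref_ts),
        # 2. monotonically increasing; ISO-like strings compare lexicographically
        "monotonic": all(a <= b for a, b in zip(pred_ts, pred_ts[1:])),
        # 3. consistent format: a crude proxy, all timestamps share one shape
        "consistent": len({len(t) for t in pred_ts}) <= 1,
    }

print(timestamp_checks(
    "2024-01-12 11:23 foo \n 2024-01-12 11:24 bar",
    "2024-02-14 11:00 baz \n 2024-02-14 11:05 qux",
))
# → {'count_ok': True, 'monotonic': True, 'consistent': True}
```

A prediction with no timestamps at all would fail the count check against a timestamped reference, which is what drives the 0.0 score in the second usage example below.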
## How to Use

The metric can be used by simply passing the predicted log and the reference log as strings.

Example with timestamps that are of the correct number, consistent in format, and monotonically increasing (-> timestamp score of 1.0):
```
>>> import evaluate
>>> predictions = ["2024-01-12 11:23 It's over, Anakin, I have the high ground \n 2024-01-12 11:24 You underestimate my power!"]
>>> references = ["2024-02-14 Hello there! \n 2024-02-14 General Kenobi! You're a bold one, aren't you?"]
>>> logmetric = evaluate.load("svenwey/logmetric")
>>> results = logmetric.compute(predictions=predictions,
...                             references=references)
>>> print(results["score"])
1.0
```

Example with a timestamp missing from the prediction:

```
>>> import evaluate
>>> predictions = ["You were my brother, Anakin"]
>>> references = ["2024-01-12 You were my brother, Anakin"]
>>> logmetric = evaluate.load("svenwey/logmetric")
>>> results = logmetric.compute(predictions=predictions,
...                             references=references)
>>> print(results["score"])
0.0
```