---
title: FBeta_Score
datasets:
-  
tags:
- evaluate
- metric
description: "Calculate FBeta_Score"
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
---

# Metric Card for FBeta_Score


## Metric Description
Computes the F-beta score.
The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0.
The `beta` parameter determines the weight of recall in the combined score: `beta < 1` lends more weight to precision, while `beta > 1` favors recall (`beta -> 0` considers only precision, `beta -> +inf` only recall).
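
For reference, the description above corresponds to the standard F-beta formula:

```latex
F_\beta = (1 + \beta^2) \cdot \frac{\text{precision} \cdot \text{recall}}{(\beta^2 \cdot \text{precision}) + \text{recall}}
```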

## How to Use
```python
import evaluate

f_beta = evaluate.load("leslyarun/f_beta")
results = f_beta.compute(references=[0, 1], predictions=[0, 1], beta=0.5)
print(results)
# {'f_beta_score': 1.0}
```
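
The `beta` argument can be varied to trade precision against recall. The snippet below is a small sketch reusing the same call pattern with made-up labels (the `f_beta_score` output key is assumed to match the example above); `beta=2.0` weights recall more heavily, while `beta=0.5` weights precision more heavily:

```python
import evaluate

f_beta = evaluate.load("leslyarun/f_beta")

# Made-up labels for illustration: one positive is missed, so recall suffers
# while precision stays perfect.
references = [0, 1, 1, 0]
predictions = [0, 1, 0, 0]

# beta > 1 favors recall, so this score is lower than the beta < 1 score
# for the same predictions (recall < precision here).
recall_leaning = f_beta.compute(references=references, predictions=predictions, beta=2.0)
precision_leaning = f_beta.compute(references=references, predictions=predictions, beta=0.5)
print(recall_leaning, precision_leaning)
```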

## Citation
```bibtex
@article{scikit-learn,
    title={Scikit-learn: Machine Learning in {P}ython},
    author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
            and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
            and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
            Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
    journal={Journal of Machine Learning Research},
    volume={12},
    pages={2825--2830},
    year={2011}
}
```

## Further References
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html#sklearn.metrics.fbeta_score