Fix URLs
- .github/workflows/sync_with_spaces.yml +1 -1
- README.md +26 -10
- app.py +2 -2
.github/workflows/sync_with_spaces.yml
CHANGED

@@ -17,4 +17,4 @@ jobs:
       env:
         HF_TOKEN: ${{ secrets.HF_TOKEN }}
       run: |
-        git push https://lewtun:[email protected]/spaces/autoevaluate/
+        git push https://lewtun:[email protected]/spaces/autoevaluate/model-evaluator main
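For context, the fixed push step sits inside a sync job roughly like the following sketch; everything outside the `env` and `run` blocks (trigger, job name, checkout options) is an assumption, not taken from the diff:

```yaml
# Hypothetical sketch of the sync workflow; only the push step is from the diff.
on:
  push:
    branches: [main]

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # full history so the Space receives all commits
      - name: Push to Hugging Face Spaces
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: |
          git push https://lewtun:[email protected]/spaces/autoevaluate/model-evaluator main
```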
README.md
CHANGED

@@ -8,7 +8,7 @@ sdk_version: 1.10.0
 app_file: app.py
 ---
 
-#
+# Model Evaluator
 
 > Submit evaluation jobs to AutoTrain from the Hugging Face Hub
 
@@ -16,12 +16,28 @@ app_file: app.py
 
 The table below shows which tasks are currently supported for evaluation in the AutoTrain backend:
 
-| Task
-|
-| `binary_classification`
-| `multi_class_classification`
-| `multi_label_classification`
-| `entity_extraction`
-| `extractive_question_answering`
-| `translation`
-| `summarization`
+| Task                               | Supported |
+|:-----------------------------------|:---------:|
+| `binary_classification`            |    ✅     |
+| `multi_class_classification`       |    ✅     |
+| `multi_label_classification`       |    ✅     |
+| `entity_extraction`                |    ✅     |
+| `extractive_question_answering`    |    ✅     |
+| `translation`                      |    ✅     |
+| `summarization`                    |    ✅     |
+| `image_binary_classification`      |    ✅     |
+| `image_multi_class_classification` |    ✅     |
+
+## Installation
+
+To run the application, first clone this repository and install the dependencies as follows:
+
+```
+pip install -r requirements.txt
+```
+
+Then spin up the application by running:
+
+```
+streamlit run app.py
+```
app.py
CHANGED

@@ -161,7 +161,7 @@ is_valid_dataset = http_get(
 if is_valid_dataset["valid"] is False:
     st.error(
         """The dataset you selected is not currently supported. Open a \
-        [discussion](https://huggingface.co/spaces/autoevaluate/
+        [discussion](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions) for support."""
     )
 
 metadata = get_metadata(selected_dataset)
@@ -176,7 +176,7 @@ with st.expander("Advanced configuration"):
         SUPPORTED_TASKS,
         index=SUPPORTED_TASKS.index(metadata[0]["task_id"]) if metadata is not None else 0,
         help="""Don't see your favourite task here? Open a \
-        [discussion](https://huggingface.co/spaces/autoevaluate/
+        [discussion](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions) to request it!""",
     )
     # Select config
     configs = get_dataset_config_names(selected_dataset)
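The app.py change only touches the discussion URL inside the error and help strings. The validation flow around the first hunk can be sketched as a self-contained snippet; here `http_get` is stubbed (the real one calls the AutoTrain backend), `print` stands in for `st.error`, and the "glue" check is an invented placeholder:

```python
# Sketch of the dataset-validity check around the fixed URL (stubs only).
DISCUSSION_URL = "https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions"

def http_get(path, domain, params=None):
    # Stub standing in for the real backend call: pretend only "glue" is supported.
    return {"valid": params.get("dataset") == "glue"}

def check_dataset(selected_dataset):
    # Mirrors the structure of the app.py check: query the backend,
    # and return an error message pointing at the discussions page if invalid.
    is_valid_dataset = http_get(path="/check", domain="api.example.com",
                                params={"dataset": selected_dataset})
    if is_valid_dataset["valid"] is False:
        return (
            "The dataset you selected is not currently supported. "
            f"Open a [discussion]({DISCUSSION_URL}) for support."
        )
    return None

print(check_dataset("imdb"))   # unsupported in this stub: prints the error message
print(check_dataset("glue"))   # supported in this stub: prints None
```

The point of the fix itself is simply that the truncated `autoevaluate/` URL now resolves to the Space's discussions page, so users land somewhere actionable.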