Mihail Yonchev committed
Commit d799cb2 • 1 Parent(s): bfe2440
feat: add FAQ

app.py CHANGED
@@ -309,7 +309,22 @@ with demo:
             ],
             submission_result,
         )
+    with gr.Row():
+        with gr.Accordion("π FAQ", open=False):
+            with gr.Column(min_width=250):
+                gr.Markdown("""
+                #### What does N/A score mean?
+
+                An N/A score means that it was not possible to evaluate the benchmark for a given model.
+
+                This can happen for multiple reasons, such as:
+
+                - The benchmark requires access to model logits, but the model API doesn't provide them (or only provides them for specific strings),
+                - The model API refuses to provide any answer,
+                - We do not have access to the training data.
+
 
+                """)
     with gr.Row():
         with gr.Accordion("π Citation", open=False):
             citation_button = gr.Textbox(
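For reference, here is a minimal, self-contained sketch of the block this commit adds, assuming only that Gradio is installed. It mounts the same FAQ accordion inside a bare gr.Blocks() app; the surrounding leaderboard UI (submission handler, citation accordion) is omitted, and the emoji in the accordion label is replaced by plain text because it did not survive the page encoding.

# Minimal sketch: the FAQ accordion from this commit, isolated in a bare
# Gradio app. Assumes only that gradio is installed; the rest of the
# leaderboard UI (submission form, citation accordion) is omitted.
import textwrap

import gradio as gr

# FAQ text taken verbatim from the commit.
FAQ_MD = textwrap.dedent(
    """
    #### What does N/A score mean?

    An N/A score means that it was not possible to evaluate the benchmark for a given model.

    This can happen for multiple reasons, such as:

    - The benchmark requires access to model logits, but the model API doesn't provide them (or only provides them for specific strings),
    - The model API refuses to provide any answer,
    - We do not have access to the training data.
    """
)

with gr.Blocks() as demo:
    with gr.Row():
        # open=False keeps the FAQ collapsed until the user expands it.
        with gr.Accordion("FAQ", open=False):
            with gr.Column(min_width=250):
                gr.Markdown(FAQ_MD)

if __name__ == "__main__":
    demo.launch()

In this sketch, textwrap.dedent strips the leading indentation explicitly so the heading and bullet list render as Markdown rather than as an indented code block.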