Doesn't work?
It just says "Preparing Space" endlessly? Is there a static version somewhere?
same
This has been happening since at least yesterday, and I'm still waiting now. (2023-05-02)
Sorry! A simple restart fixed it, no idea what the issue was.
For future reference, you can run the leaderboard locally via:
git clone https://huggingface.co/spaces/mteb/leaderboard
pip install gradio huggingface-hub pandas datasets
python leaderboard/app.py
How do you add the results from your model to be displayed?
https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md
I am running the leaderboard locally, and following the steps in the link above doesn't work.
That's odd; it should work even locally. Can you share the model where you have added the metadata? Maybe there is a mistake in the metadata.
Otherwise, there is the option of adding the results via PR here: https://huggingface.co/datasets/mteb/results
The model is also local, not on Hugging Face. Is there no way to see your results locally? I just want to see the average, as it's unclear to me how it's calculated since I see different kinds of metrics.
If you just want to compute the average for MTEB, it is just a regular average across the 56 datasets; you can e.g. use this script: https://github.com/embeddings-benchmark/mteb/pull/858/files
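For illustration, a minimal sketch of that computation (not the linked script), assuming you have already collected each dataset's main score into a dict; the dataset names and values below are made up:

# Plain average of per-dataset main scores, as used for the MTEB average.
scores = {
    "Banking77Classification": 80.2,  # hypothetical example values
    "STSBenchmark": 84.1,
    "SciFact": 65.3,
    # ... one entry per dataset in the benchmark (56 for English MTEB)
}

average = sum(scores.values()) / len(scores)
print(f"Average over {len(scores)} datasets: {average:.2f}")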
To run the leaderboard locally with your own results, you need to:
- Clone https://huggingface.co/datasets/mteb/results
- Put your results in there and update the paths.json file as explained in the results.py file (see the sketch after this list)
- Edit the app.py code of the leaderboard to point to your local clone of the results repo instead
- Add your model configs in the leaderboard yaml files
- Run the leaderboard & your model should show up
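For the paths.json step, here is a rough sketch of what the update could look like. It assumes paths.json maps a model name to the list of its result-file paths relative to the repo root; check results.py in your clone for the exact expected layout. The model name and directories below are hypothetical:

# Register local result files for one model in paths.json (sketch, not official).
import json
from pathlib import Path

results_repo = Path("results")   # your local clone of the mteb/results repo
model_name = "my-local-model"    # hypothetical model name
model_dir = results_repo / "results" / model_name  # assumed repo layout

paths_file = results_repo / "paths.json"
paths = json.loads(paths_file.read_text())

# Add every result JSON that mteb produced for this model,
# with paths relative to the repo root.
paths[model_name] = sorted(
    str(p.relative_to(results_repo)) for p in model_dir.glob("*.json")
)
paths_file.write_text(json.dumps(paths, indent=2))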
@Muennighoff seems like we might want to create a CLI for computing averages across benchmarks. @daniwes, if this is something you would be interested in, feel free to open an issue on GitHub.