Was there a purpose to loading the model in 8-bit, comparing it to 32-bit Llama, and then saying the comparison was inaccurate?

#3 opened by Alignment-Lab-AI

Sorry, did we write anywhere that the comparison was inaccurate?

If you are talking about this:

(screenshot: CleanShot 2023-05-10 at 14.54.00@2x.png)

Then what's missing is the comparison with the base Llama models loaded in 8-bit. I didn't take the time to run it when publishing the models.
If you have insights about that benchmark, open a PR and add them to the benchmark table.
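For context, here is a minimal sketch of how that missing 8-bit baseline could be loaded with transformers and bitsandbytes. The checkpoint id below is illustrative, not necessarily the one used in the table, and it assumes `bitsandbytes` and `accelerate` are installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base Llama checkpoint id; substitute the one being benchmarked.
model_id = "huggyllama/llama-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # quantize weights to int8 via bitsandbytes
    device_map="auto",   # let accelerate place layers on available devices
)
```

Evaluating this 8-bit model on the same tasks as the fine-tuned one would make the table an apples-to-apples comparison.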

Otherwise, have you tried the model?

chainyo changed discussion status to closed
