Model Performance Review
#7 opened by HR1777
I downloaded the model and ran a variety of tests on it. It outperforms Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca and excels at question answering, likely because of its use of the WikiHow dataset. However, like the other models I have tested, it does not handle lengthy inputs well.
Overall, this is a good model, but there is room for improvement. I suspect models need DPO fine-tuning to perform at their best, though I am unsure of the steps required to optimize their performance on lengthy inputs.
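For reference, here is a minimal sketch of how DPO fine-tuning is commonly set up with the TRL library. The model name and dataset are placeholder assumptions, and the exact `DPOTrainer` arguments vary across `trl` versions (older releases take `tokenizer=` instead of `processing_class=`), so treat this only as a rough outline rather than a recipe for this specific model.

```python
# Rough DPO fine-tuning sketch using TRL (argument names vary by trl version).
# Assumes a preference dataset with "prompt", "chosen", and "rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "your-base-model"  # placeholder for the model under discussion
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example preference-pair dataset; swap in your own pairs.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(
    output_dir="dpo-output",
    beta=0.1,                      # strength of the preference constraint
    per_device_train_batch_size=2,
    max_length=1024,               # raise to cover longer inputs
)

trainer = DPOTrainer(
    model=model,                   # a reference copy is created internally
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,    # use tokenizer= on older trl versions
)
trainer.train()
```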
Thank you so much again for developing great models.