The speechless-tools-7b model is fine-tuned from speechless-coding-7b-16k-tora following the guidance of the ToolLlama project, and aims to empower open-source LLMs with the ability to handle thousands of diverse real-world APIs.
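Below is a minimal sketch of loading the model and running a single generation with the Hugging Face `transformers` library. The repository id `uukuguy/speechless-tools-7b` and the example prompt are assumptions; substitute the actual repository and your own tool-use prompt format.

```python
# Minimal sketch: load speechless-tools-7b with transformers and generate one reply.
# The repo id below is an assumption; adjust it to the actual Hugging Face repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-tools-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires `accelerate`; places layers on available devices
    torch_dtype="auto",  # use the checkpoint's native precision
)

# Hypothetical prompt; real tool-use prompts should follow the ToolLlama/ToolBench format.
prompt = "List the steps to call a weather API for tomorrow's forecast in Paris."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```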
#### speechless-tools-7b-dfs vs chatgpt-cot

| Dataset        | Win Rate |
|----------------|----------|
| G1_instruction | 0.465    |
| G1_category    | 0.495    |
| G1_tool        | 0.505    |
| G2_instruction | 0.61     |
| G2_category    | 0.585    |
| G3_instruction | 0.66     |
#### speechless-tools-7b-dfs vs toolllama-dfs

| Dataset        | Win Rate |
|----------------|----------|
| G1_instruction | 0.45     |
| G1_category    | 0.45     |
| G1_tool        | 0.51     |
| G2_instruction | 0.53     |
| G2_category    | 0.575    |
| G3_instruction | 0.46     |