---
license: mit
tags:
  - generated_from_trainer
base_model: Josephgflowers/TinyLlama-Cinder-Tiny-Agent
model-index:
  - name: TinyLlama-Cinder-Agent-v1
    results: []
---

The goal of this model is to build a TinyLlama model that can be used for tool usage, RAG, following system instructions, and general assistant tasks.

This model is a fine-tuned version of Josephgflowers/TinyLlama-Cinder-Tiny-Agent.

Special thanks to https://nationtech.io/ for their generous sponsorship of this model's training.


It builds on Josephgflowers/TinyLlama-3T-Cinder-v1.2 and was fine-tuned on the https://huggingface.co/datasets/Josephgflowers/agent_1 dataset.

## Model description

This model is trained for RAG, summarization, function calling, and tool usage. It was trained off of Cinder, a chatbot designed for conversation about STEM topics, space-adventure roleplay, and storytelling. For its size, this model does well at IFEval (instruction following), and it is great at summarization and RAG. Due to the formatting of the Glaive function-calling dataset, the JSON output is not what I was expecting for doing regular JSON dumps, but it does follow the Glaive standard strictly.
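
For a quick look at loading the model and passing a system instruction, here is a minimal inference sketch using Hugging Face transformers. It is not from the original card: it assumes the tokenizer ships a chat template, so adjust the prompt construction to match the model's training format if it does not.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/TinyLlama-Cinder-Agent-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer concisely."},
    {"role": "user", "content": "Summarize: The James Webb Space Telescope launched in December 2021."},
]
# apply_chat_template formats the conversation with the model's own template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```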


10x the original TinyLlama model's score on GSM8K!


To do this, I started with the usual open math datasets (e.g. Orca Math, all of the MetaMath sets, CAMEL-AI math QA, etc.) and as many reasoning datasets as I could make or find. What really made it go the extra mile was adding TIGER-Lab/WebInstructSub along with all of the RAG and summarization data, so special thanks to TIGER-Lab. I found that as math performance improved, so did the model's ability to extract relevant data in RAG.

See https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-Rag/blob/main/tinyllama_agent_cinder_txtai-rag.py for a usage example with Wikipedia RAG; a minimal sketch of the same pattern follows below.
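
For reference, here is a hedged sketch of that retrieve-then-generate pattern using txtai. The toy passages, prompt format, and names are illustrative assumptions, not taken from the linked script, which uses a Wikipedia index instead.

```python
from txtai.embeddings import Embeddings
from transformers import pipeline

# Index a handful of example passages; the linked script retrieves from Wikipedia instead.
docs = [
    "Mars is the fourth planet from the Sun.",
    "The Moon is Earth's only natural satellite.",
    "Jupiter is the largest planet in the Solar System.",
]
embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2"})
embeddings.index([(i, text, None) for i, text in enumerate(docs)])

generator = pipeline("text-generation", model="Josephgflowers/TinyLlama-Cinder-Agent-v1")

question = "Which planet is the largest?"
# Retrieve the best-matching passage, then stuff it into the model's context.
uid, _score = embeddings.search(question, 1)[0]
context = docs[int(uid)]
# Assumed Zephyr-style chat format; adjust to the model's actual template.
prompt = f"<|system|>\nUse the context to answer.\nContext: {context}</s>\n<|user|>\n{question}</s>\n<|assistant|>\n"
print(generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"])
```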

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 39.17 |
| AI2 Reasoning Challenge (25-Shot) | 34.90 |
| HellaSwag (10-Shot)               | 53.87 |
| MMLU (5-Shot)                     | 26.89 |
| TruthfulQA (0-shot)               | 39.08 |
| Winogrande (5-shot)               | 59.12 |
| GSM8k (5-shot)                    | 21.15 |

## Open LLM Leaderboard v2 Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                |  5.82 |
| IFEval (0-Shot)     | 26.70 |
| BBH (3-Shot)        |  3.80 |
| MATH Lvl 5 (4-Shot) |  0.38 |
| GPQA (0-shot)       |  0.00 |
| MuSR (0-shot)       |  2.23 |
| MMLU-PRO (5-shot)   |  1.79 |