This is not an official SOLAR model from Upstage; it is my attempt at recreating these powerful models using the new Llama-3.
Further Testing Coming Soon
The original SOLAR model used a mix of these datasets:
- c-s-ale/alpaca-gpt4-data (SFT)
- Open-Orca/OpenOrca (SFT)
- in-house generated data utilizing Metamath [2] (SFT, DPO)
- Intel/orca_dpo_pairs (DPO)
- allenai/ultrafeedback_binarized_cleaned (DPO)
I used:
- llm-wizard/alpaca-gpt4-data
- Crystalcareai/slimorca-dedup-alpaca-100k
- meta-math/MetaMathQA
- (DPO Datasets Coming Soon)
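The SFT datasets above (alpaca-gpt4-data and the SlimOrca-derived set) are distributed in Alpaca-style instruction/input/output records. As a hypothetical sketch of how such records are commonly rendered into a single training prompt (the exact template used for this model is not published here, so the wording below is an assumption):

```python
# Hypothetical Alpaca-style prompt template; the template actually used
# for this model's SFT run is not documented in this card.
def format_alpaca(record: dict) -> str:
    """Render one Alpaca-format record into a prompt/response string."""
    if record.get("input"):  # records may carry an optional context field
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

example = {"instruction": "Add 2 and 3.", "input": "", "output": "5"}
prompt = format_alpaca(example)
```

Records with an empty `input` field fall through to the shorter template, which is the usual convention for this format.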
More Info:
- Developed by: cookinai
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
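A back-of-the-envelope estimate of why starting from the 4-bit bitsandbytes base model helps on consumer GPUs (the parameter count below is rounded; the real Llama-3-8B count is slightly above 8B, and activations, optimizer state, and quantization overhead are ignored):

```python
# Rough weight-memory estimate; 8e9 params is an approximation for
# Llama-3-8B, used only to illustrate the fp16 vs. 4-bit gap.
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Memory needed to hold the weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N = 8e9
fp16_gb = weight_memory_gb(N, 16)  # ~16 GB for fp16 weights
nf4_gb = weight_memory_gb(N, 4)    # ~4 GB for 4-bit base weights
```

That roughly 4x reduction in base-weight memory is what lets a QLoRA-style fine-tune of an 8B model fit on a single 24 GB (or smaller) GPU.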
Special thanks to Upstage's SOLAR project for the inspiration behind this model.
Model tree for cookinai/Llama-3-SOLAR-v0.1:
- Base model: meta-llama/Meta-Llama-3-8B
- Quantized: unsloth/llama-3-8b-bnb-4bit