
LocutusqueXFelladrin-TinyMistral248M-Instruct

This model was created by merging Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 using mergekit. After the two models were merged, the result was further trained on ~20,000 examples from the Locutusque/inst_mix_v2_top_100k dataset at a low learning rate to further normalize the weights. The following is the YAML config used for the merge:

models:
  - model: Felladrin/TinyMistral-248M-SFT-v4
    parameters:
      weight: 0.5
  - model: Locutusque/TinyMistral-248M-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
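
The merge itself can be reproduced with the mergekit CLI, e.g. mergekit-yaml config.yml ./merged-model. For readers unfamiliar with mergekit's linear method, the sketch below illustrates the underlying operation: a normalized weighted average of each pair of corresponding parameter tensors, so the 0.5/1.0 weights above act as roughly 1/3 and 2/3 (assuming mergekit's default weight normalization). The tensors here are placeholders; this is an illustration of the math, not mergekit's actual implementation.

import torch

def linear_merge(tensors, weights):
    # Normalize the weights so they sum to 1, then take the weighted average.
    total = sum(weights)
    return sum((w / total) * t for w, t in zip(weights, tensors))

# Stand-ins for one weight matrix from each parent model
# (the real models have identical architectures, so shapes match).
felladrin_param = torch.randn(4, 4)   # Felladrin/TinyMistral-248M-SFT-v4
locutusque_param = torch.randn(4, 4)  # Locutusque/TinyMistral-248M-Instruct

# With weights 0.5 and 1.0, the merged tensor is ~0.33 * A + ~0.67 * B.
merged_param = linear_merge([felladrin_param, locutusque_param], [0.5, 1.0])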

The resulting model combines the best of both worlds: the coding and reasoning abilities of Locutusque/TinyMistral-248M-Instruct and the low hallucination rate and strong instruction-following of Felladrin/TinyMistral-248M-SFT-v4. The result performs remarkably well for a model of its size.
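
A minimal usage sketch with the Hugging Face transformers library is shown below. The <|USER|>/<|ASSISTANT|> prompt format is an assumption carried over from the parent instruct model, and the sampling parameters are arbitrary; adjust both as needed.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed prompt format, borrowed from the parent instruct model's card.
prompt = "<|USER|> Write one sentence describing the ocean. <|ASSISTANT|> "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))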

Evaluation

Evaluation results for this model can be found on the Open LLM Leaderboard.

