ATMa

Asymmetrically Tuned Matrix

This model is a very mid finetune of [microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct).

Layers 1 through 15 were first finetuned on one private dataset. A LoRA was then trained on a different but similar (and larger) dataset and applied to the entire model at a 1:4 scaling factor.
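For illustration, here is a minimal sketch of that two-stage recipe using plain `transformers` and PEFT rather than the actual qlora-pipe scripts included in this repo. The rank (64), the target modules, and the reading of "1:4" as `lora_alpha / r = 0.25` are assumptions for the sketch, not values taken from the real training runs.

```python
# A minimal sketch of the two-stage recipe, assuming vanilla transformers/PEFT.
# The real runs used qlora-pipe; see the scripts in this repo for the actual setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-128k-instruct", torch_dtype="auto"
)

# Stage 1: full finetuning restricted to decoder layers 1-15.
# Freeze everything, then unfreeze only the target layers.
for param in model.parameters():
    param.requires_grad = False
for idx, layer in enumerate(model.model.layers):
    if 1 <= idx <= 15:
        for param in layer.parameters():
            param.requires_grad = True
# ...train on the first (private) dataset here...

# Stage 2: a LoRA over the entire model. Reading the 1:4 scaling factor as
# lora_alpha / r = 0.25 (rank 64 is a placeholder, not the real value).
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,  # 16 / 64 = 1:4 effective scaling on the LoRA delta
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ...train on the second, larger dataset here...

# Bake the scaled LoRA delta into the base weights.
model = model.merge_and_unload()
```

Expressing the 1:4 factor through `lora_alpha / r` keeps the merge a single `merge_and_unload()` call; an alternative reading (merging a separately trained adapter at weight 0.25) would produce the same scaled delta at merge time.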

The results are mixed, and it's hard to find a good use case for this model.

All of the original scripts and code have been included in this repo.

Trained using qlora-pipe.
