Update the model type to make it compatible with mlx-lm's model mapping.
Once the model type is updated, the model should be ready to be ported into mlx-lm and to be LoRA fine-tuned, gate layers included.
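For context, mlx-lm picks its model implementation from the `model_type` field in `config.json`. A minimal sketch of that dispatch, assuming a module-per-type layout with a small alias table (the table entries here are illustrative, not mlx-lm's actual mapping):

```python
# Simplified sketch (assumption): mlx-lm resolves config.json's
# "model_type" to a module under mlx_lm.models, with a remapping
# table for aliases. The entries below are illustrative only.

def resolve_model_module(model_type: str) -> str:
    # Hypothetical alias table: an unrecognized type like "phi-msft"
    # would fall through unchanged and fail to import in mlx-lm,
    # which is why the model type needs updating.
    remapping = {"mistral": "llama"}
    return "mlx_lm.models." + remapping.get(model_type, model_type)

print(resolve_model_module("phi"))      # mlx_lm.models.phi
print(resolve_model_module("mistral"))  # mlx_lm.models.llama
```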
Hello, I'm not sure it's that easy. "phi-msft" is also used in configuration_phi.py, for example. Have you tested it?
I haven't tested it, but the model type "phi-msft" is for the Phi series of models (and it has since been changed to "phi" in the official "microsoft/phi-2" repository), not for merged models anyway. Since this model's configuration uses auto mapping, it is loaded as a custom model and the specific model type doesn't really matter. I quickly checked the transformers code, and for natively supported models it seems to use config.architectures to load the model class.
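To illustrate the point about auto mapping, here is a pared-down `config.json` (the class and module names are hypothetical): when `auto_map` is present and `trust_remote_code=True`, transformers loads the custom class from the repo's own modeling file, so `model_type` is not what selects the class.

```python
import json

# A pared-down config.json for a remote-code model (values illustrative).
config = json.loads("""
{
  "model_type": "phi-msft",
  "architectures": ["PhiForCausalLM"],
  "auto_map": {"AutoModelForCausalLM": "modeling_phi.PhiForCausalLM"}
}
""")

# With "auto_map" present, the custom class from the repo's modeling
# file is used; the "model_type" string is not consulted to pick the
# class, so changing it should not break remote-code loading.
loads_custom_code = "auto_map" in config
print(loads_custom_code)  # True
```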
Happy to merge it if you can test that this change doesn't break non-mlx configurations.
@mlabonne Sorry for the late reply. I just ran a local test and it worked fine on my machine (4090, load_in_4bit); the model_type doesn't really affect custom models. By the way, since Microsoft updated phi-2 to the HF format, would you mind re-merging the model in HF format? I can help with porting it to mlx. That would make implementing LoRA in MLX easier, since the HF format uses the standard attention layer naming convention.
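On the naming point: the legacy "phi-msft" layout and the HF-format phi layout name their attention projections differently, which is what would simplify LoRA targeting. The parameter names below reflect my understanding of the two checkpoint formats, so treat them as assumptions:

```python
# Assumed attention parameter names in the two checkpoint layouts.
legacy_phi_msft = ["mixer.Wqkv", "mixer.out_proj"]  # fused QKV projection
hf_format_phi = [                                   # split Q/K/V projections
    "self_attn.q_proj",
    "self_attn.k_proj",
    "self_attn.v_proj",
    "self_attn.dense",
]

# With the HF layout, a LoRA config can target the standard per-head
# projection names directly, e.g.:
lora_target_modules = [name.split(".")[-1] for name in hf_format_phi]
print(lora_target_modules)  # ['q_proj', 'k_proj', 'v_proj', 'dense']
```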