---
library_name: transformers
license: mit
language:
  - fr
datasets:
  - jpacifico/French-Alpaca-dataset-Instruct-110K
tags:
  - phi3
  - french-alpaca
  - gguf
  - quantized
  - edgeai
---

# French-Alpaca-Phi-3-GGUF

French-Alpaca is a 3.8B-parameter LLM based on microsoft/Phi-3-mini-4k-instruct,
fine-tuned on the French-Alpaca dataset, which was generated entirely with OpenAI GPT-3.5-turbo.
The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html

This GGUF version in f16 precision can run on a CPU-only device and is compatible with llama.cpp and LM Studio.
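As a minimal sketch of local CPU inference, the snippet below builds a single-turn prompt in the Phi-3 chat format and shows (commented out) how it could be passed to the model through llama-cpp-python. The model filename and generation parameters are placeholders, not taken from this card.

```python
def build_phi3_prompt(user_message: str) -> str:
    """Format a single-turn prompt using the Phi-3 chat template."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"

prompt = build_phi3_prompt("Quelle est la capitale de la France ?")

# With llama-cpp-python installed (pip install llama-cpp-python) and the
# GGUF file downloaded locally, inference would look roughly like this
# (model_path is a hypothetical filename):
# from llama_cpp import Llama
# llm = Llama(model_path="French-Alpaca-Phi-3.f16.gguf", n_ctx=4096)
# out = llm(prompt, max_tokens=128, stop=["<|end|>"])
# print(out["choices"][0]["text"])

print(prompt)
```

LM Studio applies the same chat template automatically when the model is loaded through its UI, so manual prompt formatting is only needed for raw completion APIs.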

## Limitations

The French-Alpaca model family is a quick demonstration that a small LM (< 8B params)
can easily be fine-tuned to specialize in a particular language. The model has no moderation mechanisms.

- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** MIT
- **Finetuned from model:** microsoft/Phi-3-mini-4k-instruct