---
tags:
  - generated_from_trainer
  - code
  - coding
  - phi-2
  - phi2
  - mlx
model-index:
  - name: phi-2-coder
    results: []
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
  - code
thumbnail: https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png
datasets:
  - HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
library_name: transformers
---
![phi-2 coder logo](https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png)

# Phi-2 Coder 👩‍💻

Phi-2 fine-tuned on the CodeAlpaca 20K instructions dataset using QLoRA and the PEFT library.
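
A minimal sketch of that kind of setup is shown below: the base model is loaded in 8-bit with bitsandbytes and wrapped with a LoRA adapter via PEFT. The LoRA rank, alpha, dropout and `target_modules` are illustrative assumptions, not the exact values used to train this checkpoint.

```python
# Sketch (not the original training script): 8-bit base model + LoRA adapter via PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches the 8-bit config listed below
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# r, lora_alpha, lora_dropout and target_modules are assumptions for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```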

## Model description 🧠

[Phi-2](https://huggingface.co/microsoft/phi-2)

Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.

## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples originally used to fine-tune the Code Alpaca model.
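
For reference, the dataset can be loaded straight from the Hub with the `datasets` library; a minimal sketch:

```python
# Minimal sketch: pull the CodeAlpaca 20K instruction dataset from the Hugging Face Hub.
from datasets import load_dataset

code_alpaca = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")
print(code_alpaca)     # split size and column names
print(code_alpaca[0])  # inspect one instruction-following example
```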

## Training procedure

The following bitsandbytes quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):

- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
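
The same values map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch, not the original training script):

```python
# Sketch of the quantization config above expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",              # 4-bit settings are inactive since load_in_4bit=False
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```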

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
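
Expressed as `transformers` `TrainingArguments`, those hyperparameters would look roughly like this (a sketch; the output directory and the evaluation/logging cadence are assumptions, and AdamW stands in for the Adam optimizer listed above):

```python
# Sketch of the listed hyperparameters; effective batch size is 4 * 32 = 128 on a single device.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-coder",        # assumption, not the original path
    learning_rate=2.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    seed=66,
    evaluation_strategy="steps",     # assumption: evaluate every 50 steps, matching the table below
    eval_steps=50,
    logging_steps=50,
)
```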

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7631        | 0.36  | 50   | 0.7174          |
| 0.6735        | 0.71  | 100  | 0.6949          |
| 0.696         | 1.07  | 150  | 0.6893          |
| 0.7861        | 1.42  | 200  | 0.6875          |
| 0.7346        | 1.78  | 250  | 0.6867          |

## HumanEval results 📊

WIP

## Example of usage 👩‍💻

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/phi-2-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)

# device_map="auto" (not device="auto") places the model on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")

    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0])
    # Keep only the text generated after the "Output:" marker
    return output.split("\nOutput:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```

## How to use with MLX

```sh
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git

# Download the model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir phi-2-coder mrm8488/phi-2-coder

# Run the example
python mlx-examples/llms/phi2.py --model-path phi-2-coder --prompt "Design a class for representing a person in Python"
```

## Citation

```bibtex
@misc {manuel_romero_2023,
    author       = { {Manuel Romero} },
    title        = { phi-2-coder (Revision 4ae69ae) },
    year         = 2023,
    url          = { https://huggingface.co/mrm8488/phi-2-coder },
    doi          = { 10.57967/hf/1518 },
    publisher    = { Hugging Face }
}
```