---
base_model: codellama/CodeLlama-70b-Instruct-hf
license: llama2
model_creator: codellama
model_name: CodeLlama 70B Instruct
pipeline_tag: text-generation
quantized_by: Second State Inc.
language:
  - code
tags:
  - llama-2
---

CodeLlama-70b-Instruct-hf-GGUF

Original Model

codellama/CodeLlama-70b-Instruct-hf

Run with LlamaEdge

  • LlamaEdge version: coming soon

  • Prompt template

    • Prompt type: codellama-super-instruct

    • Prompt string

      <s>Source: system\n\n {system_prompt} <step> Source: user\n\n {user_message_1} <step> Source: assistant\n\n {ai_message_1} <step> Source: user\n\n {user_message_2} <step> Source: assistant\nDestination: user\n\n
      
    • Reverse prompt: <step> Source: assistant\nEOT: true

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:CodeLlama-70b-Instruct-hf-Q2_K.gguf llama-api-server.wasm -p codellama-super-instruct -c 1024 --reverse-prompt 'Source: assistant\nEOT: true'
    

    Note that the model only works in non-streaming mode; see the example request after this list.
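
Once the service is up, it applies the codellama-super-instruct template shown above to incoming chat messages. The snippet below is a minimal sketch of a request, assuming the server listens on the default port 8080 and exposes LlamaEdge's OpenAI-compatible /v1/chat/completions endpoint; the model name and message contents are illustrative only.

    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
            "model": "CodeLlama-70b-Instruct-hf",
            "messages": [
              {"role": "system", "content": "You are a helpful coding assistant."},
              {"role": "user", "content": "Write a function that reverses a string in Rust."}
            ]
          }'

Since the model only supports non-streaming responses, leave "stream" unset in the request body (it defaults to false).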

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| CodeLlama-70b-Instruct-hf-Q2_K.gguf | Q2_K | 2 | 25.5 GB | smallest, significant quality loss - not recommended for most purposes |
| CodeLlama-70b-Instruct-hf-Q3_K_L.gguf | Q3_K_L | 3 | 36.1 GB | small, substantial quality loss |
| CodeLlama-70b-Instruct-hf-Q3_K_M.gguf | Q3_K_M | 3 | 33.3 GB | very small, high quality loss |
| CodeLlama-70b-Instruct-hf-Q3_K_S.gguf | Q3_K_S | 3 | 29.9 GB | very small, high quality loss |
| CodeLlama-70b-Instruct-hf-Q5_K_M.gguf | Q5_K_M | 5 | 48.8 GB | large, very low quality loss - recommended |
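
To fetch one of the files above before starting the service, you can download it directly from this repository. The command below is a sketch that assumes the repository is hosted at second-state/CodeLlama-70b-Instruct-hf-GGUF; swap in the file name for the quantization level you want.

    curl -LO https://huggingface.co/second-state/CodeLlama-70b-Instruct-hf-GGUF/resolve/main/CodeLlama-70b-Instruct-hf-Q2_K.gguf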