
GGUF!

This is the GGUF version of Walmart-the-bag/Llama-3-LizardCoder-8B. It contains every available quantization.
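
If you just want to try one of the quants, a minimal sketch using huggingface_hub and llama-cpp-python is below. The repo id and the exact quant filename (Q4_K_M here) are assumptions; check this repository's file list for the names actually published.

```python
# Minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed.
# The repo id and quant filename below are illustrative, not confirmed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Walmart-the-bag/Llama-3-LizardCoder-8B-GGUF",   # hypothetical repo id
    filename="Llama-3-LizardCoder-8B-Q4_K_M.gguf",            # hypothetical quant filename
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```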

Model Card


Llama-3-LizardCoder-8B

This is a merge of six models fine-tuned from Llama 3 8B. It does pretty well on some coding tasks for its parameter size.


Limitations

  • Uncertain accuracy: As a merged model, its responses may not always be accurate. Independently verify outputs before relying on them.
  • Potential for censorship: The model's censorship filtering is inconsistent, so you may still encounter refused or censored code/content.
  • Missing packages: When asked to write code, the model may forget to include a required package or import. Ask explicitly for complete, runnable code with all imports (see the prompting sketch after this list). A future fine-tune is planned to fix this.
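
To work around the missing-package behavior, it helps to say so explicitly in the system prompt. A minimal sketch with llama-cpp-python; the model path is a placeholder, so point it at whichever quant you downloaded.

```python
# Sketch: nudge the model to always emit complete, runnable code with imports.
# The model path is a placeholder for a locally downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-LizardCoder-8B-Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a coding assistant. Always return complete, runnable "
                       "code with every required import and package included.",
        },
        {"role": "user", "content": "Write a script that fetches a URL and prints the status code."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```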

Merge Config

This model was produced with the following merge YAML (a sketch of how to run it with mergekit follows the config).

models:
  - model: rombodawg/Llama-3-8B-Instruct-Coder
    parameters:
      weight: 1.0
  - model: ajibawa-2023/Code-Llama-3-8B
    parameters:
      weight: 0.3
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.5
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
    parameters:
      weight: 0.8
  - model: TheSkullery/llama-3-cat-8b-instruct-v1
    parameters:
      weight: 0.9
  - model: McGill-NLP/Llama-3-8B-Web
    parameters:
      weight: 0.2

merge_method: linear
dtype: bfloat16
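
To reproduce the merge, the YAML above can be fed to mergekit. A minimal sketch, assuming mergekit is installed (pip install mergekit) and the config has been saved as lizardcoder-merge.yml; both the config filename and the output directory are illustrative.

```python
# Sketch: run the linear merge above via mergekit's CLI entry point.
# Filenames and the output path are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "lizardcoder-merge.yml",       # the YAML from this card, saved to disk
        "./Llama-3-LizardCoder-8B",    # output directory for the merged weights
        # add "--cuda" to merge on GPU
    ],
    check=True,
)
```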

License

I don't really care about this, but here it is: Llama 3

Model details

  • Format: GGUF
  • Model size: 8.03B params
  • Architecture: llama

  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit
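
Because the exact quant filenames may differ from the labels above, you can list the repository's files to find the one you want. A minimal sketch with huggingface_hub; the repo id is an assumption.

```python
# Sketch: discover which .gguf quants are actually published in the repo.
# The repo id is illustrative; adjust it to this card's actual repository.
from huggingface_hub import list_repo_files

files = list_repo_files("Walmart-the-bag/Llama-3-LizardCoder-8B-GGUF")
for name in files:
    if name.endswith(".gguf"):
        print(name)
```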
