
TitleOS/RocketZephyr-3b-GGUF

Quantized GGUF model files for RocketZephyr-3b from TitleOS

| Name | Quant method | Size |
| --- | --- | --- |
| rocketzephyr-3b.fp16.gguf | fp16 | 5.59 GB |
| rocketzephyr-3b.q2_k.gguf | q2_k | 1.20 GB |
| rocketzephyr-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| rocketzephyr-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| rocketzephyr-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| rocketzephyr-3b.q6_k.gguf | q6_k | 2.30 GB |
| rocketzephyr-3b.q8_0.gguf | q8_0 | 2.97 GB |
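
As a minimal sketch not taken from the original card, one way to fetch one of these files and run it locally is with huggingface_hub and llama-cpp-python; the choice of the q4_k_m file, the context size, and the prompt are arbitrary:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (the q4_k_m variant is an arbitrary choice).
model_path = hf_hub_download(
    repo_id="afrideva/RocketZephyr-3b-GGUF",
    filename="rocketzephyr-3b.q4_k_m.gguf",
)

# Load the GGUF file and run a short completion; context size and prompt
# wording here are illustrative, not taken from the original card.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What does GGUF quantization trade off?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants (q2_k, q3_k_m) trade accuracy for memory; the fp16 file is the unquantized reference.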

Original Model Card:

RocketZephyr-3B

An attempt at creating a 3B-parameter merged model from StabilityAI's StableLM Zephyr 3B and Pansophic's Rocket-3B.

Repo Description

This repo houses my experimentation with creating a 3-billion-parameter merged model using mergekit.

  • Merged Models: stabilityai/stablelm-zephyr-3b & pansophic/rocket-3B
  • Merge Weights: 1.0 & 0.3
  • Merge Method: Linear (a normalized weighted average of the two models' parameters; see the sketch below)
  • License: I believe this (current) model inherits the Stable Non-Commercial Research Community License Agreement and is therefore licensed under it as well.
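
For illustration only (the actual merge was produced with mergekit, not this script), here is a rough sketch of what a linear merge with weights 1.0 and 0.3 amounts to: a normalized weighted average of corresponding parameters from the two models. The model names come from the list above; the dtype, trust_remote_code flag, and output path are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the two source models named in the card. Depending on your transformers
# version, stablelm checkpoints may require trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-zephyr-3b", torch_dtype=torch.float16, trust_remote_code=True
)
other = AutoModelForCausalLM.from_pretrained(
    "pansophic/rocket-3B", torch_dtype=torch.float16, trust_remote_code=True
)

# Linear merge with the weights from the card (1.0 and 0.3), normalized by the
# weight sum. This assumes both models share identical parameter names/shapes.
w_base, w_other = 1.0, 0.3
other_state = other.state_dict()
merged_state = {
    name: (w_base * p + w_other * other_state[name]) / (w_base + w_other)
    for name, p in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("RocketZephyr-3b-merged")  # hypothetical output path
```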
GGUF model size: 2.8B params · Architecture: stablelm


